Posts tagged ‘API’

September 18, 2014 10:30 am


Qualcomm (NASDAQ: QCOM) has announced the availability of the LTE Broadcast SDK for Qualcomm Snapdragon processors. Developers will be able to access a common API that can be used in all regions around the world.

Designing a More Delightful Wistia Account Dashboard

September 10, 2014 2:20 pm


If you’ve been updating your Wistia settings in the Account Dashboard lately, you might have noticed it got a little facelift. Well, it’s been a while in the making.

When I joined the Wistia design team in February, one of our customers’ nagging frustrations was making simple account updates. It was such a bummer to hear! So, the account dashboard was one of the very first things I took on.

Aarron Walter describes a Maslow-inspired hierarchy of user needs in which a design should be functional, then reliable, then usable, and then finally pleasurable, or, as we Wistians like to say, delightful.

Looking at our old dashboard, I felt that it was functional, reliable, and *mostly* usable. I mean, the settings displayed and updated as they should, but couldn’t we make this experience more usable? Delightful, even?

First, I had to familiarize myself with our customers’ needs and the current design. This was tough in itself: I was redesigning an interface that was very new to me, for customers I was still getting to know. We spent about two months iterating, discussing, and scribbling. Eventually, we arrived at a final design for the entire account dashboard. It was a complete overhaul: totally new, from the styles to the structure to the interactions.

And then… nothing happened. My designs began to collect dust in Adobe Illustrator. We started talking about this overhaul, or “Cleansweep” as we were calling it, in the hushed tones normally reserved for more taboo topics like “people who don’t recycle.”

Looking back, it makes sense. We had created a project that seemed technically insurmountable. When I’d glance over the chasm between the current and ideal designs, the leap seemed enormous. Where would we even begin? Finally, Jeff, who had apparently had enough of our shoegazing, got us back on track. He suggested we snap out of our paralysis by simply taking a step.

### Enter Operation Cleansweep, Phase One

Over several cups of coffee and lots of whiteboard marker, Jeff and I came up with a plan to build and implement Cleansweep in phases. It wasn’t too hard to identify and bucket changes that belonged in the same “phase”—style changes should happen all at once, but a re-organization could wait for later. Totally new functionality? That should be separately scoped and built on its own timeline, instead of holding up the show.

### Building in delight

One thing we refused to sacrifice in this phased approach was creating a delightful experience. Delight goes beyond adding Easter eggs to make people smile (although we do love that, too!). Creating a delightful experience means starting from the users’ perspective, and giving them exactly what they need intentionally and efficiently. It’s an intricate balance between creating expected interactions and surpassing expectations with pleasant surprises.

How do you make an account settings page more delightful? By speeding it up and reducing the amount of time it takes to complete a task.

Phase One introduces a new style paradigm that makes it easier to scan and find the settings you are looking for (because we hope you won’t have to change these settings often).

We added sidebar navigation to speed up clicking between the settings sections. An overview landing page allows you to easily see your most important account information at a glance. A greeting by name commends you for your video wins with some just-for-fun stats—as well as an exploration of how many adjectives we can apply to the word “videos” (hint: refresh your overview page!). And that is just the beginning.

### Moving forward

As a relatively new web designer, I found this whole phase-planning idea a bit foreign. Bringing a pixel-perfect vision to life was what I was trained to do! But as my first-phase design began to fall into place, I realized why this phased approach is such a natural fit for building on the web. Having a web prototype to interact with exposed situations I hadn’t planned for, and it made it much easier to share my vision for things like interactions.

What was perfect yesterday will be in need of work by tomorrow. This dance of staggered refinement keeps us always moving forward, never stagnant. One step is more attainable than a giant leap, allowing for quick iteration and improvements between steps. Besides, rolling out smaller changes incrementally provides an easier transition for users, mitigating the risk of disorienting them. That’s a win-win in my book!

I’m pleased to present this first phase to all our customers. You’ll see that the settings you know and love are where they’ve always been, just in a slightly more intuitive layout and a more delightful look. We’ve made some fun changes under the hood, but I’ll let you discover them (or even better, be blissfully unaware of their positive impact!).

### What’s next?

Phase Two of Cleansweep aims to make the account section even more usable and delightful. We’re working on more intuitive organization, smoother interactions, better billing notifications, expanded API controls, celebrating your Wisti-anniversaries, and applying all of these new styles to the rest of the account section.

We’ve got a few more planks to lay down on this bridge, but we’re a whole step closer to a more usable and delightful dashboard for all.

**How do you approach projects that feel insurmountable at first? What changes would you like to see in your Wistia account dashboard?**

4 Reference Architectures To Optimize Your Ecommerce

September 10, 2014 12:00 pm


“46 percent of ecommerce retailers report difficulty managing their platform, keeping up with market demands and running their underlying infrastructure”*

Retailers know that the online store is a window to the world. And when it comes to ecommerce, the specialists at Rackspace have pretty much seen it all. As the No. 1 hosting provider for the Internet Retailer Top 1,000*, we can help ensure that your online stores are always available and running at peak performance. That’s why our Rackspace Digital application specialists put together four reference architectures that balance price, performance and control for most businesses, to help you make the most of your ecommerce platform.

Cost Effective
A cost-effective configuration is ideal for small to medium business ecommerce platforms that need over 99 percent uptime while keeping costs to a minimum. It offers business-class features like monitoring, managed databases with backup, and a content delivery network (CDN). It also includes future-proof features like cloud load balancing, so you can be prepared for unexpected events.

It relies on a two-tier, cloud-based architecture, in which the catalog display logic and the shopping cart logic are kept together on the same server. It’s also configured so that customer credit card information is stored in a third-party payment gateway rather than in a Rackspace data center, which is often less expensive when transaction volumes are relatively low.

Intermediate
This reference architecture was designed for small to medium businesses and mid-market customers who need the ability to scale and have higher security requirements. It’s ideal for organizations that need enterprise features like VLANs and API access, but also need the flexibility that the cloud provides.

With the Intermediate setup, you get 99.99 percent uptime that relies on a utility-based model. It’s a three-tier, cloud-based architecture, meaning that the catalog display logic and shopping cart logic are kept on different servers. Credit card information is stored in a third-party payment gateway, reducing costs.

Advanced
This configuration is ideal for mid-market and enterprise customers that want to leverage cloud scaling but have specific compliance needs, legacy software, or need the performance of a dedicated environment. A good example of this is a company that needs to run a high-performance database, but would like to connect it to a farm of elastic application servers that run in the cloud.

The Advanced setup offers 99.99 percent uptime, and is built as a three-tier hybrid system that includes both cloud and dedicated servers. The application tier for this architecture is built on cloud servers and it stores both the shopping cart and the checkout logic. And while customer credit card data is stored in third-party applications, transactional data is stored in dedicated servers.

This system is a step beyond the Advanced architecture, and is designed for organizations that need 99.999 percent uptime or better. In addition to a hybrid configuration with dedicated and cloud servers, this offers a high-redundancy solution that is run by Rackspace Critical Application Engineers.

Because of high uptime and security needs, this configuration stores customer credit card information and transaction data in dedicated servers, along with the application tier. The web tier that contains the product catalog logic is also built on dedicated servers, but can still scale by bursting into cloud servers when demand is high.

Ecommerce hosting backed by Fanatical Support®. Rackspace Digital provides specialized hosting for ecommerce and offers expertise to help you find the solution that best fits your needs. We’re dedicated to helping you succeed, so please feel free to reach out to our digital specialists if you have any questions.

* Source: Understanding TCO When Evaluating Ecommerce Platforms, 2012, Forrester Research.
* Source: Internet Retailer’s newsletter “Introducing the Top Vendors to the Top 1,000.”

Akamai Edge 2014: A Look at the Web Security Track

September 8, 2014 3:29 am


This time next month, I’ll be at the Akamai Edge customer conference. It’s a terrific opportunity to meet face-to-face with a lot of our customers and get their feedback on what’s working for them and what we can improve upon. A robust Web Security track of talks is planned, and I’ll be blogging about it.

Before that, I’ll be preparing some advance blog posts to give attendees a preview. This is the first such post — a glimpse at what’s on the schedule. Going forward, I’ll do previews of specific talks.

The security track will run each day of Edge. Here’s a partial list of what’s planned:

Wednesday, Oct. 8:

1:30-2 p.m.

  • Million Browser Botnet – Live Demonstration
  • DDoS Simulation LAB – How To Conduct a Live DDoS Simulation

2:20-3 p.m.

  • Incident Response Panel – From Theory to Reality
  • SSL LAB – What App Owners and Security Professionals Need To Do to Prepare for SSL Evolution

3:10-3:50 p.m.

  • The Evolution of SSL – Improving the Foundations of Internet Security, with Akamai PLXsert Manager David Fernandez
  • Security API LAB – Controlling Security at Akamai’s Edge

4-4:40 p.m.

  • The Growing Importance of Cybersecurity

Thursday, Oct. 9:

1:30-2:10 p.m.

  • Disruptive Trends in Security, with Securosis CTO Adrian Lane
  • SSL LAB – What App Owners and Security Professionals Need To Do to Prepare for SSL Evolution

2:20-3 p.m.

  • Security API LAB – Controlling Security at Akamai’s Edge
  • Security Panel – Towards Security and Operations Harmony

3:10-3:50 p.m.

  • DDoS Simulation LAB – How To Conduct a Live DDoS Simulation
  • Using Client Reputation to Enhance Security, with Akamai Director of Threat Research Ory Segal

Friday, Oct. 10:

9-9:40 a.m.

  • SSL LAB – What App Owners and Security Professionals Need To Do to Prepare for SSL Evolution
  • Bypass Surgery – Akamai’s Heartbleed Response Case Study, with Akamai Chief Security Architect Brian Sniffen

9:50-10:30 a.m.

  • 2014 DDoS Threat Report, with Akamai PLXsert Manager David Fernandez

New Resources APIs for the AWS SDK for Java

August 28, 2014 1:20 pm


We are launching a preview of a new, resource-style API model for the AWS SDK for Java. I will summarize
the preview here, and refer you to the AWS Java Blog for full information!

The new resource-oriented APIs are designed to be easier to understand and simpler to use. They
obviate much of the request-response verbosity present in the existing model and present a
view of AWS that is decidedly object-oriented. Instead of exposing all of the methods of the
service as part of a single class, the resource-style API includes multiple classes, each
of which represents a particular type of resource for the service. Each class includes the
methods needed to interact with the resource and with related resources of other types.
Code written to the new API will generally be shorter, cleaner, and easier to comprehend.

Here is the old-school way to retrieve an AWS Identity and Access Management (IAM) group using the
GetGroup function:

AmazonIdentityManagement iam = new AmazonIdentityManagementClient();

GetGroupRequest getGroupRequest = new GetGroupRequest("NeedNewKeys");
GetGroupResult getGroupResult = iam.getGroup(getGroupRequest);

And here is the new way:

IdentityManagement iam = ServiceBuilder.forService(IdentityManagement.class)
                                       .build(); // assuming the builder chain is completed with build()

Group needNewKeys = iam.getGroup("NeedNewKeys");

The difference between the old and the new APIs becomes even more pronounced when more
complex operations are used. Compare the old-school code for marking an outdated
access key (oldKey) for an IAM user as inactive:

UpdateAccessKeyRequest updateAccessKeyRequest = new UpdateAccessKeyRequest()
    .withUserName(oldKey.getUserName())        // a plausible completion of the
    .withAccessKeyId(oldKey.getAccessKeyId())  // elided builder chain, assuming
    .withStatus(StatusType.Inactive);          // oldKey is an AccessKeyMetadata
iam.updateAccessKey(updateAccessKeyRequest);

With the new, streamlined code, the intent is a lot more obvious. There’s a lot less in the way
of setup code and the method is invoked on the object of interest instead of on the service:

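// A minimal sketch of the resource-style equivalent (assuming the
// AccessKey resource exposes a deactivate() convenience method):
oldKey.deactivate();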

The new API is being launched in preview mode with support for
Amazon Elastic Compute Cloud (EC2), AWS Identity and Access Management (IAM), and Amazon Glacier. We plan to introduce resource APIs for
other services and other AWS SDKs in the future.


PS – To learn more about Resource APIs, read the full post on the AWS Java Development Blog.

The agile side of colocation

August 28, 2014 8:43 am


For companies running distributed applications at scale, colocation remains an essential piece of a high-performance infrastructure. While traditional colocation is often viewed as simply a physical location with power, cooling and networking functionality, today’s colocation services offer increased flexibility and control for your environment.

Let’s take a look at some real-world examples of companies that are using colocation as a core element of their infrastructure to run a distributed app at scale.

Outbrain is the leading content discovery platform on the web, helping companies grow their audience and increase reader engagement through an online content recommendations engine. The company’s data centers are designed to be DR-ready, and operate in active-active mode so everything is always available when you need it.

Outbrain’s continuous deployment process involves pushing around 100 changes per day to their production environment, including code and configuration changes. This agile, controlled process demonstrates how a traditional solution like colocation can be flexible enough to support a truly distributed application at scale.

Watch how Outbrain drives content discovery and engagement.

eXelate is the smart data and technology company that powers smarter digital marketing decisions worldwide. As a real-time data provider, they need to operate as a distributed application to handle large amounts of consumer-generated traffic and transactions on their networks around the world. Their infrastructure has to support dynamic content and data in order to provide meaningful insights for consumers and marketers.

eXelate’s colocation environment includes unique hardware that is outside the realm of normal commodities. The ability to incorporate Fusion-io and data warehousing services like Netezza, as well as make CPU changes and RAM upgrades, helps eXelate support the high number of optimizations required by their application. The company also uses bare-metal cloud to spin up additional instances through the API as needed. This combination of colocation and cloud creates a best-fit infrastructure for eXelate’s data-intensive application.

Watch how eXelate powers smarter digital marketing decisions.

Whether your organization runs continuous deployment or requires the ability to process real-time data, colocation provides the flexibility to create a best-fit infrastructure. State-of-the-art colocation facilities support a hybrid approach, allowing you to combine colocation and cloud in the manner that best meets the requirements of distributed apps at scale.

Get the white paper: Next-Generation Colocation Drives Operational Efficiencies


Amazon Zocalo – Now Generally Available

August 27, 2014 8:37 am


Amazon Zocalo has been available in a Limited Preview since early July
(see my blog post, Amazon Zocalo – Document Storage and Sharing for the
Enterprise, to learn more). During the Limited Preview, many AWS users
expressed interest in evaluating Zocalo and were admitted into the
Preview on a space-available basis.

Today we are making Amazon Zocalo generally available to all
AWS customers. You can sign up today and start using
Zocalo now. There’s a 30-day free trial (200 GB of storage per user for up to 50 users); after
that you pay $5 per user per month
(see the Zocalo Pricing page for more information).

As part of this move to general availability, we are also announcing
that AWS CloudTrail now records calls made to the Zocalo API. This
API is currently internal, but we plan to expose it in the future. If
you are interested in building applications that work with the Zocalo
API, please express your interest by emailing us. We are very
interested in learning more about the kinds of applications that you
are thinking about building.

I have become a regular user of Zocalo, and also a big fan! I generally have between
5 and 10 blog post drafts under way at any given time. I write the first draft, upload
it to Zocalo, and share it with the Product Manager for initial review. We iterate on the
early drafts to smooth out any kinks, and then share it with a wider audience for
final review. When multiple reviewers provide feedback on the same document, Zocalo’s
Feedback tab lets me scan, summarize, and respond to the feedback quickly and efficiently.


Video.js v4.7.0 – Built mostly by NEW contributors! Also Google chooses Video.js

August 6, 2014 11:42 am


We’re continuing to work hard on improving the contributor experience around the Video.js project, and it’s paying off. Over half of the changelog is thanks to brand new contributors! Issues and pull requests are getting addressed faster than ever, and I was even allowed to give a talk at OSCON on some of the strategies we’re using. If you’re interested in getting involved, join the #videojs IRC room or post an issue to let us know.

Google Chooses Video.js for Google Media Framework

Google recently announced a new framework for building video experiences and monetization. There are versions of the framework for native iOS and Android apps, and for the browser they chose to use Video.js. Check out their video.js plugin, and as it says in their announcement, “Stay tuned as well for a deeper dive into Video.js with IMA soon!”


Language Support

In this release we’ve built the infrastructure for displaying text in other languages. Examples of text include error messages and text used for accessibility. This feature can extend to plugins as well.

Today you can add other languages by registering the JSON translations object for the language you want with the player, as in this example for Spanish (es):

videojs.options.languages['es'] = { [translations object] }

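For instance, a minimal translations object might look like the following sketch. The keys are the player’s English strings; the Spanish values here are illustrative, not official translations.

videojs.options.languages['es'] = {
  "Play": "Reproducir",
  "Pause": "Pausa",
  "Mute": "Silenciar"
};
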
You can find translation files in the lang folder of the project. We don’t have many translations yet, but we’re looking for translators if you’d like to help!

Multiple buffered regions

With HTML5 video you can skip ahead in the video and the browser will start downloading the part of the file needed for the new position, which is different from how Flash video works by default. Flash downloads from the start to the end of the file, so you can only skip ahead once it has downloaded that part of the video.

In the HTML5 video API we’re given the buffered property which returns a list of time ranges that the browser has downloaded data for. Early on in HTML5 video, browsers only ever reported one time range, but now we have a direct view of what’s been downloaded.

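For example, you can read those ranges directly off a video element through the standard TimeRanges interface; a quick sketch, assuming a video element is already on the page:

var video = document.querySelector('video');
for (var i = 0; i < video.buffered.length; i++) {
  // Each buffered region is a start/end pair, in seconds
  console.log(video.buffered.start(i) + ' - ' + video.buffered.end(i));
}
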
In the newest version of the video.js skin, the load progress bar shows these specific buffered regions.

We’ve kept it subtle so it’s not too big of a change. We’d love to hear your thoughts on it.

DASH Everywhere-ish

If you haven’t seen it yet, check out the post on Tom Johnson’s work getting DASH supported in Video.js, using either Flash or the new Media Source Extensions. MPEG-DASH is an adaptive streaming format that Netflix and YouTube are using to stream video to cutting-edge browsers. It has the potential to replace Apple’s HTTP Live Streaming format as the main format used for adaptive streaming.

Video.js on Conan!

Conan O’Brien’s TeamCoco site is using Video.js with a nicely customized skin and ads integration. Check it out!

New Skin by Cabin

The team at Cabin put together a simple and clean new skin for video.js.

New Plugins

A lot of great new plugins have been released!

  • videojs-ima: Easily integrate the Google IMA SDK into Video.js to enable advertising on your video content.
  • videojs-brightcoveAnyaltics: Allows tracking of views/impressions and engagement data in Video.js for Brightcove videos.
  • videojs-logobrand: Adds a logo/brand image to the player that appears/disappears with the controls. (Also useful as a basic plugin template for learning how Video.js plugins work.)
  • videojs-seek: Seeks to a specific time point specified by a query string parameter.
  • videojs-preroll: Simple preroll plugin that displays an advertisement before the main video.
  • videojs-framebyframe: Adds buttons for stepping through a video frame by frame.
  • videojs-loopbutton: Adds a loop button to the player.
  • videojs-ABdm: Uses CommentCoreLibrary to show scrolling comments (known as danmu) during playback.
  • videojs-hotkeys: A plugin for Video.js that enables keyboard hotkeys when the player has focus.

New Release Schedule

As part of improving the contributor experience, we’re moving to scheduled releases. We’ll now put out a release every other Tuesday, as long as there are new changes to release. This will help give everyone a better idea of when specific features and fixes will become available.

Full list from the change log

  • Added cross-browser isArray for cross-frame support. fixes #1195 (view)
  • Fixed support for webvtt chapters. Fixes #676. (view)
  • Fixed issues around webvtt cue time parsing. Fixed #877, fixed #183. (view)
  • Fixed an IE11 issue where clicking on the video wouldn’t show the controls (view)
  • Added a composer.json for PHP packages (view)
  • Exposed the vertical option for slider controls (view)
  • Fixed an error when disposing a tech using manual timeupdates (view)
  • Exported missing Player API methods (remainingTime, supportsFullScreen, enterFullWindow, exitFullWindow, preload) (view)
  • Added a base for running saucelabs tests from grunt (view)
  • Added additional browsers for saucelabs testing (view)
  • Added support for listening to multiple events through a types array (view)
  • Exported the vertical option for the volume slider (view)
  • Fixed Component trigger function arguments and docs (view)
  • Now copying all attributes from the original video tag to the generated video element (view)
  • Added files to be ignored in the bower.json (view)
  • Fixed an error that could happen if Flash was disposed before the ready callback was fired (view)
  • The up and down arrows can now be used to control sliders in addition to left and right (view)
  • Added a player.currentType() function to get the MIME type of the current source (view)
  • Fixed a potential conflict with other event listener shims (view)
  • Added support for multiple time ranges in the load progress bar (view)
  • Added vjs-waiting and vjs-seeking css classnames and updated the spinner to use them (view)
  • Now restoring the original video tag attributes on a tech change to support webkit-playsinline (view)
  • Fixed an issue where the user was unable to scroll/zoom page if touching the video (view)
  • Added “sliding” class for when slider is sliding to help with handle styling (view)


Discuss on Twitter | Discuss on Hacker News

Now Shipping: CDN Raw Logs to S3 export

August 5, 2014 12:02 am


Although our real-time analytics API makes it extremely easy to process and get the data you need, sometimes you need to parse the data yourself.

Meet the newest feature of MaxCDN Insights, which allows you to save all access logs, in the format you want, to your own S3 bucket.

Every single response from our servers for your pull zone is now accessible to you to do with as you please. You can parse the raw data and build your own statistics and analytics dashboards, or parse it every day and extract very specific information for your own custom needs.

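As a rough sketch of that kind of custom processing, the snippet below pulls one exported log file from S3 and tallies responses by status code. The bucket name, object key, and field position are hypothetical, and it assumes a space-delimited format string with the status code as the third field.

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Download one exported log file and count responses by HTTP status code
s3.getObject({ Bucket: 'my-cdn-logs', Key: 'logs/2014-08-05.log' }, function (err, data) {
  if (err) throw err;
  var counts = {};
  data.Body.toString().split('\n').forEach(function (line) {
    if (!line) return;
    var status = line.split(' ')[2]; // hypothetical field position
    counts[status] = (counts[status] || 0) + 1;
  });
  console.log(counts);
});
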
To use it you will need an existing AWS profile and an S3 bucket.

Once you have that, you can enable the raw logs to S3 export in your settings in our Control Panel.


You also have the option to select the frequency of log exports into your bucket: intervals of 1 hour, 12 hours, 1 day, and 3 days are available.

Before enabling it, you also need to set a format string. It’s a very powerful feature that can help you a lot by saving space on your AWS account and simplifying the parsing of your logs.

We recommend storing only the information you need rather than using all available variables.

