Posts tagged ‘amazon’

Amazon’s shipping new Fire TV Stick, but new orders won’t arrive until 2015

November 25, 2014 12:28 pm


Just starting your holiday shopping? If so, you might want to check out Amazon’s Fire TV Stick, which the e-tailer is calling its fastest-selling Amazon device ever. You...


New APN Competencies – Storage and Life Sciences

October 6, 2014 2:37 pm


When I talk to enterprises and mid-sized companies about their plans to gain
agility and cost-effectiveness by moving their workloads and applications to the Cloud,
they are eager to move forward and often want to seek the assistance of
solution providers with AWS expertise. We created the AWS Partner Network (APN) a couple of
years ago to provide business, technical, marketing, and GTM (go-to-market)
support for companies that find themselves in this situation.

With the continued growth and speciation of AWS, along with the increasing
diversity of AWS use cases,
we have begun to recognize partners that have demonstrated their competence
in specialized solution areas such as
Big Data,
Managed Services, and
SAP.

New Storage and Life Sciences Competencies
We recently launched new APN Competencies for Storage and Life Sciences.

Storage Competency Partners
can help you to evaluate and use techniques and technologies that
allow you to effectively store data in the AWS Cloud. You can
search for APN Storage Partners based on primary use cases such as Backup,
Archive, Disaster Recovery, and Primary File Storage / Cloud NAS. Our
initial Storage Competency Partners are
Avere Systems,
CommVault Systems,
CTERA Networks, and
Zadara Storage.

Life Sciences Competency Partners
can help you conduct drug
discovery, manage clinical trials, initiate manufacturing and distribution
activities, and do R&D of genetic-based treatments and
companion diagnostics. Our initial Life Sciences Competency Partners include
both Independent Software Vendors (ISVs) and System Integrators (SIs).
The initial ISV partners are
Seven Bridges Genomics,
Cycle Computing,
Medidata Solutions Worldwide, and
Core Informatics.

The initial SI partners are
Booz Allen Hamilton,
G2 Technology Group,
HCL Technologies, and
Infosys.

Applying for Additional Competencies
If your organization is already an ISV or SI member of the AWS Partner Network and is interested in
gaining these new competencies, log in to the APN Portal, examine your
scorecard, and click on Apply for APN Competencies to get started. You will, of course,
need to share some of your customer successes and demonstrate your technical readiness!


Amazon Elastic Transcoder Now Supports Smooth Streaming

October 1, 2014 1:50 pm


Amazon Elastic Transcoder
is a scalable, fully managed media (audio and video)
transcoding service that works on a cost-effective, pay-per-use
model. You don’t have to license or install any software, and you
can take advantage of transcoding presets for a variety of popular
output devices and formats, including H.264 and VP8 video and AAC,
MP3, and Vorbis audio formatted into MP4, WebM, MPEG2-TS, MP3, and
OGG packages. You can also generate segmented video files (and the
accompanying manifests) for HLS video streaming.

Earlier this year we improved Elastic Transcoder with an increase in the
level of parallelism and
support for captions.

Today we are adding support for
Smooth Streaming (one of
several types of
adaptive bitrate streaming)
over HTTP to platforms such as Xbox, Windows Phone, and clients that make use of
Microsoft Silverlight players.
This technology improves the viewer experience by automatically switching between data streams of
higher and lower quality, based on local network conditions and CPU utilization on the playback
device. In conjunction with Amazon CloudFront, you can now distribute high-quality audio and video
content to even more types of devices.

New Smooth Streaming Support
Adaptive bitrate streaming uses the information stored in
a manifest file to choose between alternate
renditions (at different bitrates) of the same content. Although
the specifics will vary, you can think of this as low, medium, and high
quality versions of the same source material. The content
is further segmented into blocks, each containing several seconds
(typically two to ten) of encoded content. For more information about
the adaptation process, you can read my recent blog post,
Amazon CloudFront Now Supports Microsoft Smooth Streaming.

Each transcoding job that specifies Smooth Streaming as an output format generates
three or more files:

  • ISM — A manifest file that contains links to each rendition along
    with additional metadata.
  • ISMC — A client file that contains information about each rendition and
    each segment within each rendition.
  • ISMV — One or more movie
    files (sometimes known as Fragmented MP4).

The following diagram shows the relationship between the files:

Getting Started
If you are familiar with Elastic Transcoder and already have your
pipelines set up, you can choose the Smooth playlist format during the job
creation process. For more information, see
Creating a Job in Elastic Transcoder.
If you are new to Elastic Transcoder, see
Getting Started with Elastic Transcoder.
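
If you use the AWS SDK for Java, you can also request a Smooth playlist when creating the job programmatically. Here is a minimal sketch; the pipeline id, preset id, and object keys are placeholders that you would replace with your own values:

import com.amazonaws.services.elastictranscoder.AmazonElasticTranscoder;
import com.amazonaws.services.elastictranscoder.AmazonElasticTranscoderClient;
import com.amazonaws.services.elastictranscoder.model.CreateJobOutput;
import com.amazonaws.services.elastictranscoder.model.CreateJobPlaylist;
import com.amazonaws.services.elastictranscoder.model.CreateJobRequest;
import com.amazonaws.services.elastictranscoder.model.JobInput;

public class SmoothStreamingJob {
    public static void main(String[] args) {
        AmazonElasticTranscoder transcoder = new AmazonElasticTranscoderClient();

        // One output per rendition; add more outputs (with different presets)
        // to give the player low / medium / high quality alternatives.
        CreateJobOutput rendition = new CreateJobOutput()
            .withKey("smooth/video-1200k")           // placeholder output key
            .withPresetId("1351620000001-400030");   // placeholder preset id

        // The Smooth playlist ties the renditions together and produces the
        // ISM / ISMC manifest files described above.
        CreateJobPlaylist playlist = new CreateJobPlaylist()
            .withName("smooth/manifest")
            .withFormat("Smooth")
            .withOutputKeys(rendition.getKey());

        transcoder.createJob(new CreateJobRequest()
            .withPipelineId("1111111111111-abcde1")  // placeholder pipeline id
            .withInput(new JobInput().withKey("input/source.mp4"))
            .withOutputs(rendition)
            .withPlaylists(playlist));
    }
}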

After you create an Elastic Transcoder job that produces the files that are needed for
Smooth Streaming, Elastic Transcoder will place the files in the designated Amazon Simple Storage Service (S3) bucket. You can use the
Smooth Streaming support built in to Amazon CloudFront (this is the simplest and best
option) or you can set up and run your own streaming server.

If you embed your video player in a web site that is hosted on a different domain from
the one that you use to host your files, you will need to create a
clientaccesspolicy.xml or
crossdomain.xml file, set it up to allow the appropriate level
of cross-domain access, and make it available at the root of your CloudFront distribution.
For more information about this process, see
On-Demand Smooth Streaming. For more information about configuring Microsoft
Silverlight for cross-domain access, see
Making a Service Available Across Domain Boundaries.
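
For reference, a maximally permissive clientaccesspolicy.xml looks something like the sketch below (this is the standard Silverlight policy format; in production you would typically restrict the domain list rather than allowing *):

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>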

Get a Smooth Start with Smooth Streaming
This powerful new Elastic Transcoder feature is available now and you can start using it today!


AT&T targets cord nevers with Amazon Prime, HBO, basic TV & broadband for $40

September 22, 2014 2:15 pm


AT&T is making a play for cord nevers – those Millennials who are used to getting their video entertainment online sans a cable subscription – with a package of basic U-verse TV, broadband, HBO and Amazon Prime, all for $40.

The package includes local TV stations only, a non-DVR receiver, up to 45 Mbps Internet service, HBO on Demand, HBO Go and Amazon Prime.

The limited offer isn’t available everywhere and the price is good for only one year, after which it jumps to more than $70, not including the $99 annual tab for Amazon Prime. It also carries an early-termination fee of up to $180 and a $99 installation charge.

DSL Reports points out that the latest deal is an iteration of other low-cost packages aimed at attracting non-subscribers, but notes that it’s the first time U-verse has included the Amazon Prime Instant Video piece.

AT&T isn’t alone in offering an attractively priced entry-level video package; Comcast, Time Warner Cable, Verizon and other operators are struggling to find the right formula to attract Millennials and to keep their services, especially broadband, in homes they already have as customers.

But, almost all of those packages have an expiration date, a point at which the deals become far less alluring.

Does Dish have a better idea? Maybe

Dish Network is taking a different tack.

An executive I spoke with at IBC earlier this month said that the satellite operator’s rumored $30 basic service is getting ready to launch and asserted that the deal isn’t being positioned as an introduction to a more expensive package; the $30 price for the package will stay at $30.

That doesn’t mean that Dish won’t look to increase revenues by up-selling to more expensive deals, but that isn’t the focus, he said.

Instead of introductory pricing that evaporates – and creates churn – Dish will look to sell additional packages of content in small bites that generate additional revenue.

Dish, in offering a low-priced option that it intends to “keep cheap,” is taking a stab at breaking the relentless cycle of subscriber defections to the “next best deal” available from the competition.

Subscriber acquisition costs (SAC) – the upfront costs of getting new subscribers – have been a huge thorn in the sides of operators, who constantly fight to reduce churn.

Dish may have found a way to reduce that churn. The question, of course, is whether it can sell a low-cost package that makes sense financially.

Follow me on Twitter @JimONeillMedia

Search and Interact With Your Streaming Data Using the Kinesis Connector to Elasticsearch

September 11, 2014 11:08 am


My colleague
Rahul Patil
wrote a guest post to show you how to build an application that loads streaming data
from Kinesis into an Elasticsearch cluster in real-time.


The Amazon Kinesis team is excited to release the Kinesis connector to Elasticsearch!
Using the connector, developers can easily write an application that loads streaming data from Kinesis into an
Elasticsearch cluster in real time, reliably and at scale.

Elasticsearch is an open-source search and analytics engine. It
indexes structured and unstructured data in real-time.
Kibana is
Elasticsearch’s data visualization engine; it is used by dev-ops and
business analysts to set up interactive dashboards. Data in an
Elasticsearch cluster can also be accessed programmatically using a
RESTful API or application SDKs. You can use the CloudFormation
template in our
sample to quickly create an
Elasticsearch cluster on Amazon Elastic Compute Cloud (EC2), fully managed by Auto Scaling.

Wiring Kinesis, Elasticsearch, and Kibana
Here’s a block diagram to help you see how the pieces fit together:

Using the new Kinesis Connector to Elasticsearch, you author an
application to consume data from a Kinesis stream and index the data
into an Elasticsearch cluster. You can transform, filter, and buffer
records before emitting them to Elasticsearch. You can also finely
tune Elasticsearch-specific indexing operations to add fields like
time to live, version number,
type, and id on a per-record
basis. The flow of records is illustrated in the diagram below.

Note that you can also run the entire connector pipeline from within your Elasticsearch
cluster using River.

Getting Started
Your code has the following duties:

  1. Set application specific configurations.
  2. Create and configure a KinesisConnectorPipeline with a Transformer, a Filter, a Buffer, and an Emitter.
  3. Create a KinesisConnectorExecutor that runs the pipeline continuously.

All the above components come with a default implementation, which can easily be
replaced with your custom logic.

Configure the Connector Properties
The sample comes with a .properties file and a configurator. There are many settings and you can leave most
of them set to their default values. For example, the following settings will:

  1. Configure the connector to bulk load data into Elasticsearch only after it has
    collected at least 1000 records.
  2. Use the local Elasticsearch cluster endpoint for testing.
bufferRecordCountLimit = 1000
elasticSearchEndpoint = localhost

Implementing Pipeline Components
In order to wire the Transformer, Filter, Buffer, and Emitter, your
code must implement the IKinesisConnectorPipeline interface.

public class ElasticSearchPipeline implements
    IKinesisConnectorPipeline<String, ElasticSearchObject> {

    public IEmitter<ElasticSearchObject> getEmitter(
        KinesisConnectorConfiguration configuration) {
        return new ElasticSearchEmitter(configuration);
    }

    public IBuffer<String> getBuffer(
        KinesisConnectorConfiguration configuration) {
        return new BasicMemoryBuffer<String>(configuration);
    }

    public ITransformerBase<String, ElasticSearchObject> getTransformer(
        KinesisConnectorConfiguration configuration) {
        return new StringToElasticSearchTransformer();
    }

    public IFilter<String> getFilter(
        KinesisConnectorConfiguration configuration) {
        return new AllPassFilter<String>();
    }
}

The following snippet implements the abstract factory method, indicating the pipeline you wish to use:

public KinesisConnectorRecordProcessorFactory<String, ElasticSearchObject>
    getKinesisConnectorRecordProcessorFactory() {
    return new KinesisConnectorRecordProcessorFactory<String,
        ElasticSearchObject>(new ElasticSearchPipeline(), config);
}

Defining an Executor
The following snippet defines a pipeline where the incoming Kinesis records are strings and the outgoing records are ElasticSearchObjects:

public class ElasticSearchExecutor extends
    KinesisConnectorExecutor<String, ElasticSearchObject>

The following snippet implements the main method, which creates the Executor and starts it running:

public static void main(String[] args) {
    KinesisConnectorExecutor<String, ElasticSearchObject> executor
        = new ElasticSearchExecutor(configFile);
    executor.run();  // run the pipeline continuously
}

From here, make sure your
AWS credentials are provided correctly. Set up the project dependencies using
ant setup. To run the app, use ant run and watch it go!
All of the code is on GitHub, so you can get
started immediately. Please post your questions and suggestions on the
Kinesis Forum.

Kinesis Client Library and Kinesis Connector Library

When we
launched Kinesis
in November of 2013, we also introduced the
Kinesis Client Library.

You can use the client library to build applications that
process streaming data. It handles complex issues such as
load balancing of streaming data and coordination of distributed
services, while adapting to changes in stream volume, all in a
fault-tolerant manner.

We know that many developers want to consume and process incoming
streams using a variety of other AWS and non-AWS services. In order
to meet this need, we released the
Kinesis Connector Library late
last year with support for Amazon DynamoDB, Amazon Redshift, and
Amazon Simple Storage Service (S3). We then followed that up with a
Kinesis Storm Spout and an EMR connector
earlier this year. Today we are expanding the
Kinesis Connector Library with support for Elasticsearch.

— Rahul

Influxis vs Amazon Web Services (AWS)

September 10, 2014 10:44 am


Have you ever wondered how Influxis services compare to Amazon Web Services (AWS)? So did we – and we took the challenge. We compare services, hardware, speeds, pricing, and more. Results? Check out the comparison below and see the difference … Continued

Kick-Start Your Cloud Storage Project With the Riverbed SteelStore Gateway

September 9, 2014 8:39 am


Many AWS customers begin their journey to the cloud by implementing a
backup and recovery discipline.
Because the cloud can provide any desired amount of durable storage that is
both secure and cost-effective, organizations of all shapes and sizes
are using it to support robust backup and recovery models that eliminate the need
for on-premises infrastructure.

Our friends at Riverbed have launched an
exclusive promotion for AWS customers. This promotion is designed to help
qualified enterprise, mid-market, and SMB customers in North America
to kick-start their cloud-storage projects by applying for up to 8
TB of free Amazon Simple Storage Service (S3) usage for six months.

If you qualify for the promotion, you will be invited to download the Riverbed
software appliance (you will also receive enough AWS credits to
allow you to store 8 TB of data per month for six months). With advanced
compression, deduplication, network acceleration and encryption
features, SteelStore will provide you with enterprise-class levels
of performance, availability, data security, and data
durability. All data is encrypted using AES-256 before leaving your
premises; this gives you protection in transit and at
rest. SteelStore intelligently caches up to 2 TB of recent backups
locally for rapid restoration.

The SteelStore appliance is easy to implement! You can be up and running
in a matter of minutes with the implementation guide, getting started guide, and user guide
that you will receive as part of your download. The appliance is compatible with
over 85% of the backup products on the market, including solutions from
CA, CommVault, Dell, EMC, HP, IBM, Symantec, and Veeam.

To learn more or to apply for this exclusive promotion,
click here!


Use AWS OpsWorks & Ruby to Build and Scale Simple Workflow Applications

September 8, 2014 1:28 pm


From time to time, one of my blog posts will describe a way to make use of two AWS products or services
together. Today I am going to go one better and show you how to bring the following trio of items into play:
AWS OpsWorks, Amazon Simple Workflow Service (SWF), and the AWS Flow Framework for Ruby.

All Together Now
With today’s launch, it is now even easier for you to build, host, and scale SWF applications in
Ruby. A new, dedicated layer in OpsWorks simplifies the deployment of workflows and activities written
in the AWS Flow Framework for Ruby. By combining AWS OpsWorks and SWF, you can easily set up a
worker fleet that runs in the cloud, scales automatically, and makes use of advanced Amazon Elastic Compute Cloud (EC2) features.

This new layer is accessible from the AWS Management Console. As part of this launch, we are also releasing
a new command-line utility called the runner. You can use this utility to test
your workflow locally before pushing it to the cloud. The runner uses information provided in a
new, JSON-based configuration file to register workflow and activity types, and
start the workers.

Console Support
A Ruby Flow layer can be added to any OpsWorks stack that is running version 11.10 (or newer)
of Chef. Simply add a new layer by choosing AWS Flow (Ruby) from the menu:

You can customize the layer if necessary (the defaults will work fine for most applications):

The layer will be created immediately and will include four Chef recipes that are specific
to Ruby Flow (the recipes are available on GitHub).
The Runner
As part of today’s release we are including a new command-line utility,
aws-flow-ruby, also known as the runner. This utility
is used by AWS OpsWorks to run your workflow code. You can also use it to test your
SWF applications locally before you push them to the cloud.

The runner is configured using a JSON file that looks like this:

  "domains": [{
      "name": "BookingSample",
      "retention_in_days": 10

  "workflow_workers": [{
     "number_of_workers": 1,
     "domain": "BookingSample",
     "task_list": "workflow_tasklist"
  "activity_workers": [{
    "number_of_workers": 1,
    "domain": "BookingSample",
    "number_of_forks_per_worker": 5,
    "task_list": "activity_tasklist"

Go With the Flow
The new Ruby Flow layer type is available now and you can start using it today. To learn more
about it, take a look at the new OpsWorks section of the
AWS Flow Framework for Ruby User Guide.


UK, Germany get Amazon Fire TV a bit earlier than expected

September 4, 2014 6:05 am

Amazon Fire TV is launching in the UK and Germany

Amazon’s Fire TV box is now available for pre-order in the United Kingdom and Germany with delivery scheduled to begin next month.

Consumers who order today in Germany will be able to get their hands on the digital streaming device starting Sept. 25; for those in the U.K., delivery won’t start until Oct. 23.

The box, about the size of a deck of cards, allows users to access Amazon Prime Instant Video as well as a range of other Internet video services. It will cost 99 euros in Germany and £79 in the U.K., where it also will be available at retailers Argos, Dixons, Sainsbury’s and Tesco. Existing Amazon Prime members in both countries can get the box for 49 euros or £49, respectively, for the next five days.

The early release date in Germany likely is a response to Netflix’s planned rollout there later this month.

Amazon already is in Germany with Amazon Instant Video and Prime Instant Video and has announced several new content deals there in an effort to blunt Netflix’s roll out.

Amazon Fire TV in Germany also includes access to catch-up and on-demand services from ZDF, ARD, Sport 1, Bild, Spiegel TV, Zattoo, Arte, Servus TV, and more. International content partners include Dailymotion, Vevo, Bloomberg, MUBI, Red Bull and others.

Unlike the U.K., German Fire TV users won’t – at the moment – have access through the device to Netflix, but that’s likely to change once Netflix is deployed.

In the U.K., Fire TV will support Amazon Instant Video, Prime Instant Video, and a range of other services including Netflix, YouTube, Demand 5, Sky News, Twitch, Spotify, Vevo and several others. It doesn’t currently list the BBC’s iPlayer as one of the services available; Amazon says more content services will be coming soon, including Demand 5, Curzon Home Cinema, STV Player, and more.

The Fire TV box launched in the U.S. market earlier this year into a crowded field that included devices from Apple, Google and Roku, among others.

Like Apple and Google, Amazon is offering an inexpensive device that gives consumers access to a near-endless array of content; a new take on the classic razor-and-blades business model.

But this model also includes a wild card: Netflix.

Follow me on Twitter @JimONeillMedia

MySQL Cache Warming for Amazon RDS

September 3, 2014 3:19 pm


Among many other responsibilities, a relational database system must make efficient
use of main memory (RAM) for buffering and caching purposes. RAM is far faster and
easier to access than SSD or magnetic storage; a properly sized and tuned cache
or buffer pool can do wonders for database performance.

Today we are improving
Amazon RDS for MySQL with support
for InnoDB cache warming. When an Amazon RDS DB instance that is running MySQL is
shut down, it can now be configured to save the state of its buffer pool, for
later reloading when the instance starts up. The instance will be ready to
handle common queries in an efficient fashion, without the need for any
warm-up period.

This feature is supported for RDS DB instances that are running version
5.6 (or later) of MySQL. To enable it, simply set the
innodb_buffer_pool_dump_at_shutdown and
innodb_buffer_pool_load_at_startup parameters to 1 in the
parameter group for your DB instance.

Users of MySQL version 5.6.19 and later can manage the buffer pool using the
mysql.rds_innodb_buffer_pool_dump_now,
mysql.rds_innodb_buffer_pool_load_now, and
mysql.rds_innodb_buffer_pool_load_abort stored procedures.

Once enabled, the buffer pool will be saved as part of a normal
(orderly) shutdown of the DB instance. It will not be saved if the
instance does not shut down normally, such as during a failover. In
this case, MySQL will load whatever buffer pool is available when
the instance is restarted. This is harmless, but possibly less
efficient. Applications can call the
mysql.rds_innodb_buffer_pool_dump_now stored
procedure on a periodic basis if this potential inefficiency is a
cause for concern.
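
If you would rather not connect with the mysql client just to do this, here is a minimal JDBC sketch; the endpoint, database name, and credentials are placeholders, and the MySQL Connector/J driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BufferPoolDump {
    public static void main(String[] args) throws Exception {
        // Placeholder RDS endpoint and credentials; substitute your own.
        String url = "jdbc:mysql://mydb.example.us-east-1.rds.amazonaws.com:3306/mydb";
        try (Connection conn = DriverManager.getConnection(url, "admin", "secret");
             Statement stmt = conn.createStatement()) {
            // Save the current state of the InnoDB buffer pool so that it can
            // be reloaded even if the instance later shuts down abnormally.
            stmt.execute("CALL mysql.rds_innodb_buffer_pool_dump_now()");
        }
    }
}

You could invoke something like this from a scheduled task (cron, for example) to refresh the saved buffer pool state periodically.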

DB Instances launched or last rebooted before August 14, 2014, will
need to be rebooted to gain access to this new feature. However, no
action is required for DB Instances launched or rebooted on or after
August 14, 2014. To learn more, take a look at
InnoDB Cache Warming
in the Amazon RDS User Guide.


Query Your EC2 Instances Using Tag and Attribute Filtering

September 2, 2014 9:05 am


As an Amazon Elastic Compute Cloud (EC2) user, you probably know just how simple and easy it is to launch EC2 instances
on an as-needed basis. Perhaps you got your start by manually launching an instance or two,
and later moved to a model where you launch instances through an AWS CloudFormation template,
Auto Scaling, or in Spot form.

Today we are launching an important new feature for the AWS Management Console. You can now find the instance or instances
that you are looking for by filtering on tags and attributes, with some advanced options including
inverse search, partial search, and regular expressions.

Instance Tags

Regardless of the manner in which you launch them, you probably want to track the role (development,
test, production, and so forth), internal owner, and other attributes of each instance. This
becomes especially important as your fleet grows to hundreds or thousands of instances. We have
supported tagging of EC2 instances (and other resources) for many years. As you
probably know already, you can add up to ten tags (name/value pairs) to many types of AWS resources.
While I can sort by the tags to group like-tagged instances together, there’s clearly room to do
even better! With today’s launch, you can use the tags that you assign, along with the
instance attributes, to locate the instance or instances that you are looking for.

Query With Tags & Attributes
As I was writing this post, I launched ten EC2 instances,
added Mode and Owner tags to each
one (supplementing the default Name tag), and then
configured the console to show the tags and their values:

The new filter box offers many options. I’ll do my best to show them all to you!

In the examples that follow, I will filter my instances using the tags that
I assigned to the instances. I’ll start with simple examples and work up to some more complex ones.
I can filter by keyword. Let’s say that I am looking for an instance and can only recall part of the
instance id (this turns out to be a very popular way to search). I enter the partial id (“2a27”) into the filter box and press Enter to find it:

Let’s say that I want to find all of the instances where I am listed as Owner. I click in the Filter box
for some guidance:

I select the Owner tag and select from among the values presented to me:

Here are the results:

I can add a second filter if I want to see only the instances where I am the owner and the Mode
is “Production”:

I can also filter by any of the attributes of the instance. For example, I can easily find all of the
instances that are in the Stopped state:

And I can, of course, combine this with a filter on a tag. I can find all of my stopped instances:

I can use an inverse search to find everyone else’s stopped instances (I simply prefix the value with an exclamation mark):

I can also use regular expressions to find instances owned by Kelly or Andy:

And I can do partial matches to compensate for inconsistent naming:

I can even filter by launch date to find instances that are newer or older than a particular
date:
Finally, the filter information is represented in the console URL so that you can bookmark your
filters or share them with your colleagues:
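
The console is the focus of today’s launch, but tag-based filtering is also available programmatically through the EC2 DescribeInstances API. Here is a sketch in Java; the tag values are illustrative, and note that the API filters support wildcards while the inverse and regular-expression searches shown above are console conveniences:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesResult;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

public class FindInstances {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client();

        // Find stopped instances that are tagged with Owner=jeff.
        DescribeInstancesRequest request = new DescribeInstancesRequest()
            .withFilters(new Filter("tag:Owner").withValues("jeff"),
                         new Filter("instance-state-name").withValues("stopped"));

        DescribeInstancesResult result = ec2.describeInstances(request);
        for (Reservation reservation : result.getReservations()) {
            for (Instance instance : reservation.getInstances()) {
                System.out.println(instance.getInstanceId());
            }
        }
    }
}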

Filter Now
This feature is available now and you can start using it today. It works for EC2 instances now; we
expect to make it available for other types of EC2 resources before too long.


AWS Week in Review – August 25, 2014

September 2, 2014 8:21 am


Let’s take a quick look at what happened in AWS-land last week:

Monday, August 25
Tuesday, August 26
Wednesday, August 27
Thursday, August 28
Friday, August 29

Stay tuned for next week! In the meantime,
follow me on Twitter or
subscribe to the RSS feed.


New Resources APIs for the AWS SDK for Java

August 28, 2014 1:20 pm


We are launching a preview of a new, resource-style API model for the AWS SDK for Java. I will summarize
the preview here, and refer you to the AWS Java Blog for full information!

The new resource-oriented APIs are designed to be easier to understand and simpler to use. They
obviate much of the request-response verbosity present in the existing model and present a
view of AWS that is decidedly object-oriented. Instead of exposing all of the methods of the
service as part of a single class, the resource-style API includes multiple classes, each
of which represents a particular type of resource for the service. Each class includes the
methods needed to interact with the resource and with related resources of other types.
Code written to the new API will generally be shorter, cleaner, and easier to comprehend.

Here is the old-school way to retrieve an AWS Identity and Access Management (IAM) group using the
GetGroup function:

AmazonIdentityManagement iam = new AmazonIdentityManagementClient();

GetGroupRequest getGroupRequest = new GetGroupRequest("NeedNewKeys");
GetGroupResult getGroupResult = iam.getGroup(getGroupRequest);

And here is the new way:

IdentityManagement iam = ServiceBuilder.forService(IdentityManagement.class)

Group needNewKeys = iam.getGroup("NeedNewKeys");

The difference between the old and the new APIs becomes even more pronounced when more
complex operations are used. Compare the old-school code for marking an outdated
access key (oldKey) for an IAM user as inactive:

UpdateAccessKeyRequest updateAccessKeyRequest = new UpdateAccessKeyRequest()
    .withUserName(userName)                    // userName is assumed to be defined elsewhere
    .withAccessKeyId(oldKey.getAccessKeyId())
    .withStatus("Inactive");
iam.updateAccessKey(updateAccessKeyRequest);

With the new, streamlined code, the intent is a lot more obvious. There’s a lot less in the way
of setup code and the method is invoked on the object of interest instead of on the service:


The new API is being launched in preview mode with support for
Amazon Elastic Compute Cloud (EC2), AWS Identity and Access Management (IAM), and Amazon Glacier. We plan to introduce resource APIs for
other services and other AWS SDKs in the future.


PS – To learn more about Resource APIs, read the
full post on the AWS Java Development Blog.

Amazon Zocalo – Now Generally Available

August 27, 2014 8:37 am


Amazon Zocalo has been available in a Limited Preview since early July
(see my blog post,
Amazon Zocalo –
Document Storage and Sharing for the Enterprise,
to learn more). During the
Limited Preview, many AWS users expressed interest in evaluating Zocalo and were
admitted into the Preview on a space-available basis.

Today we are making Amazon Zocalo generally available to all
AWS customers. You can sign up today and start using
Zocalo now. There’s a 30-day free trial (200 GB of storage per user for up to 50 users); after
that you pay $5 per user per month
(see the Zocalo Pricing page for more information).

As part of this move to general availability, we are also announcing
that AWS CloudTrail now records calls made to the Zocalo API. This
API is currently internal, but we plan to expose it in the
future. If you are interested in building applications that work
with the Zocalo API, please express your interest by emailing us. We
are very interested in learning more about the kinds of applications
that you are thinking about building.

I have become a regular user of Zocalo, and also a big fan! I generally have between
5 and 10 blog post drafts under way at any given time. I write the first draft, upload
it to Zocalo, and share it with the Product Manager for initial review. We iterate on the
early drafts to smooth out any kinks, and then share it with a wider audience for
final review. When multiple reviewers provide feedback on the same document, Zocalo’s
Feedback tab lets me scan, summarize, and respond to the feedback quickly and
efficiently.

No status quo in clouds

August 4, 2014 12:08 pm


This article was authored by Jouko Ahvenainen, and was originally posted on

Amazon (Amazon Web Services, AWS) is the leading enterprise cloud provider. Amazon surprised the market with a larger than expected loss. AWS has had a big impact on the result. What does it mean that the leading cloud company cannot run a profitable business? … [visit site to read more]

Flappy Bird is back, but only on Amazon’s Fire TV

August 1, 2014 7:33 pm


After several months flying under the radar, Flappy Bird creator Dong Nguyen is back with his latest game: Flappy Birds Family. The game is, however, only available to download from the Amazon Appstore, and it only works with Amazon’s Fire TV set-top box. In our tests, we couldn’t install the game on an Android phone. Otherwise, the new game is much like the old one, but includes local multiplayer and more obstacles.

In interviews and tweets, Nguyen often got cagey when people asked him about the millions of people worldwide suffering from Flappy Bird addiction. Perhaps with his latest title, he wanted to ensure that gamers only play in the safety of their homes. “Enjoy playing the game at home (not breaking your TV) with your…