Posts tagged ‘aws’

New APN Competencies – Storage and Life Sciences

October 6, 2014 2:37 pm


When I talk to enterprises and mid-sized companies about their plans to gain
agility and cost-effectiveness by moving their workloads and applications to the Cloud,
they are eager to move forward and often want to seek the assistance of
solution providers with AWS expertise. We created the AWS Partner Network (APN) a couple of
years ago to provide business, technical, marketing, and go-to-market (GTM)
support for companies that find themselves in this situation.

With the continued growth and speciation of AWS, along with the increasing
diversity of AWS use cases,
we have begun to recognize partners that have demonstrated their competence
in specialized solution areas such as
Big Data,
Managed Services,
Microsoft,
SAP, and
Oracle.

New Storage and Life Sciences Competencies
We recently launched new APN Competencies for Storage and Life Sciences.

The Storage Competency Partners
can help you to evaluate and use techniques and technologies that
allow you to effectively store data in the AWS Cloud. You can
search for APN Storage Partners based on primary use case such as Backup,
Archive, Disaster Recovery, and Primary File Storage / Cloud NAS. Our
initial Storage Competency Partners are
Avere Systems,
CommVault Systems,
CTERA Networks,
Druva,
Panzura,
Riverbed,
and
Zadara Storage.

The Life Sciences Competency Partners
can help you conduct drug
discovery, manage clinical trials, initiate manufacturing and distribution
activities, and perform R&D on genetically based treatments and
companion diagnostics. Our initial Life Sciences Competency Partners include
both Independent Software Vendors (ISVs) and System Integrators (SIs).
The initial ISV partners are
Seven Bridges Genomics,
DNAnexus,
Cycle Computing,
Medidata Solutions Worldwide,
Core Informatics, and
Syapse.

The initial SI partners are
BioTeam,
Booz Allen Hamilton,
Cognizant,
G2 Technology Group,
HCL Technologies,
Infosys, and
Wipro.

Applying for Additional Competencies
If your organization is already an ISV or SI member of the AWS Partner Network and is interested in
gaining these new competencies, log in to the APN Portal, examine your
scorecard, and click on Apply for APN Competencies to get started. You will, of course,
need to share some of your customer successes and demonstrate your technical readiness!

Jeff;

Amazon Elastic Transcoder Now Supports Smooth Streaming

October 1, 2014 1:50 pm



Amazon Elastic
Transcoder
is a scalable, fully managed media (audio and video)
transcoding service that works on a cost-effective, pay-per-use
model. You don’t have to license or install any software, and you
can take advantage of transcoding presets for a variety of popular
output devices and formats, including H.264 and VP8 video and AAC,
MP3, and Vorbis audio formatted into MP4, WebM, MPEG2-TS, MP3, and
OGG packages. You can also generate segmented video files (and the
accompanying manifests) for HLS video streaming.

Earlier this year we improved Elastic Transcoder with an increase in the
level of parallelism and
support for captions.

Today we are adding support for
Smooth Streaming (one of
several types of
adaptive bitrate streaming)
over HTTP to platforms such as Xbox, Windows Phone, and clients that make use of
Microsoft Silverlight players.
This technology improves the viewer experience by automatically switching between higher- and
lower-quality data streams based on local network conditions and CPU utilization on the playback
device. In conjunction with Amazon CloudFront, you can now distribute high-quality audio and video
content to even more types of devices.

New Smooth Streaming Support
Adaptive bitrate streaming uses the information stored in
a manifest file to choose between alternate
renditions (at different bitrates) of the same content. Although
the specifics will vary, you can think of this as low, medium, and high
quality versions of the same source material. The content
is further segmented into blocks, each containing several seconds
(typically two to ten) of encoded content. For more information about
the adaptation process, you can read my recent blog post,
Amazon CloudFront Now Supports Microsoft Smooth Streaming.

Each transcoding job that specifies Smooth Streaming as an output format generates
three or more files:

  • ISM — A manifest file that contains links to each rendition along
    with additional metadata.
  • ISMC — A client file that contains information about each rendition and
    each segment within each rendition.
  • ISMV — One or more movie
    (PIFF)
    files (sometimes known as Fragmented MP4).

The following diagram shows the relationship between the files:

Getting Started
If you are familiar with Elastic Transcoder and already have your
pipelines set up, you can choose the Smooth playlist format during the job
creation process. For more information, see
Creating a Job in Elastic Transcoder.
If you are new to Elastic Transcoder, see
Getting Started with Elastic Transcoder.

After you create an Elastic Transcoder job that produces the files that are needed for
Smooth Streaming, Elastic Transcoder will place the files in the designated Amazon Simple Storage Service (S3) bucket. You can use the
Smooth Streaming support built in to Amazon CloudFront (this is the simplest and best
option) or you can set up and run your own streaming server.
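
If you prefer to create Smooth Streaming jobs programmatically, here is a minimal sketch using the AWS SDK for Java; the pipeline ID, preset ID, and object keys are placeholders that you would replace with your own values:

import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.elastictranscoder.AmazonElasticTranscoderClient;
import com.amazonaws.services.elastictranscoder.model.CreateJobOutput;
import com.amazonaws.services.elastictranscoder.model.CreateJobPlaylist;
import com.amazonaws.services.elastictranscoder.model.CreateJobRequest;
import com.amazonaws.services.elastictranscoder.model.CreateJobResult;
import com.amazonaws.services.elastictranscoder.model.JobInput;

public class SmoothJobExample {
    public static void main(String[] args) {
        AmazonElasticTranscoderClient transcoder = new AmazonElasticTranscoderClient();
        transcoder.setRegion(Region.getRegion(Regions.US_WEST_2));

        // One output rendition; a real job would typically include several
        // outputs at different bitrates, all referenced by the playlist.
        CreateJobOutput output = new CreateJobOutput()
            .withKey("video-1200k")
            .withPresetId("YOUR-SMOOTH-PRESET-ID")   // placeholder preset ID
            .withSegmentDuration("5");                // seconds of content per fragment

        // The playlist ties the renditions together and produces the
        // ISM / ISMC manifest files described above.
        CreateJobPlaylist playlist = new CreateJobPlaylist()
            .withName("my-smooth-playlist")
            .withFormat("Smooth")
            .withOutputKeys("video-1200k");

        CreateJobRequest request = new CreateJobRequest()
            .withPipelineId("YOUR-PIPELINE-ID")       // placeholder pipeline ID
            .withInput(new JobInput().withKey("input/source.mp4"))
            .withOutputs(output)
            .withPlaylists(playlist);

        CreateJobResult result = transcoder.createJob(request);
        System.out.println("Created job " + result.getJob().getId());
    }
}

The resulting ISM, ISMC, and ISMV files will land in the pipeline's output bucket, ready to be served through CloudFront.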

If you embed your video player in a web site that is hosted on a different domain from
the one that you use to host your files, you will need to create a
clientaccesspolicy.xml or
crossdomainpolicy.xml file, set it up to allow the appropriate level
of cross-domain access, and make it available at the root of your CloudFront distribution.
For more information about this process, see
Configuring On-Demand Smooth Streaming. For more information about configuring Microsoft
Silverlight for cross-domain access, see Making a Service Available Across Domain Boundaries.

Get a Smooth Start with Smooth Streaming
This powerful new Elastic Transcoder feature is available now and you can start using it today!

Jeff;

The AWS Loft Will Return on October 1st

September 18, 2014 5:40 am


As I promised earlier this year, the AWS Pop-up Loft is reopening on Wednesday,
October 1st in San Francisco with a full calendar of events designed
to help developers, architects, and entrepreneurs learn about and
make use of AWS.

Come to the AWS Loft and meet 1:1 with an AWS technical expert, learn
about AWS in detailed product sessions, and gain hands-on experience
through our instructor-led Technical Bootcamps and our self-paced
hands-on labs. Take a look at the
Schedule of Events
to learn more about what we have planned.

Hours and Location
The AWS Loft will be open Monday through Friday, 10 AM to 6 PM, with special evening events
that will run until 8 PM. It is located at 925 Market Street in San Francisco.

Special Events
We are also setting up a series of events with
AWS-powered startups and partners from the San Francisco area. The
list is still being finalized but already includes cool companies like
Runscope (Automated Testing for APIs and Backend Services),
NPM (Node Package Manager),
Circle CI (Continuous Integration and Deployment),
Librato (Metrics, Monitoring, and Alerts),
CoTap (Secure Mobile Messaging for Businesses), and
Heroku (Cloud Application Platform).

A Little Help From Our Friends
AWS and Intel share a passion for innovation, along with a track record of helping startups
to be successful. Intel will demonstrate the latest technologies at the AWS Loft, including
products that support the Internet of Things and the newest Xeon processors. They will also host several
talks.

The folks at Chef are also joining forces with the AWS Loft and will be bringing their
DevOps expertise to the AWS Loft through hosted sessions and a training
curriculum. You’ll be able to learn about the Chef product — an automation platform for deploying and configuring
IT infrastructure and applications in the data center and in the Cloud.

Watch This!
In order to get a taste for the variety of activities and the level of excitement you’ll find at the AWS Loft, watch this
short video:

Come Say Hello
I will be visiting and speaking at the AWS Loft in late October and hope to see and talk to you
there!

Jeff;

Search and Interact With Your Streaming Data Using the Kinesis Connector to Elasticsearch

September 11, 2014 11:08 am


My colleague
Rahul Patil
wrote a guest post to show you how to build an application that loads streaming data
from Kinesis into an Elasticsearch cluster in real-time.

Jeff;


The Amazon Kinesis team is excited to release the Kinesis connector to Elasticsearch!
Using the connector, developers can easily write an application that loads streaming data from Kinesis into an
Elasticsearch cluster in real-time and reliably at scale.

Elasticsearch is an open-source search and analytics engine. It
indexes structured and unstructured data in real-time.
Kibana is
Elasticsearch's data visualization engine; it is used by DevOps engineers and
business analysts to set up interactive dashboards. Data in an
Elasticsearch cluster can also be accessed programmatically using
a RESTful API or application SDKs. You can use the CloudFormation
template in our
sample to quickly create an
Elasticsearch cluster on Amazon Elastic Compute Cloud (EC2), fully managed by Auto Scaling.
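
To illustrate the RESTful access path, here is a minimal sketch that runs a query over HTTP with plain Java; the endpoint (localhost:9200), index name (kinesis), and query string are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsQueryExample {
    public static void main(String[] args) throws Exception {
        // Simple URI search against a hypothetical "kinesis" index.
        URL url = new URL("http://localhost:9200/kinesis/_search?q=status:active");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // raw JSON response from Elasticsearch
            }
        }
    }
}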

Wiring Kinesis, Elasticsearch, and Kibana
Here’s a block diagram to help you see how the pieces fit together:

Using the new Kinesis Connector to Elasticsearch, you author an
application to consume data from a Kinesis stream and index the data
into an Elasticsearch cluster. You can transform, filter, and buffer
records before emitting them to Elasticsearch. You can also finely
tune Elasticsearch-specific indexing operations to add fields like
time to live, version number, type, and id on a per-record
basis. The flow of records is illustrated in the diagram below.

Note that you can also run the entire connector pipeline from within your Elasticsearch
cluster using River.

Getting Started
Your code has the following duties:

  1. Set application specific configurations.
  2. Create and configure a KinesisConnectorPipeline with a Transformer, a Filter, a Buffer, and an Emitter.
  3. Create a KinesisConnectorExecutor that runs the pipeline continuously.

All the above components come with a default implementation, which can easily be
replaced with your custom logic.

Configure the Connector Properties
The sample comes with a .properties file and a configurator. There are many settings and you can leave most
of them set to their default values. For example, the following settings will:

  1. Configure the connector to bulk load data into Elasticsearch only after it has
    collected at least 1,000 records.
  2. Use the local Elasticsearch cluster endpoint for testing.
bufferRecordCountLimit = 1000
elasticSearchEndpoint = localhost
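
For reference, here is a minimal sketch of how a .properties file like this can be turned into a connector configuration; it assumes the KinesisConnectorConfiguration constructor that takes a Properties object and a credentials provider, as used in the connector samples:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.connectors.KinesisConnectorConfiguration;

public class ConfigLoader {
    public static KinesisConnectorConfiguration load(String path) throws Exception {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(path)) {
            // Picks up bufferRecordCountLimit, elasticSearchEndpoint, and friends.
            props.load(in);
        }
        return new KinesisConnectorConfiguration(
            props, new DefaultAWSCredentialsProviderChain());
    }
}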

Implementing Pipeline Components
In order to wire the Transformer, Filter, Buffer, and Emitter, your
code must implement the IKinesisConnectorPipeline interface.

public class ElasticSearchPipeline implements
    IKinesisConnectorPipeline<String, ElasticSearchObject> {

    public IEmitter<ElasticSearchObject> getEmitter(
        KinesisConnectorConfiguration configuration) {
        return new ElasticSearchEmitter(configuration);
    }

    public IBuffer<String> getBuffer(
        KinesisConnectorConfiguration configuration) {
        return new BasicMemoryBuffer<String>(configuration);
    }

    public ITransformerBase<String, ElasticSearchObject> getTransformer(
        KinesisConnectorConfiguration configuration) {
        return new StringToElasticSearchTransformer();
    }

    public IFilter<String> getFilter(
        KinesisConnectorConfiguration configuration) {
        return new AllPassFilter<String>();
    }
}

The following snippet implements the abstract factory method, indicating the pipeline you wish to use:

public KinesisConnectorRecordProcessorFactory<String,ElasticSearchObject> 
    getKinesisConnectorRecordProcessorFactory() {
         return new KinesisConnectorRecordProcessorFactory<String, 
             ElasticSearchObject>(new ElasticSearchPipeline(), config);
    }

Defining an Executor
The following snippet defines a pipeline where the incoming Kinesis records are strings and outgoing records are an
ElasticSearchObject:

public class ElasticSearchExecutor extends 
    KinesisConnectorExecutor<String,ElasticSearchObject>

The following snippet implements the main method, creates the Executor and starts running it:

public static void main(String[] args) {
    KinesisConnectorExecutor<String, ElasticSearchObject> executor 
        = new ElasticSearchExecutor(configFile);
    executor.run();
}


From here, make sure that your
AWS credentials are provided correctly. Set up the project dependencies using
ant setup. To run the app, use ant run and watch it go!
All of the code is on GitHub, so you can get
started immediately. Please post your questions and suggestions on the
Kinesis Forum.

Kinesis Client Library and Kinesis Connector Library

When we
launched Kinesis
in November of 2013, we also introduced the
Kinesis Client Library.

You can use the client library to build applications that
process streaming data. It handles complex issues such as
load balancing of streaming data and coordination of distributed
services, while adapting to changes in stream volume, all in a
fault-tolerant manner.

We know that many developers want to consume and process incoming
streams using a variety of other AWS and non-AWS services. In order
to meet this need, we released the
Kinesis Connector Library late
last year with support for Amazon DynamoDB, Amazon Redshift, and
Amazon Simple Storage Service (S3). We then followed that up with a
Kinesis Storm Spout and an Amazon EMR connector earlier this year. Today we are expanding the
Kinesis Connector Library with support for Elasticsearch.

— Rahul

Influxis vs Amazon Web Services (AWS)

September 10, 2014 10:44 am


Have you ever wondered how Influxis services compare to Amazon Web Services (AWS)? So did we – and we took on the challenge. We compare services, hardware, speeds, pricing, and more. Results? Check out the comparison below and see the difference … Continued

Kick-Start Your Cloud Storage Project With the Riverbed SteelStore Gateway

September 9, 2014 8:39 am



Many AWS customers begin their journey to the cloud by implementing a
backup and recovery discipline.
Because the cloud can provide any desired amount of durable storage that is
both secured and cost-effective, organizations of all shapes and sizes
are using it to support robust backup and recovery models that eliminate the need
for on-premises infrastructure.

Our friends at Riverbed have launched an
exclusive
promotion
for AWS customers. This promotion is designed to help
qualified enterprise, mid-market, and SMB customers in North America
to kick-start their cloud-storage projects by applying for up to 8
TB of free Amazon Simple Storage Service (S3) usage for six months.

If you qualify for the promotion, you will be invited to download the Riverbed
SteelStore™
software appliance (you will also receive enough AWS credits to
allow you to store 8 TB of data per month for six months). With advanced
compression, deduplication, network acceleration and encryption
features, SteelStore will provide you with enterprise-class levels
of performance, availability, data security, and data
durability. All data is encrypted using AES-256 before leaving your
premises; this gives you protection in transit and at
rest. SteelStore intelligently caches up to 2 TB of recent backups
locally for rapid restoration.

The SteelStore appliance is easy to implement! You can be up and running
in a matter of minutes with the implementation guide, getting started guide, and user guide
that you will receive as part of your download. The appliance is compatible with
over 85% of the backup products on the market, including solutions from
CA, CommVault, Dell, EMC, HP, IBM, Symantec, and Veeam.

To learn more or to apply for this exclusive promotion,
click here!

Jeff;

Use AWS OpsWorks & Ruby to Build and Scale Simple Workflow Applications

September 8, 2014 1:28 pm


From time to time, one of my blog posts will describe a way to make use of two AWS products or services
together. Today I am going to go one better and show you how to bring a trio of items into play
simultaneously: AWS OpsWorks, Amazon Simple Workflow Service (SWF), and the AWS Flow Framework for Ruby.

All Together Now
With today’s launch, it is now even easier for you to build, host, and scale SWF applications in
Ruby. A new, dedicated layer in OpsWorks simplifies the deployment of workflows and activities written
in the AWS Flow Framework for Ruby. By combining AWS OpsWorks and SWF, you can easily set up a
worker fleet that runs in the cloud, scales automatically, and makes use of advanced Amazon Elastic Compute Cloud (EC2) features.

This new layer is accessible from the AWS Management Console. As part of this launch, we are also releasing
a new command-line utility called the runner. You can use this utility to test
your workflow locally before pushing it to the cloud. The runner uses information provided in a
new, JSON-based configuration file to register workflow and activity types, and
start the workers.

Console Support
A Ruby Flow layer can be added to any OpsWorks stack that is running version 11.10 (or newer)
of Chef. Simply add a new layer by choosing AWS Flow (Ruby) from the menu:

You can customize the layer if necessary (the defaults will work fine for most applications):

The layer will be created immediately and will include four Chef recipes that are specific
to Ruby Flow (the recipes are available on
GitHub):

The Runner
As part of today’s release we are including a new command-line utility,
aws-flow-ruby, also known as the runner. This utility
is used by AWS OpsWorks to run your workflow code. You can also use it to test your
SWF applications locally before you push them to the cloud.

The runner is configured using a JSON file that looks like this:

{
  "domains": [{
      "name": "BookingSample",
      "retention_in_days": 10
  }],

  "workflow_workers": [{
     "number_of_workers": 1,
     "domain": "BookingSample",
     "task_list": "workflow_tasklist"
  }],
 
  "activity_workers": [{
    "number_of_workers": 1,
    "domain": "BookingSample",
    "number_of_forks_per_worker": 5,
    "task_list": "activity_tasklist"
  }]
}

Go With the Flow
The new Ruby Flow layer type is available now and you can start using it today. To learn more
about it, take a look at the new OpsWorks section of the
AWS Flow Framework for Ruby User Guide.

Jeff;

Query Your EC2 Instances Using Tag and Attribute Filtering

September 2, 2014 9:05 am


As an Amazon Elastic Compute Cloud (EC2) user, you probably know just how simple and easy it is to launch EC2 instances
on an as-needed basis. Perhaps you got your start by manually launching an instance or two,
and later moved to a model where you launch instances through an AWS CloudFormation template,
Auto Scaling, or in Spot form.

Today we are launching an important new feature for the AWS Management Console. You can now find the instance or instances
that you are looking for by filtering on tags and attributes, with some advanced options including
inverse search, partial search, and regular expressions.

Instance Tags

Regardless of the manner in which you launch them, you probably want to track the role (development,
test, production, and so forth), internal owner, and other attributes of each instance. This
becomes especially important as your fleet grows to hundreds or thousands of instances. We have
supported tagging of EC2 instances (and other resources) for many years. As you
probably know already, you can add up to ten tags (name/value pairs) to many types of AWS resources.
While you can sort by tags to group like-tagged instances together, there's clearly room to do
even better! With today's launch, you can use the tags that you assign, along with the
instance attributes, to locate the instance or instances that you are looking for.
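
As a quick refresher on the tagging side, here is a minimal sketch that adds Owner and Mode tags to an instance using the AWS SDK for Java; the instance ID and tag values are placeholders:

import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.CreateTagsRequest;
import com.amazonaws.services.ec2.model.Tag;

public class TagInstanceExample {
    public static void main(String[] args) {
        AmazonEC2Client ec2 = new AmazonEC2Client();
        ec2.setRegion(Region.getRegion(Regions.US_WEST_2));

        // Attach Owner and Mode tags to a (placeholder) instance ID.
        ec2.createTags(new CreateTagsRequest()
            .withResources("i-0123abcd")
            .withTags(new Tag("Owner", "jeff"),
                      new Tag("Mode", "Production")));
    }
}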

Query With Tags & Attributes
As I was writing this post, I launched ten EC2 instances,
added Mode and Owner tags to each
one (supplementing the default Name tag), and then
configured the console to show the tags and their values:

The new filter box offers many options. I’ll do my best to show them all to you!

In the examples that follow, I will filter my instances using the tags that
I assigned to the instances. I’ll start with simple examples and work up to some more complex ones.
I can filter by keyword. Let’s say that I am looking for an instance and can only recall part of the
instance id (this turns out to be a very popular way to search). I enter the partial id (“2a27”) into the filter box and press Enter to find it:

Let’s say that I want to find all of the instances where I am listed as Owner. I click in the Filter box
for some guidance:

I select the Owner tag and select from among the values presented to me:

Here are the results:

I can add a second filter if I want to see only the instances where I am the owner and the Mode
is “Production”:

I can also filter by any of the attributes of the instance. For example, I can easily find all of the
instances that are in the Stopped state:

And I can, of course, combine this with a filter on a tag. I can find all of my stopped instances:

I can use an inverse search to find everyone else’s stopped instances (I simply prefix the value with an exclamation mark):

I can also use regular expressions to find instances owned by Kelly or Andy:

And I can do partial matches to compensate for inconsistent naming:

I can even filter by launch date to find instances that are newer or older than a particular
time:

Finally, the filter information is represented in the console URL so that you can bookmark your
filters or share them with your colleagues:
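
If you want to run the same kind of query programmatically, the DescribeInstances API accepts equivalent tag and attribute filters. Here is a minimal sketch using the AWS SDK for Java; the Owner value is a placeholder:

import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

public class FindInstancesExample {
    public static void main(String[] args) {
        AmazonEC2Client ec2 = new AmazonEC2Client();
        ec2.setRegion(Region.getRegion(Regions.US_WEST_2));

        // Find stopped instances that carry the tag Owner=jeff.
        DescribeInstancesRequest request = new DescribeInstancesRequest()
            .withFilters(new Filter("tag:Owner").withValues("jeff"),
                         new Filter("instance-state-name").withValues("stopped"));

        for (Reservation reservation : ec2.describeInstances(request).getReservations()) {
            for (Instance instance : reservation.getInstances()) {
                System.out.println(instance.getInstanceId());
            }
        }
    }
}

This is handy when you want to feed the results into a script rather than a bookmark.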

Filter Now
This feature is available now and you can start using it today. It works for EC2 instances now; we
expect to make it available for other types of EC2 resources before too long.

Jeff;

AWS Week in Review – August 25, 2014

September 2, 2014 8:21 am


Let’s take a quick look at what happened in AWS-land last week:






Monday, August 25
Tuesday, August 26
Wednesday, August 27
Thursday, August 28
Friday, August 29

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

New Resources APIs for the AWS SDK for Java

August 28, 2014 1:20 pm


We are launching a preview of a new, resource-style API model for the AWS SDK for Java. I will summarize
the preview here, and refer you to the AWS Java Blog for full information!

The new resource-oriented APIs are designed to be easier to understand and simpler to use. They
obviate much of the request-response verbosity present in the existing model and present a
view of AWS that is decidedly object-oriented. Instead of exposing all of the methods of the
service as part of a single class, the resource-style API includes multiple classes, each
of which represents a particular type of resource for the service. Each class includes the
methods needed to interact with the resource and with related resources of other types.
Code written to the new API will generally be shorter, cleaner, and easier to comprehend.

Here is the old-school way to retrieve an AWS Identity and Access Management (IAM) group using the
GetGroup function:

AmazonIdentityManagement iam = new AmazonIdentityManagementClient();
iam.setRegion(Region.getRegion(Regions.US_WEST_2));

GetGroupRequest getGroupRequest = new GetGroupRequest("NeedNewKeys");
GetGroupResult getGroupResult = iam.getGroup(getGroupRequest);

And here is the new way:

IdentityManagement iam = ServiceBuilder.forService(IdentityManagement.class)
  .withRegion(Region.getRegion(Regions.US_WEST_2))
  .build();

Group needNewKeys = iam.getGroup("NeedNewKeys");

The difference between the old and the new APIs becomes even more pronounced when more
complex operations are used. Compare the old-school code for marking an outdated
access key (oldKey) for an IAM user as inactive:

UpdateAccessKeyRequest updateAccessKeyRequest = new UpdateAccessKeyRequest()
  .withAccessKeyId(oldKey)
  .withUserName(user.getUserName())
  .withStatus(StatusType.Inactive);
iam.updateAccessKey(updateAccessKeyRequest);

With the new, streamlined code, the intent is a lot more obvious. There’s a lot less in the way
of setup code and the method is invoked on the object of interest instead of on the service:

oldKey.deactivate();

The new API is being launched in preview mode with support for
Amazon Elastic Compute Cloud (EC2), AWS Identity and Access Management (IAM), and Amazon Glacier. We plan to introduce resource APIs for
other services and other AWS SDKs in the future.

Jeff;

PS – To learn more about Resource APIs, read the full post on the AWS Java Development Blog.

Amazon Zocalo – Now Generally Available

August 27, 2014 8:37 am


Amazon Zocalo has been available in a Limited Preview since early July
(see my blog post,
Amazon Zocalo – Document Storage and Sharing for the Enterprise,
to learn more). During the
Limited Preview, many AWS users expressed interest in evaluating Zocalo and were
admitted into the Preview on a space-available basis.

Today we are making Amazon Zocalo generally available to all
AWS customers. You can sign up today and start using
Zocalo now. There’s a 30-day free trial (200 GB of storage per user for up to 50 users); after
that you pay $5 per user per month
(see the Zocalo Pricing page for more information).

As part of this move to general availability, we are also announcing
that AWS CloudTrail now records calls made to the Zocalo API. This
API is currently internal, but we plan to expose it in the
future. If you are interested in building applications that work
with the Zocalo API, please express your interest by emailing us
at zocalo-api-interest@amazon.com. We
are very interested in learning more about the kinds of applications
that you are thinking about building.

I have become a regular user of Zocalo, and also a big fan! I generally have between
5 and 10 blog post drafts under way at any given time. I write the first draft, upload
it to Zocalo, and share it with the Product Manager for initial review. We iterate on the
early drafts to smooth out any kinks, and then share it with a wider audience for
final review. When multiple reviewers provide feedback on the same document, Zocalo’s
Feedback tab lets me scan, summarize, and respond to the feedback quickly and
efficiently.

Jeff;

Now Shipping: CDN Raw Logs to S3 export

August 5, 2014 12:02 am


Although our real-time analytics API makes it extremely easy to process and get the data you need, sometimes you need to parse the data yourself.

Meet the newest feature of MaxCDN Insights, which allows you to save all access logs, in the format you want, to your own S3 bucket.

Every single response from our servers for your pull zone is now accessible for you to do with as you please. You can parse the raw data and build your own statistics and analytics dashboards, or parse it every day and extract very specific information for your own custom needs.

To use it, you will need an existing AWS profile and an S3 bucket.

Once you have that, you can enable raw log export to S3 in your settings in our Control Panel.


You also have the option to select the frequency of log exports into your bucket: intervals of 1 hour, 12 hours, 1 day, and 3 days are available.

Before enabling it, you also need to set a format string. It's a very powerful feature that can save space in your AWS account and simplify the parsing of your logs.

We recommend storing only the information you need rather than using all available variables.
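
Once logs start landing in your bucket, parsing them is straightforward. Here is a minimal sketch that lists and reads the exported objects using the AWS SDK for Java; the bucket name and prefix are placeholders, and if your logs are compressed you would wrap the stream accordingly:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ReadRawLogsExample {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3 = new AmazonS3Client();

        // List the exported log objects under a (placeholder) prefix.
        for (S3ObjectSummary summary :
                s3.listObjects("my-log-bucket", "raw-logs/").getObjectSummaries()) {
            S3Object object = s3.getObject("my-log-bucket", summary.getKey());
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(object.getObjectContent(), "UTF-8"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Each line is one access-log entry in the format string you chose.
                    System.out.println(line);
                }
            }
        }
    }
}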



No status quo in clouds

August 4, 2014 12:08 pm


This article was authored by Jouko Ahvenainen, and was originally posted on telecomasia.net.

Amazon (through Amazon Web Services, AWS) is the leading enterprise cloud provider. Amazon surprised the market with a larger-than-expected loss, and AWS has had a big impact on the result. What does it mean that the leading cloud company cannot run a profitable business? … [visit site to read more]
