Posts tagged ‘management’

Akamai University: SSL Certificate Security and Trust

October 7, 2014 4:20 am


Akamai Edge 2014 continues today with the second day of Akamai University and API Boot camp. To coincide with this, I’m running three security lessons that are part of an upcoming video series. This is the final installment, and was written by Meg Grady-Troia.


SSL Certificate Security and Trust

The Internet is built on a foundation of trust, from machine to machine, extended across the entire surface of the globe. Trust is shared across the Internet in many ways; the SSL certificate hierarchy is only one of them, albeit a pervasive one. The SSL certificate system was designed so that trusted parties can have private communications over the public Internet. SSL certificates are a critical piece of the Internet’s trust architecture, and many protocols exist to support secure certificate handling.

What is a Certificate?

A certificate is the container for four pieces of information your web browser (or operating system) needs to make a secure connection to the server hosting the website you wish to visit.
Those four pieces are:

1. An “Issued To:” field that specifies the full name and address of the entity that owns the domain you’re visiting (including the IP address & domain name you’re visiting, and the brick & mortar contact for the owning entity).

2. A validity period: The time period (start date and end date) for which that certificate should be considered valid.

3. An “Issued From:” field that contains the signature of a Certificate Authority, that acts like a notary public would on a legal document: a third party witness.

4. A public key: The shareable half of the keypair that will be used by the server to initiate the encryption of data that flows between the website and your browser.

Your browser-client uses the “issued to” data to check that it has connected to the domain it expected. It uses the certificate authority and expiry to verify that it trusts the domain. It uses the public key from the certificate to continue the SSL handshake that will allow all further communication between you and the website to be encrypted.
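If you want to inspect these fields yourself, here is a minimal sketch using Python’s standard library. The hostname is just an example; note that getpeercert() exposes the subject, issuer, and validity period directly, while reading the public key requires the DER-encoded certificate plus a parser such as the cryptography package.

    # Minimal sketch: pull the certificate fields described above from a live server.
    import socket
    import ssl

    hostname = "www.akamai.com"  # illustrative target
    context = ssl.create_default_context()  # uses the platform's trusted CA store

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    print("Issued To: ", dict(item[0] for item in cert["subject"]))  # owning entity
    print("Issued From:", dict(item[0] for item in cert["issuer"]))  # signing CA
    print("Valid from: ", cert["notBefore"])  # validity period start
    print("Valid until:", cert["notAfter"])   # validity period end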

How do Certificates Work?

Think of SSL certificates as the Internet-equivalent of the diploma granted to a student when they graduate from a school: it may hold value with people who know the recipient but not the school, and it may hold value with people who know the reputation of the school, but not the recipient. The diploma is not a trust currency in itself, simply an indication of an existing, authenticated relationship.

There are a lot of certificate authorities in the world, and they may be operated by governments, companies, or even individuals (and they range in credibility just like colleges, from diploma mills to prestigious institutions). This is possible because CAs are initially self-signing: they simply appoint themselves as trustworthy third parties. The value of a CA’s imprimatur depends on its reputation — both its past behavior with other certificates and its relationships with certificate holders and web browser developers — which is how its signatures gain value.

A single web domain — say, www.akamai.com — may have any number of certificates associated with it, and there are many kinds of special certificates online to account for specific use cases.

Some of the most common are:

• Multi-Domain (including Subject Alternative Names (SAN) & Wildcard) Certificates: These certificates cover multiple hostnames, subdomains, or IP addresses, and allow end-users like you to be redirected to the same application from multiple hostnames.
• Validated (including Extended Validation (EV), Organization Validation (OV), and Domain Validation (DV)) Certificates: These certificates require the signing CA to perform some additional identity validation after their standard process, either for an individual, an organization, or a domain. EV Certificates do not offer additional security for your particular session on a website, but they are often considered to be of higher trustworthiness.

When you initiate a private exchange with a web application — for example, your bank’s portal so that you can check your latest statement — your browser-client will request an encrypted session and the server you’re connecting to will respond by presenting its certificate back to your browser to authenticate itself & initialize the negotiations required during the SSL handshake. Your web browser compares that certificate to its certificate store — a list of CAs that the developers of your web browser considered trustworthy — to make sure that the certificate is both signed by a trusted CA and still valid.

Certificates have a longer shelf life than a carton of milk, but because the Internet is a dynamic place, the stated period of validity on a Certificate may end up being a longer period than the certified entity wishes to continue to use it. Certificates can easily become erroneous or compromised for any number of reasons, including when an entity’s contact information changes, or after a successful attack against that entity. You wouldn’t want your new front-door lock to open for the old, compromised key as well as the new one, right?

Because of that possibility, the certificate check performed by your browser-client may also include a status call to see if that specific certificate has been revoked — that is, been deemed invalid by the CA or owning entity. While there are several ways to check if a certificate has been revoked, all of them take extra time & effort during the SSL handshake. Not every browser or operating system — particularly older or slower ones — will perform any kind of certificate revocation check.
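As an illustration only (not how any particular browser implements it), here is a rough sketch of one revocation-check mechanism, OCSP, using the Python cryptography and requests packages. It assumes you already have the server certificate and its issuer’s certificate saved as PEM files:

    # Hedged sketch: ask the CA's OCSP responder whether a certificate is revoked.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID
    import requests

    cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # The certificate itself advertises its CA's OCSP responder URL.
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    ocsp_url = next(d.access_location.value for d in aia.value
                    if d.access_method == AuthorityInformationAccessOID.OCSP)

    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    resp = requests.post(ocsp_url, data=req.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})

    # OCSPCertStatus.GOOD means the CA still vouches for this certificate.
    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)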

How do Certificates Facilitate Trust Relationships?

Once you and your browser have decided to trust the presented certificate, your browser-client may continue the SSL handshake by providing a public key for the server to use (your browser, meanwhile, uses the public key embedded in the certificate) while the two sides negotiate additional settings for your private session. While a certificate will always contain the same four critical pieces of information, newer browser-clients allow for additional controls during the session negotiation process, including ephemeral keys, advanced hash and compression functions, and other security developments. This process of certificate check, key exchange, and session negotiation, in a direct reference to the ways we demonstrate trust in real life, is called an SSL handshake.
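Here is a small sketch of what those negotiated settings look like from the client side, again using Python’s standard library (the hostname is illustrative):

    # After the handshake completes, the socket exposes the parameters
    # the two sides negotiated for this session.
    import socket
    import ssl

    hostname = "www.akamai.com"
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
            print("Cipher:  ", tls.cipher())   # (cipher name, protocol, key bits)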

How does Akamai Handle SSL Certificates?

Akamai has relationships with several Certificate Authorities, and will use one of its preferred CAs to sign customer certificates if a customer does not request a specific CA when they have Akamai provision an SSL Certificate for them. These preferred CAs are widely-used CAs that are generally recognized by major browsers and operating systems.

Akamai generates the keypairs for all of its customers’ SSL certificates for traffic flowing over Akamai networks, using their designated information and preferred cipher suites and algorithms, so that only the public key ever has to leave the protections of Akamai’s networks. By never sending private keys across the Internet between customer and Akamai, we help maintain the many layers of protection needed around the SSL Certificate’s private key, which could otherwise be used to decrypt end-user session data.
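This isn’t Akamai’s internal tooling, but the underlying operation looks roughly like the following sketch with the Python cryptography package: generate a keypair, then send out only a certificate signing request (CSR), which carries the public key and identifying information, while the private key never leaves the secure environment. The common name is a placeholder.

    # Sketch: generate a keypair and a CSR; only the CSR needs to travel
    # to the CA for signing, so the private key stays put.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
        .sign(key, hashes.SHA256())
    )

    # Only this leaves the protected environment:
    print(csr.public_bytes(serialization.Encoding.PEM).decode())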

Akamai has relationships with some CAs that allow us to sign certificates as an Intermediate CA. In these cases, the chain of trust is extended by additional links, with the originating — or root — certificate authority granting an intermediary the right to sign certificates on its behalf. This process of tiered certificate authorities signing successive certificates, all of which are presented to the browser-client as a bundle, is often called chaining, just like linking daisies together into a chain.

How are SSL Certificates Vulnerable?

Certificates have a number of protections around them, including file types, cipher suites and algorithms, key usage, procurement and handling procedures, unique identifiers, and other data that are all part of a commonly-accepted standard that helps both humans and machines protect, identify, and properly use SSL certificates. That common standard is called X.509; it is implemented by common SSL software such as OpenSSL and underpins protocols like TLS.

It’s a common adage in Information Security that complexity in a system increases its risk of accidents, and the certificate hierarchy is byzantine, indeed. There are all sorts of ways that SSL Certificates, the private keys affiliated with SSL Certificates, and your private sessions can still be compromised.

Many organizations on the Internet — including Akamai — are considering a number of possibilities to fortify the SSL certificate structure. Some of the possibilities aim to make the current certificate process more transparent, while others couple the certificate process to other areas of trusted computing, like DNS registries. Each of these potential revisions presents some gains and some losses for end-users and certified entities. Newer browsers and operating systems may support additional controls around the encryption for your session on a website, and updated versions of the X.509 standard and TLS support newer models of authentication and certificate protections.

Every party in the certificate hierarchy is responsible for some aspects of the chain’s security. All of the certificate process I’ve just explained gets conveyed to you, the end user, by the small lock that shows up in your browser’s navigation bar when you’re browsing a website via HTTPS. That lock icon is the simplest symbol of the SSL Certificate trust chain there is, encompassing all the vulnerable infelicities of the system and all of the hope we hold for private communications over the public Internet.

Akamai University: FedRAMP 101

October 7, 2014 3:54 am


Akamai Edge 2014 continues today with the second day of Akamai University and API Boot camp. To coincide with this, I’m running three security lessons that are part of an upcoming video series. This is the second of three installments, and was written by Akamai program managers James Salerno and Dan Philpott.


FedRAMP 101

This lesson is about FedRAMP, why it was created and why it’s become an important part of Akamai’s security compliance process.

Akamai complies with many industry standards and regulations such as Sarbanes-Oxley (SOX), the PCI Data Security Standard and ISO. FedRAMP — the acronym for the Federal Risk and Authorization Management Program — is one of the most recent pieces of our compliance program.

For the US Federal Government to operate a system, the system must be authorized.

For cloud computing, FedRAMP is the mechanism the government uses to grant provisional authorizations to operate (PATOs).

The FedRAMP program is organized by the General Services Administration, which handles most of the project management for the authorization process.

However, the actual PATOs are issued by what’s called the JAB, or Joint Authorization Board. The JAB is made up of the CIOs from the Department of Homeland Security, Department of Defense, and General Services Administration.

Akamai started preparing for its participation in FedRAMP two years ago. After a year and a half of preparation, the JAB granted Akamai its PATO.

The FedRAMP authorization process requires Akamai — as a Cloud Service Provider — to document a variety of controls we use to secure the Akamai FedRAMP-scoped systems.

We cover controls detailing our network security, network scanning, host hardening, monitoring, physical security and many more aspects of security.

From there, our controls are tested. Unlike some security assessments, which are simply an annual check, FedRAMP requires continuous monitoring.

If we spot a problem, we are required to fix it. It’s through this process that we assure FedRAMP that our system meets the goals of the program. When we submit our assessment to the JAB, it reviews it and asks questions. Akamai provides answers, and at the conclusion the JAB makes its authorization decision.

The U.S. General Services Administration lists the following goals and benefits of FedRAMP on its website:

Goals:
–Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations
–Increase confidence in security of cloud solutions
–Achieve consistent security authorizations using a baseline set of agreed upon standards to be used for Cloud product approval in or outside of FedRAMP
–Ensure consistent application of existing security practices
–Increase confidence in security assessments
–Increase automation and near real-time data for continuous monitoring

Benefits:
–Increases re-use of existing security assessments across agencies
–Saves significant cost, time and resources – “do once, use many times”
–Improves real-time security visibility
–Provides a uniform approach to risk-based management
–Enhances transparency between government and cloud service providers (CSPs)
–Improves the trustworthiness, reliability, consistency, and quality of the Federal security authorization process

That concludes our lesson for today.

AOL Launches A Data Management Platform, Unveils TV Ad Targeting Tech

September 29, 2014 2:34 pm


Big Data just got much bigger at AOL. At its second annual “Programmatic Upfront” event in New York on Monday, AOL announced the launch of a data management platform (DMP) with multi-touch attribution using technology from Convertro, the consumer tracking platform and attribution tech firm AOL acquired in May.

Coming Soon: New Security Whiteboard Videos

September 23, 2014 3:40 am


Last year, we released a bunch of videos containing security whiteboard lessons on a variety of topics. This Thursday we shoot four new episodes.

Below is a preview of each episode.
  • To see previous security whiteboard videos, go here and here.

Incident Management 101
At every company, Akamai included, incidents happen daily. Despite strong controls, it’s inevitable that problems will arise when — in our case — so much content is being handled, processed and distributed within Akamai and on behalf of customers. Bill Brenner will walk viewers through the incident management process Akamai uses to minimize problems and maintain security.

Vulnerability Assessment vs. Penetration Testing
Vulnerability assessment and pen testing both deal with finding and fixing security holes. But they are not the same thing. Patrick Laverty will walk viewers through the differences between the two.

FedRAMP 101
James Salerno will tell viewers about FedRAMP — why it was created and why it’s become an important part of Akamai’s security and compliance process.

SSL Certificate Security and Trust
Meg Grady-Troia will teach viewers about the SSL certificate system and some of its strengths and weaknesses.

Next month, we’ll shoot a fifth video, where CSO Andy Ellis walks through one of the data centers housing Akamai servers and explains the myriad security procedures in place to protect those deployments.

Akamai SOC + PLX SOC + Akamai Cloud Security Solutions = Complete Peace of Mind

September 18, 2014 8:00 am


Over the last five months, the services and support management teams from Akamai have been working hard on integrating the Akamai and Prolexic Security Operations Centers (SOCs). Given the progress we’ve made along the way, we think it’s timely to talk about how this effort from both companies can help protect our customers against the ever-changing threat landscape.

Renny addressed some of the complementary areas of product offerings between Akamai and Prolexic. In many ways, the same is true of the Prolexic and Akamai SOCs. While both companies have significant services and support organizations, the Akamai teams focused heavily on configuring the automated capabilities of Akamai’s security products to proactively mitigate attacks, whereas the Prolexic SOC was renowned for its ability to quickly respond to DDoS attacks when engaged by a customer. Pulling these teams together, pairing automatic protection with human mitigation, has allowed us to create a powerhouse of security expertise with a gamut of skills, ranging from emergency attack support to implementation and integration to consulting on a suite of security products. Combining this with the large service delivery teams focused on other Akamai products, such as site acceleration and media services, makes it easy and fast for customers to ensure they are getting the most value from their Akamai relationship. Key areas of the integration include:

  • Organization: Centralizing the global SOC under a single leader has helped us focus on building SOC expertise while developing consistent processes and workflows, all of which will help provide a high-quality support experience to the customers.
  • Geographical Expansion: While we’re building new SOCs in Europe and Asia to serve the local needs of customers in their local languages, we’re simultaneously growing our 24×7 operations in Florida. Our new centers in Europe and Asia will promote a hybrid ‘follow-the-sun’ model, allowing us to effectively combine ‘local’ touch with ‘centralized efficiency’.
  • Platform: Prolexic and Akamai historically followed different processes when it comes to managing security incidents. We’re picking the best of both worlds by consolidating SOC workflows and applications for ticket and alert management. By doing this, we’re ensuring standard communication protocols, incident management, audit trails, and the operationalizing of routine activities. Having a standard platform globally will help the SOC to prioritize the different activities (routine, proactive, and threat mitigation) while promoting situational awareness throughout the company.
  • Tools: In order to support the gamut of security products and to help the SOC personnel function as effectively as possible, we’re doubling down on our investment in SOC tools. This includes newer and better capabilities to isolate attacks, generate alerts, gather logs, and more.
  • Operational Metrics: Finally, we’re in the midst of developing a core set of metrics by which we can manage and measure the performance and effectiveness of all the SOCs. This includes separate but related metrics for all of the SOCs activities – provisioning, project management, incident management, customer satisfaction and proactive support.

In summary, this is a very exciting time for us as the leadership team managing security services. While the security landscape is ever-changing and attacks are becoming more sophisticated and damaging, we’re confident that the changes we’ve put in place will enable us to protect our customers more effectively by combining our industry-leading products with our world-class people and expertise. And, by the way, we’re hiring security professionals worldwide – if you want to work in a world-class organization focused on Internet security, please check out our open positions.

This is a post from Mani Sundaram, Patrice Boffa, and Roger Barranco, leaders of the Global Service Delivery team.

Adaptive Media Launches Video Ad Management Platform

September 17, 2014 10:18 am


Adaptive Media, a supply-side platform (SSP), this week announced the launch of a video ad management platform for publishers, dubbed Media Graph.

Kaltura Wins Best TV Everywhere Award at IBC 2014

September 13, 2014 7:11 am


What a night in Amsterdam!

Kaltura is excited to announce that it won the award for Best TV Everywhere/Multi-Screen Video at IBC 2014! The Kaltura OTT TV team was acknowledged for the KabelKiosk white label IPTV offering (meinFernsehen), a sophisticated second screen deployment for Eutelsat – one of the leading satellite operators in the world. In this project, Kaltura OTT TV allows Eutelsat’s 300 affiliate companies to provide a second screen internet-based TV service to more than 3.5 million German households.

This award comes on the heels of the Tvinci acquisition in May 2014. Tvinci, a leading paid OTT TV company, was acquired by Kaltura to create the most comprehensive end-to-end OTT TV solution. This is the second time the Tvinci team has won a CSI award at IBC, and it’s a huge validation of our technology and the exceptional TV experience it offers to users.

The KabelKiosk project brings to life the three pillars of Kaltura OTT TV:

1. Time-Shifted TV – the ability to pause live shows and catch up on thousands of shows aired on Eutelsat’s linear channels.

2. Engagement Tools – users can create their personal profiles, allowing them to get a personalized social feed that includes updates on what their friends are watching, liking, sharing and commenting on. This is done by utilizing Kaltura’s household management capabilities that allow service providers and telcos to manage multiple user profiles within a single household.

3. Metadata-Driven Discovery – our strong EPG management capabilities make a huge difference for service providers and telcos because all the linear TV shows are automatically indexed, which creates a massive VOD library based on live-channel catch-up. In addition, Kaltura’s powerful recommendation engine always suggests the most relevant content so users can rent or buy additional videos.

If you want to check out the KabelKiosk application in action and hear about OTT3, the next generation of the platform, please visit us at the Kaltura booth at IBC (Hall 3, Stand C67). In addition to very cool demos, we also serve delicious coffee.

See you on the floor!

High power density data centers: 3 essential design elements

September 12, 2014 1:56 pm


When it comes to high power density data centers, all are not created equal. Many customers, particularly those focused on ad tech and big data analytics, are specifically looking for colocation space that can support high power densities of 12+kW per rack. Here at Internap, we have several customers that need at least 17kW per rack, which requires significant air flow management, temperature control and electricity. To put this in perspective, 17kW equates to about 60,000 BTUs per hour, and a gas grill with 60,000 BTUs can cook a pretty good steak in about five minutes.
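For anyone who wants to check the arithmetic, the conversion is simple; here is a quick sketch in Python (1 kW is roughly 3,412 BTU/hr):

    # Rack power draw in kW converted to heat output in BTU/hr.
    KW_TO_BTU_PER_HR = 3412.14

    for kw in (12, 17):
        print(f"{kw} kW = {kw * KW_TO_BTU_PER_HR:,.0f} BTU/hr")
    # 12 kW = 40,946 BTU/hr
    # 17 kW = 58,006 BTU/hr  (the "about 60,000 BTUs" cited above)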

Delivering super high power density that meets customer demands and ensures tolerable working conditions requires careful planning. When designing high power density data centers, there are three essential elements to consider.

1. Hot aisle vs. cold aisle containment.
To effectively separate hot and cold air and keep equipment cool, data centers use either hot aisle or cold aisle containment. With cold aisle containment, all the space outside the enclosed cold aisle is considered hot aisle, and enough cold air must be pumped across the front side of the servers to keep them cool. However, the hot aisles can become too hot – over 90 degrees – which creates intolerable working conditions for customers who need to access their equipment.

As power densities rise, temperature control becomes even more important. Using true hot aisle containment instead of cold aisle containment creates better working conditions for customers and maintains a reasonable temperature across the entire data center floor. With hot aisle containment, there’s still heat coming from the racks, but you only have to deal with the heat coming from the rack you’re working on at the time, instead of getting roasted by all of them at once. This approach helps avoid the “walking up into the attic” effect for data center technicians.

2. Super resilient cooling systems.
In a typical data center, if the computer room air conditioning (CRAC) units go offline, you have about 10-15 minutes to get the chillers restarted before temperatures start to rise significantly. But when equipment is giving off 36,000 BTUs per hour, you don’t have that luxury. To avoid an oven-like atmosphere, cooling systems must be ultra-resilient and designed with concurrent maintainability, including +1 chillers and separate loops for the entire cooling infrastructure.

Hot aisle containment also makes a cooling outage less painful because the entire data center floor becomes a cool air pocket that can be sucked through the machines, giving you a few extra minutes before things start getting – well, sweaty.

3. Electrical distribution.
Data centers must be designed to support high density power from day one. We have a mobile analytics customer that uses nine breaker positions in a single footprint. You can’t simply add more breaker panels when customers need them; you have to plan ahead to accommodate future breaker requests from the start. Also, breaker positions are used for primary and redundant circuits – more customers than ever are requesting redundant power, so this should also be taken into consideration.

The flexibility of modular design
Internap’s high density data centers are flexible enough to work with custom cabinets if the customer prefers to use their own. As long as the cabinet can be attached to the ceiling and connected to the return air plenum, we can meet the customers’ power density requirements.

Data centers designed to support high power density allow companies to get more out of their colocation footprint. The ability to use rack space more efficiently and avoid wasted space can help address changing needs and save money in the long run. But be sure to choose a data center originally designed to accommodate high power density – otherwise you and your equipment may have trouble keeping cool.

Download the white paper, Future-Proofing Your Data Center Investment with Scalable Density, to learn more about the benefits of high power density data centers.

The post High power density data centers: 3 essential design elements appeared first on Internap.

G&D and Verimatrix demonstrate mobile pay TV DRM for smartphones

September 12, 2014 12:00 am


At IBC 2014, Giesecke & Devrient (G&D) and Verimatrix will showcase a digital rights management (DRM) solution for mobile video services that incorporates the high security standard of pay TV.

Commando.io: A Geekdom Startup That Simplifies Server Management

September 11, 2014 8:00 am


Dropbox simplified online storage. GitHub simplified revision control. And Commando.io, a member of the Rackspace Startup Program, is looking to simplify server management.

Using the Rackspace Managed Cloud, Commando.io offers its customers a simpler way to manage servers online by making it easy to execute commands on groups of servers from a beautiful web interface. The mantra inside Commando.io is simplicity. Commando.io empowers everybody in a company to interact with servers via an easy-to-use web interface, including less technical employees in roles like support and marketing.

“Others in the DevOps space are focused on ever-increasing complexity and command-line systems,” explains Justin Keller, founder of Commando.io. “Our approach is a beautiful web interface with no external dependencies (agents).”

Keller says that other solutions in the space often have a steep learning curve.

“Going with Chef or Puppet requires a large time investment, and companies have to send developers off to Chef School for two weeks to learn their systems,” he says. “We don’t ask our users to go to school. They can sign up and start managing servers instantly with their existing stack and scripts.”
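To make the task concrete, here is a sketch (not Commando.io’s implementation) of the chore it abstracts away: running one command across a group of servers over SSH, written with the Python paramiko library. Hostnames and the username are placeholders.

    # Run one command on each server in a group over SSH and print the output.
    import paramiko

    SERVERS = ["web1.example.com", "web2.example.com"]
    COMMAND = "uptime"

    for host in SERVERS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="deploy")  # assumes key-based auth
        _, stdout, _ = client.exec_command(COMMAND)
        print(f"{host}: {stdout.read().decode().strip()}")
        client.close()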


Commando.io walkthrough from Commando.io on Vimeo.

The genesis for Commando.io came from solving a pain point at Keller’s previous startup, NodeSocket, a Platform as a Service (PaaS) for hosting node.js applications.

“Commando.io was the pivot for NodeSocket. We now have thousands of accounts created in over 80 different countries, and by the end of 2014 our goal is to be profitable,” notes Keller.

With Commando.io poised for growth, the team has taken up headquarters inside Rackspace’s Geekdom San Francisco, a collaborative workspace bringing great people together south of Market Street.

“We really enjoy the relationships with other companies in the space, since they all tend to focus on developer tools or infrastructure,” says Keller. “The office space is also conveniently located in SOMA, and offers great amenities. Finally, they host great events usually a few times a week, from Docker, to tech talks, to investor pitches.”

The Rackspace Startup Program was there to help Commando.io use the Rackspace Managed Cloud to simplify server management. Drop the Startup Team a note and let us know if you need help building your startup.

Brightcove Launches Perform to Redefine Video Playback

September 11, 2014 3:36 am


Today at the annual IBC conference, we are excited to announce the launch of our newest product, Perform, a high performance service for creating and managing video player experiences that redefines video playback across devices. A groundbreaking innovation, Perform enables video publishers to improve their speed to market and create high-quality, immersive video experiences across devices.

Perform powers cross-platform video playback with a full set of management APIs, the fastest performance optimization services, and the leading HTML5-first Brightcove Player. It supports HTTP Live Streaming (HLS) video playback across devices, along with analytics integrations, content protection, and both server-side and client-side ad insertion. Additionally, Perform’s plugin architecture allows for control and management of playback features in a discrete and extensible manner.

Loading up to 70% faster than competitive players, including YouTube’s, the underlying Brightcove video player in Perform is the fastest loading player on the market today, according to head-to-head comparisons. The player supports HLS across all major mobile and desktop platforms for simplified workflows and uniform, high quality, cross-platform user experiences. The new Brightcove video player will also be available soon as part of Brightcove’s flagship Video Cloud online video platform.

Developer-friendly and built for the future, Perform is designed to serve the needs of the world’s leading video publishers by plugging into bespoke workflows and working seamlessly with other modular services from Brightcove, such as the Zencoder cloud transcoding and Once server-side ad insertion platforms. Additionally, Perform integrates with the Brightcove Once UX product to provide a powerful hybrid ad solution that combines the advantages of a comprehensive server-side solution with a rich client-side user experience. Utilizing the two products in conjunction enables publishers to optimize monetization of ad-supported video and generate more views and more ad completions while avoiding the challenge posed by the growing popularity of ad blockers.

As the first truly enterprise-grade player available as a standalone service, Perform is a powerful addition to the Brightcove family of products and a key driver in our ongoing mission to revolutionize the way video is experienced on every screen. Visit the website for more information!

CBS Content With Upfront Performance And Current VOD Policy For TV Shows

September 10, 2014 1:13 pm


TV networks’ weak upfront advertising market was a mixed bag for the business — but not all that bad for CBS, according to Les Moonves, president/chief executive officer for CBS Corp. In-season TV shows for video on demand may not be in the cards, but management is happy about the network’s upfront performance.

Vindico Offers Viewability Measurement Tech To Publishers For Free

September 10, 2014 12:35 pm


In a bid to help boost the quality of digital video advertising, video ad management platform Vindico on Wednesday announced that it now allows digital publishers to license its viewability measurement technology, Adtricity, for free.

Does A Hosted Virtualized Environment Require New Tools?

September 10, 2014 8:00 am


Many enterprises have invested millions of dollars over the years in IT management, monitoring and automation solutions for their data centers. So a natural question that arises when considering migration of workloads to hosted environments is around management tools. What new capabilities will be required? What new skills will our organization need? Is our existing toolset extensible at all?

The reality is that with the right environment and service provider, workloads and virtual machines in hosted environments can be managed with the same tools being used in the on-premise customer data center. This is particularly true for enterprises that are running virtualized workloads in a VMware environment. In fact, with the right service provider, no new applications or automation tools are required to manage hosted workloads.

Enterprises running VMware workloads that want to leverage their existing toolsets need to look for a service provider that offers the following capabilities:

  1. Hosted VMware environments – first, the service provider must offer customers the ability to run VMware virtualized workloads in their hosted environments.
  2. VMware vCenter Services – next, the service provider must offer the ability to manage hosted virtualized workloads using the VMware vCenter Server Management Console.
  3. Access to VMware vSphere APIs – finally, the service provider must also expose the native VMware vSphere APIs to the customer to allow the connection of any compatible VMware or third-party tools (see the sketch below).
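As a rough illustration of that third point, the snippet below uses pyVmomi, VMware’s Python SDK for the vSphere API, to connect to a vCenter endpoint and list virtual machines. The same client code works whether vCenter runs on-premise or in a provider’s hosted environment; host and credentials are placeholders.

    # Sketch: connect to a (hosted or on-premise) vCenter via the vSphere
    # API and enumerate VMs.
    import ssl

    from pyVim.connect import Disconnect, SmartConnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # demo only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=context)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], recursive=True)

    for vm in view.view:
        print(vm.name, vm.runtime.powerState)

    view.Destroy()
    Disconnect(si)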

By leveraging the VMware vCenter Server management console across on-premise and hosted VMware environments, enterprises are able to enjoy benefits in the following areas:

Resource Management and Performance Monitoring – by leveraging hosted vCenter services, enterprises can manage and schedule resources as if the hosted environment were an extension of the customer data center. Host profiles, configurations, and settings can be used across on-premise and hosted environments. In addition, resource allocation rules for CPU, memory, disk and network can be applied across both environments, and common alerts and notifications can be configured.

Process and Workflow Automation – by leveraging hosted VMware vSphere APIs, organizations that currently use VMware vCenter Orchestrator can extend their existing workflows and scripts to workloads running in a hosted VMware environment. This applies not just to out-of-the-box VMware workflows, but also custom scripts and workflows developed by IT administrators.

Extensibility of Existing Applications – with access to hosted vCenter APIs, existing third party and custom applications and scripts can be used with workloads in the hosted VMware environment. Many enterprises rely on third party applications in the VMware partner ecosystem that integrate with vCenter for capacity management, business continuity, performance monitoring and other capabilities. By exposing the same APIs used to manage on-premise virtualized workloads, these same applications can be used for hosted workloads as well. For example, businesses are able to connect tools like VMware vCenter Operations Manager (vCOps) to increase visibility into the environment through analytics, as well as assist in capacity and configuration management.

Because no new tools or capabilities are required to manage the hosted VMware environment, enterprises will also find that they can continue to leverage existing IT operations and management skills. By using the right service provider and hosted vCenter services, enterprises can seamlessly manage their on-premise and hosted VMware environments through their existing tools, solutions, processes and people.

This is the third in a series of posts exploring the IT governance and management implications of migration to hosted VMware environments. Stay tuned for our next post featuring a case study on one enterprise that decided to migrate to a hosted virtualized environment.

Expanding Our Partner Network With A Master Agent Program

September 8, 2014 9:00 am


This year, we’ve doubled down on helping our partners harness the power of the Managed Cloud and Fanatical Support for their clients. We have worked to empower our partners to offer their customers Rackspace’s superior cloud infrastructure and our suite of managed services that eliminate the need for their end customers to worry about the resources and management required to do it themselves.

As part of our continued effort to better serve our partners, we’re expanding our Rackspace Partner Network to include a new Master Agent/Agent Program, a program through which a new segment of partners can offer Rackspace Managed Cloud and award-winning Fanatical Support to their end customers.

Master Agents and Agents have historically been a key channel for reselling of traditional telco services like telephony and connectivity. As the industry evolves to embrace the cloud, these same Master Agents and Agents have proven to be fast adopters of cloud technologies with clients that are seeking out options to transform their businesses to the cloud. Through this new partner program, Rackspace will provide the highest level of support to ensure that an agent’s clients enjoy a smooth cloud implementation and support experience, which ultimately will enable agents to focus on building their business and engaging with clients.

Partners that join the Master Agent/Agent Program will have the opportunity to leverage a number of incentives, including market competitive compensation, pre- and post-sale resources, training and sales enablement and more.

We’re here to help you bring your clients into the Managed Cloud era and offer them the high-touch support they crave.

For more information on the new Rackspace Master Agent/Agent Program, email us at partnermarketing@rackspace.com.

Next-gen subscriber management: Helping operators monetize the TV Everywhere landscape

September 4, 2014 10:45 am


The explosion of video-enabled, Internet-connected devices means the global pay-TV market is undergoing a significant evolution; it is predicted that by 2019 there will be 1.1 billion pay-TV subscribers.* With the rise of new platforms, devices and distribution models, it is becoming a strategic imperative for operators to embrace the TV Everywhere vision. Behind the scenes many are evaluating how to align subscriber provisioning, billing and payment whilst creating compelling experiences for an increasingly diverse audience.

As a result, operators are facing three distinct challenges in the new pay-TV landscape:

  • Pay-TV Everywhere: Extending services across multiple devices, and allowing consumers to access and pay for the content they want, anytime and anywhere, is proving to be a major challenge for pay-TV platforms, Cable/MSOs and VOD service providers.
  • Platform Interoperability: Combining internal and external infrastructure to deliver pay-TV services across as many distribution methods as possible, whilst integrating various delivery and payment platforms to ensure content flows and data is processed seamlessly, is not an easy feat.
  • Payment Anywhere: With more content available than ever before, consumers expect to purchase and subscribe to the services they want, no matter which payment option is preferred. As pay-TV innovators expand across geographic boundaries, they must contend with taking payments in 186 international currencies, from 4 major global credit cards and dozens of e-wallets, as well as the support for local payments.

What pay-TV operators need to overcome these challenges is a next generation subscriber management system (SMS) — a system that provides centralised multi-device subscription management, systems interoperability and flexible payment services across multiple technical platforms to enable the delivery of compelling TV Everywhere solutions in a highly productised way with a quick time to market.

First and foremost, a next generation SMS will provide streamlined access for consumers across any device used to sign up for services in any location. The system will manage different device profiles, limitations and user authentication practices, as well as validate billing and process payment across a diverse range of debit, credit and online payment services.

Additionally, the SMS will efficiently interact with a combination of technologies, data exchange standards, APIs and pay-TV ecosystem platforms such as Conditional Access (CA), Digital Rights Management (DRM) and Online Video Players. It will act as a bridge between different platforms to perform the necessary transformational steps from customer registration to content viewing, and will enable elements such as subscriber signup, billing and payment processing to be agnostic of underlying technology restrictions.

Finally, a next generation SMS must support international payments options. It will offer payment methods in hundreds of currencies worldwide, including pre and post pay models, credit and debit card processing, direct debits, and local payment options. It will also have mobile phone operator integration, cash economy options powered via voucher, and white-label and branded e-wallets. With the click of a button, the SMS will aim to allow the customer to be exposed to a world of content using a payment method convenient to them.

Ultimately, a next generation SMS will enable pay-TV operators and other media companies to truly monetize the TV Everywhere experience, overcoming the three unique industry problems of pay-TV everywhere, platform interoperability and payment anywhere. For more information on how to overcome the challenges of subscriber management in the TV Everywhere landscape, download the latest PayWizard whitepaper ‘Pay-TV on every device and any platform’ here.
* Source: ABI Research, Digital TV Research


Jamie Mackinlay is Commercial Director at PayWizard, where he’s responsible for sales, strategic planning and managing key client relationships.

What a Broken Arm Teaches Us About Incident Response

August 28, 2014 1:18 pm


I originally wrote this for CSOonline’s Salted Hash blog in 2011. But given all my focus on incident management of late, a re-share seems appropriate.

You might find it weird that I’d find a teachable infosec moment in my son breaking his arm. But he did do it at a security meet-up, after all.

Let me explain:

On the last Saturday of October, we drove an hour north to Nottingham N.H. for an outdoor gathering of some friends in the security industry (the #NHInfoSecTweetup, to be specific).

The day was already not going to plan. A freak October snowstorm was bearing down on New England and when we got to the campground it was freezing and gray. Son number-one got out of the car and puked in the parking lot, a victim of car sickness. Within five minutes, we’d be making a hasty exit from the park for another reason.

Son number-two was delighted to find they had a playground, and ran for the monkeybars. Before I could finish introducing myself to everyone there, he slipped and landed on his wrist, breaking bones in two places.

We spent the afternoon at Exeter Hospital and the staff was terrific. They quietly moved Duncan to the front of the line (you should never leave an 8-year-old sitting in agony, after all) and got him x-rayed. They had to take him to the operating room to re-set the bones and now he’s walking around with an enormous splint on his arm.

We left the hospital after 5 p.m. and drove the hour or so home in near-whiteout conditions — a downright surrealistic scenario for New England in October.

What does any of this have to do with security? Running a business is like running a family. Unforeseen accidents happen and you’re forced to change plans in a split second. It’s a teachable moment for companies that are trying hard to prevent data security breaches.

Just as kids will break bones from time to time, companies will suffer some kind of security lapse. No matter how careful you are as a parent or as a business owner, the unexpected will still throw you off step.

But it doesn’t have to throw us into chaos.

In hindsight, we reacted well to our incident. The folks at the security meet-up helped us get our stuff to the car and we whisked the boy to the nearest ER. Everyone was calm, and we got the bones reset and the arm in a splint.

Since I write about security for a living, it’s hard for me not to create security analogies in my head whenever life gets interesting. This was one of those cases.

I thought about it in incident response terms. Had we panicked, I would have driven too fast to the ER and wrapped the car around a tree. My wife and sons would have been at much greater risk.

We didn’t panic, and everything turned out fine.

To me, that’s the ultimate lesson for security practitioners dealing with an incident.

Panic and the security hole grows bigger, along with the severity of the blowback when it’s all revealed. React calmly and you can quickly get to fixing the problem and preparing those you do business with for the news.

Businesses actually have an advantage. Incident response plans can be drawn up well in advance and put on the shelf for emergency use.

When kids get into accidents, you have to wing it a lot more.

Account for Risk in your ROI for Web Application Firewalls

August 26, 2014 12:57 pm


Earlier this week, we published a new white paper titled “Weighing Risk Against the Total Cost of a Data Breach” on Akamai.com. Ordinarily, a white paper wouldn’t be a particularly interesting subject for a blog post, but this one explores a topic that has generated a lot of questions from our customers – how do I financially justify a Web application firewall solution to my management?

We normally get this question from technology people who know that they need a solution to protect their Web applications against bad things like SQL injections, cross-site scripting, or remote file inclusions, but don’t know how to tie that protection to the business goals that their upper management cares about. This question is particularly vexing because a Web application firewall doesn’t follow the same ROI model that our customers are used to using when evaluating a technology solution. A Web application firewall doesn’t increase revenue, productivity, or customer engagement. Nor does it reduce CAPEX or OPEX in a regular, predictable manner.

What a Web application firewall does do is reduce risk. It reduces the risk of a harmful event occurring – in this case, of a data breach that can present a financial cost several orders of magnitude greater than that of the solution itself. The white paper dives into all of the different sources that can contribute to that cost and offers a simple (and industry-accepted) formula to estimate it up front.
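The white paper’s exact formula isn’t reproduced here, but the classic, industry-accepted calculation it alludes to is annualized loss expectancy: ALE = SLE (single loss expectancy, the cost of one incident) × ARO (annualized rate of occurrence). A quick sketch with purely illustrative numbers:

    # ALE = SLE x ARO: the expected yearly loss from an event with a known
    # per-incident cost and an estimated yearly frequency.
    def annualized_loss_expectancy(sle_dollars: float, aro_per_year: float) -> float:
        return sle_dollars * aro_per_year

    breach_cost = 5_000_000  # hypothetical cost of one breach (SLE)
    likelihood = 0.2         # hypothetical: one breach every five years (ARO)
    waf_cost = 150_000       # hypothetical yearly cost of a WAF solution

    ale = annualized_loss_expectancy(breach_cost, likelihood)
    print(f"ALE without mitigation: ${ale:,.0f}")  # $1,000,000
    print(f"WAF cost justified?     {ale > waf_cost}")  # True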

Does it provide an exact calculation of those costs? No – we’ve found that this is different for every customer and varies between industries, size of organization and region or geography. For example, in the US (and in Europe), the costs are particularly high, while in Asia the costs are more contained but seem to be rising.

Does implementing a solution guarantee that a data breach will never occur? Again, no – Bill Brenner recently made a great post that, while tongue-in-cheek, tried to explain that no security solution is ever 100 percent effective. In addition, we’ve seen that attackers utilize a variety of methods to get past IT defenses, including social engineering tactics like spear phishing, malware installed at the point of sale, as well as exploiting vulnerabilities of Web applications. However, Verizon’s 2014 Data Breach Investigations Report showed that more data breaches went through the Web application in 2013 (35 percent) than any other category, making it the largest risk to organizations and the area that we recommend our customers address first.

What the white paper does is present a method through which you can estimate the financial cost of a business-threatening event against your organization, allowing you to then weigh that against the cost of a solution and the risk that such an event will occur. This can be a great resource to help justify the purchase of a Web application firewall that can help you better protect your data. Because at the end of the day, a Web application firewall is all about reducing the risk and possible financial impact of a data breach, and having a better understanding of the financial impact and a sound method to estimate it upfront can only lead to a more informed decision.

3 Enterprises Growing In A Cloudy World

August 4, 2014 8:00 pm


For mature enterprises, the cloud represents a way to get out from under the constraints of traditional IT. Unlike businesses born in the cloud, the existing systems and business processes in these organizations present unique challenges to cloud adoption. The company examples below offer three different stories of traditional IT teams growing into the cloud with measurable success.

Revlon: Unexpected Benefits in the Cloud

Simplicity was the theme behind Revlon’s cloud transformation. The iconic cosmetics company needed to consolidate operations to reduce latency between global offices, located on every continent except Antarctica.

It built and deployed its own private cloud. The implementation allowed Revlon to reduce hardware consumption from one virtual server per physical server to 35 virtual servers per physical server. Today, Revlon handles 14,000 transactions per second, 15 automated application moves each month and weekly backups of up to 40TB.

And the cloud deployment came with unintended benefits, according to David Giambruno, Revlon CIO, in a podcast interview: “As we put everything on our cloud, we realized that all of our data sits in one place now. So when you think of big data management, we’ve been able to solve the problem by classifying all the unstructured data in Revlon. We have the ability to look at all of our data, a couple of petabytes, in the same place….So when we’re trying to query the data, we already know where it is and what it does in its relationships, instead of trying to mine through unstructured data and make reasoning out of it.”

The cloud move for Revlon took years to accomplish and required a complete restructuring of IT that resulted in “taking the infrastructure out of the way so we can focus on what people want to do faster, cheaper, better,” says Giambruno. As a result, Revlon has seen a 420 percent increase in project throughput without adding additional cost.

Alex & Ani: Massive Scale with a Small Team

In 2004, Carolyn Rafaelian, Founder, Creative Director and CEO of Alex & Ani, took her experience working with her 40-plus-year-old family jewelry business to launch her line of unique jewelry creations. The business features patented, expandable wire bangles adorned with meaningful charms.

Over the last decade, it’s expanded from a single store into a multichannel retailer, growing profits from $4.5 million to $230 million between 2010 and 2013. To support its hyper-growth, it needed flexible infrastructure that could scale rapidly. But Alex & Ani wanted to focus on its retail operation, not turn into a technology company. By selecting a provider-managed cloud, the company has managed spikes, supported a Magento platform and run complex data analytics – all without making massive investments in staff and hardware.

It chose a combination of dedicated servers and Rackspace Cloud Servers to mitigate hardware investments and have a single service provider for the technical guidance and under-the-hood maintenance needed to make it all work. Unlike Revlon, Alex & Ani didn’t have to worry about the infrastructure tweaks and staffing requirements needed to get up and running.

“Rackspace integrates and has dedicated teams for the best-of-breed partners that we work with, from Magento and eBay to Adobe and Akamai. Not only that, but they have subject matter experts for those particular tools. Oftentimes we found that individual Rackspace employees had even worked at those companies in the past, which is exciting, because it means they know how the tools work—they know what it’s like under the hood and can advise us appropriately,” says Ryan Bonifacino, Vice President of Digital Strategy at Alex and Ani.

Jordan Lawrence

Jordan Lawrence, a 25-year-old records management firm, used a similar approach to transition to the cloud one project at a time. Initially, Jordan Lawrence used the Rackspace Managed Cloud for testing on a secure FTP site, synchronizing customer data and email lists with a proprietary policy distribution application. That success led to the implementation of additional cloud services.

“After getting comfortable with Rackspace, we moved our internal email services to Rackspace Email and Hosted Exchange. Having these services completely hosted and managed by Rackspace allows our technical resources to concentrate on our core businesses,” says Marty Hansen, Executive VP of Technology with Jordan Lawrence.

After that successful move, Jordan Lawrence added Managed Virtualization. As a result, “We are able to host our SaaS services in an environment that is secure enough to pass information security audits from even our most stringent Fortune 100 and Financial Service customers, and meets SSAE16 standards,” shares Hansen.

With all of the cloud triumphs under its belt, Jordan Lawrence is ready to expand into more areas. Hansen says, “We see a complete solution in which Jordan Lawrence develops retention rules for SharePoint, while Rackspace implements and hosts the SharePoint environment for our customers.”

_____________

What does your cloud journey look like? From enterprises to small businesses, no two paths are the same and no business has to go it alone. With Managed Cloud services, businesses can get the guidance and tech resources needed to move away from cumbersome legacy systems and start innovating on the cloud.

Scaling Cloud Infrastructure Despite Brain Drain And Ad Hoc Processes

August 4, 2014 3:00 pm


By Matt Juszczak, Founder, Bitlancer

Your lean technology startup is gaining serious traction! Sales and marketing are steadily growing the user base. Meanwhile the operations team, armed with an assortment of scripts and ad hoc processes held together by library paste and true grit, is challenged to scale the cloud-based IT infrastructure to handle the increasing load. So you hire a software engineer with lots of cloud experience to build out a stable configuration management environment.

For a few months all is well, until that engineer abruptly splits. Now operations is left holding a bag of stale donuts. It turns out that nobody else on the team really knows how to use the configuration management tool. They only know how to spin up new virtual servers like they’d been shown. With each person already doing the work of many, nobody has the bandwidth to first learn the tool and then manage the various configurations left lying around.

You know what happens next, right? As development makes changes to the software, operations has to manually install the new code and tweak the configurations every time they create a new server. Equally problematic, virtual server instances proliferate unchecked, leading to thousands of dollars in unexpected charges from the cloud service provider. This leads to a slash-and-burn server destruction scramble that accidentally takes down the production platform and disgruntles many of those new users. Heavy sighs emanate from the CTO’s office. What’s to be done?

This is where Bitlancer Strings rides in to save the day. Strings is truly easy to use and provides whatever level of visibility into the virtual infrastructure you desire. With a little support, most teams can migrate their cloud configuration management onto Strings quickly—some in as little as a week.

The benefits of adopting Strings in this scenario include:

  • Heaps of scarce developer time saved by the efficiency and ease of use of the Strings platform.
  • Near-instantaneous time-to-value and predictable, affordable monthly costs. No more surprise charges from the CSP.
  • Best of all, scalability concerns can be alleviated! Peace (whatever that means for a startup) is restored.

If your team is hampered by this kind of technical debt, Strings can help. Strings gives you everything you need to deploy, configure and manage your virtual servers, deploy applications, handle user authentication and SSH keys, and more.

Standards-based Strings works seamlessly with Rackspace services, as Bitlancer customers can attest. Strings also integrates with Rackspace Cloud Monitoring and the Rackspace Cloud Files backup service. Plus it can manage Rackspace Cloud Load Balancers with ease—all of which helps you get even more value from your Rackspace relationship.

To find out more about how Bitlancer Strings can quickly and cost-effectively turn your virtual configuration management pain into gain, visit us in the Rackspace Marketplace today.

This is a guest post written and contributed by Matt Juszczak, founder of Bitlancer, a Rackspace Marketplace partner. Bitlancer Strings is a cloud automation Platform-as-a-Service that radically simplifies the management of virtual servers and other cloud infrastructure, enabling startups and SMBs to preserve time, money and sanity in the cloud.
