Posts tagged ‘server’

Akamai University: SSL Certificate Security and Trust

October 7, 2014 4:20 am


Akamai Edge 2014 continues today with the second day of Akamai University and API Boot camp. To coincide with this, I’m running three security lessons that are part of an upcoming video series. This is the final installment, and was written by Meg Grady-Troia.


SSL Certificate Security and Trust

The Internet is built on a foundation of trust, from machine to machine, extended across the entire surface of the globe. Trust is shared across the Internet in many ways; the SSL certificate hierarchy is only one of them, albeit a pervasive one. The SSL certificate system was designed so that trusted parties can have private communications over the public Internet. SSL certificates are a critical piece of the Internet’s trust architecture, and many protocols exist to support secure certificate handling.

What is a Certificate?

A certificate is the container for four pieces of information your web browser (or operating system) needs to make a secure connection to the server hosting the website you wish to visit.
Those four pieces are:

1. An “Issued To:” field that specifies the full name and address of the entity that owns the domain you’re visiting (including the IP address & domain name you’re visiting, and the brick & mortar contact for the owning entity).

2. A validity period: The time period (start date and end date) for which that certificate should be considered valid.

3. An “Issued From:” field that contains the signature of a Certificate Authority (CA), which acts like a notary public on a legal document: a third-party witness.

4. A public key: The shareable half of the keypair that will be used by the server to initiate the encryption of data that flows between the website and your browser.

Your browser-client uses the “issued to” data to check that it has connected to the domain it expected. It uses the certificate authority and expiry to verify that it trusts the domain. It uses the public key from the certificate to continue the SSL handshake that will allow all further communication between you and the website to be encrypted.
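
As a concrete illustration, here is a minimal sketch using Python’s standard ssl module to retrieve a server’s certificate and read those same fields; example.com stands in for whatever site you are visiting.

import socket, ssl

hostname = "example.com"                      # placeholder domain
context = ssl.create_default_context()        # loads the platform's trusted CA store

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

print(cert["subject"])                        # the "Issued To:" identity
print(cert["issuer"])                         # the signing Certificate Authority
print(cert["notBefore"], cert["notAfter"])    # the validity period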

How do Certificates Work?

Think of SSL certificates as the Internet-equivalent of the diploma granted to a student when they graduate from a school: it may hold value with people who know the recipient but not the school, and it may hold value with people who know the reputation of the school but not the recipient. The diploma is not a trust currency in itself, simply an indication of an existing, authenticated relationship.

There are a lot of certificate authorities in the world, and they may be operated by governments, companies, or even individuals (and they range in credibility just like colleges, from diploma mills to prestigious institutions). This is possible because CAs are initially self-signing: they simply appoint themselves as trustworthy third parties. The value of a CA’s imprimatur depends on its reputation — both its past behavior with other certificates and its relationships with certificate holders and web browser developers — which is how its signatures gain value.

A single web domain may have any number of certificates associated with it, and there are many kinds of special certificates online to account for specific use cases.

Some of the most common are:

• Multi-Domain (including Subject Alternative Names (SAN) & Wildcard) Certificates: These certificates cover multiple hostnames, subdomains, or IP addresses, and allow end-users like you to be redirected to the same application from multiple hostnames.
• Validated (including Extended Validation (EV), Organization Validation (OV), and Domain Validation (DV)) Certificates: These certificates require the signing CA to perform additional identity validation beyond its standard process, either for an individual, an organization, or a domain. EV certificates do not offer additional security for your particular session on a website, but they are often considered more trustworthy.

When you initiate a private exchange with a web application — for example, your bank’s portal so that you can check your latest statement — your browser-client will request an encrypted session and the server you’re connecting to will respond by presenting its certificate back to your browser to authenticate itself & initialize the negotiations required during the SSL handshake. Your web browser compares that certificate to its certificate store — a list of CAs that the developers of your web browser considered trustworthy — to make sure that the certificate is both signed by a trusted CA and still valid.

Certificates have a longer shelf life than a carton of milk, but because the Internet is a dynamic place, a certificate’s stated validity period may outlast the certified entity’s wish to keep using it. Certificates can easily become erroneous or compromised for any number of reasons, including when an entity’s contact information changes or after a successful attack against that entity. You wouldn’t want your front door to open to both the key from the old, compromised lock and the key from the new lock, right?
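
The same kind of client-side check fails fast when a certificate is expired or otherwise untrusted. A small sketch, assuming a recent Python version; expired.badssl.com is a public test host that intentionally serves an expired certificate.

import socket, ssl

context = ssl.create_default_context()
try:
    with socket.create_connection(("expired.badssl.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="expired.badssl.com"):
            pass
except ssl.SSLCertVerificationError as err:
    print("certificate rejected:", err.verify_message)   # e.g. "certificate has expired"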

Because of that possibility, the certificate check performed by your browser-client may also include a status call to see if that specific certificate has been revoked — that is, deemed invalid by the CA or owning entity. While there are several ways to check whether a certificate has been revoked, all of them take extra time & effort during the SSL handshake. Not every browser or operating system — particularly older or slower ones — performs any kind of certificate revocation check.

How do Certificates Facilitate Trust Relationships?

Once you and your browser have decided to trust the presented certificate, your browser-client may continue the SSL handshake by providing a public key for the server to use (your browser, in turn, uses the public key embedded in the certificate) as the two negotiate additional settings for your private session. While a certificate will always contain the same four critical pieces of information, newer browser-clients allow for additional controls during the session negotiation process, including ephemeral keys, advanced hash and compression functions, and other security developments. This process of certificate check, key exchange, and session negotiation, in a direct reference to the ways we demonstrate trust in real life, is called an SSL handshake.
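
Once the handshake completes, the client can see what was negotiated for the session. A short sketch with Python’s ssl module; example.com is again a placeholder.

import socket, ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # negotiated protocol version
        print(tls.cipher())    # (cipher suite name, protocol, secret bits)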

How does Akamai Handle SSL Certificates?

Akamai has relationships with several Certificate Authorities, and will use one of its preferred CAs to sign customer certificates if a customer does not request a specific CA when they have Akamai provision an SSL certificate for them. These preferred CAs are widely used and generally recognized by major browsers and operating systems.

Akamai generates the keypairs for all of its customers’ SSL certificates for traffic flowing over Akamai networks, using their designated information and preferred cipher suites and algorithms, so that only the public key ever has to leave the protections of Akamai’s networks. Because the private key never travels across the Internet between the customer and Akamai, the key that could decrypt end-user session data stays behind the many layers of protection Akamai’s networks provide.

Akamai has relationships with some CAs that allow us to sign certificates on their behalf as an Intermediary CA. In these cases, the chain of trust is extended by additional links, with the originating — or root — certificate authority granting an intermediary the right to sign certificates on its behalf. This process of tiered certificate authorities signing successive certificates, all of which are presented to the browser-client as a bundle, is often called chaining, just like linking daisies together into a chain.

How are SSL Certificates Vulnerable?

Certificates have a number of protections around them, including file types, cipher suites and algorithms, key usage, procurement and handling procedures, unique identifiers, and other data that are all part of a commonly accepted standard that helps both humans and machines protect, identify, and properly use SSL certificates. That common standard is called X.509, and it is used by common SSL software such as OpenSSL and by protocols lower in the stack, such as TLS.

It’s a common adage in Information Security that complexity in a system increases its risk of accidents, and the certificate hierarchy is byzantine, indeed. There are all sorts of ways that SSL Certificates, the private keys affiliated with SSL Certificates, and your private sessions can still be compromised.

Many organizations on the Internet — including Akamai — are considering a number of possibilities to fortify the SSL certificate structure. Some of the possibilities aim to make the current certificate process more transparent, while others couple the certificate process to other areas of trusted computing, like DNS registries. Each of these potential revisions presents some gains and some losses for end-users and certified entities. Newer browsers and operating systems may support additional controls around the encryption for your session on a website, and updated versions of the X.509 standard and TLS support newer models of authentication and certificate protections.

Every party in the certificate hierarchy is responsible for some aspects of the chain’s security. All of the certificate process I’ve just explained gets conveyed to you, the end user, by the small lock that shows up in your browser’s navigation bar when you’re browsing a website via HTTPS. That lock icon is the simplest symbol of the SSL Certificate trust chain there is, including all the vulnerable infelicities of the system and all of the hope we hold for private communications over the public Internet.

5 ways to prepare for skyrocketing data center storage needs

September 25, 2014 1:16 pm


Data center storage requirements are changing quickly as a result of the increasing volumes of big data that must be stored, analyzed and transmitted. The digital universe is doubling in size every two years, and will grow by a factor of 10 between 2013 and 2020, according to the recent EMC Digital Universe study. So, clearly storage needs are skyrocketing.

Fortunately (for all of us buyers out there), the cost per gigabyte of storage is falling rapidly, primarily because disk and solid state drives continue to evolve to support higher areal densities. Alas, the volume of data being stored seems to be outpacing our ability to cram more magnetic bits (or circuits, in the case of flash) into each unit of surface area.

So clearly, storage costs are likely to become a larger component of overall IT budgets in the coming years. Here are five things to consider when planning for your future storage needs.

1. High power density data centers
With increasing storage needs and a greater sophistication of the storage devices in use, power needs for each square foot in a data center are increasing rapidly. As a result, high power density design is a critical component of any modern data center. For example, if an average server rack holds around 42 servers and each of those servers uses 300W of power, the entire rack will require 12-13kW in a space as small as 25 square feet. Some data center cabinets can be packed with even more servers; for example, some blade server systems can now support more than 10x the number of servers that might exist in an average rack. This increasing demand for higher power density is directly related to the need for higher storage densities in data centers.
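
As a quick sanity check on the figures above, here is a back-of-the-envelope calculation in Python; the 42-server, 300W and 25-square-foot numbers come straight from the paragraph.

servers_per_rack = 42
watts_per_server = 300
rack_area_sqft = 25

rack_kw = servers_per_rack * watts_per_server / 1000
print(f"{rack_kw} kW per rack")                              # 12.6 kW
print(f"{rack_kw * 1000 / rack_area_sqft:.0f} W per sq ft")  # ~500 W per square foot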

2. Cost-efficient data center storage
Choosing an energy-efficient data center from the start can help control costs in the long run. Facilities designed for high density power can accommodate rising storage needs within a smaller space, so you can grow in place without having to invest in a larger footprint.

Allocating your storage budget across different tiers is another way to help control costs. Audit your data to determine how it is used, and how often particular files are accessed during a given period, and categorize the data into tiers so that the type of data is matched with the appropriate storage type. The most-accessed data will require a more expensive storage option while older, less-accessed data can be housed in less-expensive storage. Some examples of different storage types, from most to least expensive, include RAM, solid state drive, spinning hard disk drives (SATA or SAS drives) and tape backup.
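
To make the tiering idea concrete, here is a purely illustrative sketch; the thresholds and tier names are assumptions, not any product’s policy.

def pick_tier(days_since_last_access):
    # Map access recency to a storage tier, hottest to coldest
    if days_since_last_access < 7:
        return "SSD (hot)"
    elif days_since_last_access < 90:
        return "SATA/SAS disk (warm)"
    return "tape backup (cold)"

for age in (1, 30, 365):
    print(age, "days since last access ->", pick_tier(age))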

3. Scalability
Infrastructure should be designed with scalability in mind; otherwise, costs can become unmanageable and possibly result in poor performance or even outages. Scalability allows you to grow your infrastructure at a pace that matches the growth in data, and also gives you the ability to scale back if needed. Distributed or “scale-out” architectures can provide the perfect foundation for multi-petabyte storage workloads because of their ability to quickly expand and contract according to compute, storage, or networking needs. Also, a hybrid infrastructure that connects different types of environments can enable customers to migrate data between cloud and colocation; if an unexpected need for storage occurs, customers can then shift their budget between opex and capex if needed.

4. Security
Strict security or compliance requirements for data, particularly for companies in the healthcare or payment processing industries, can increase the complexity of data management and storage processes. For example, some data need to be held in dedicated, 3rd party-audited environments and/or fully encrypted at rest and in motion.

5. Backup and replication
When planning your infrastructure, make sure it supports backup and replication in addition to your application requirements. Online backup handles unpredictable failures like natural disasters, while replication deals with predictable events such as hardware failures and planned maintenance. Establishing adequate replication and backup requirements can more than double the storage needs for your application.
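
As a rough, hedged illustration of how quickly these requirements add up (the replication factor and backup retention multiplier below are assumptions):

primary_tb = 100          # raw application data
replica_copies = 1        # one full replica for failover (assumption)
backup_multiplier = 1.5   # backups kept over the retention window (assumption)

total_tb = primary_tb * (1 + replica_copies) + primary_tb * backup_multiplier
print(total_tb, "TB provisioned for", primary_tb, "TB of application data")   # 350.0 TB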

Your data center storage needs will continue to increase over time, as the digital universe continues to expand in alignment with Moore’s Law. Careful planning is required to create a cost-efficient, secure, reliable infrastructure that can keep up with the pace of data growth. Service providers can draw on their experience to help you find the right storage options for different storage needs.

The post 5 ways to prepare for skyrocketing data center storage needs appeared first on Internap.

Rackspace::Solve New York: Docker Examines The Future Of Applications

September 11, 2014 1:00 pm


The time has come to change the way we create, develop and ship applications. At Docker, we believe it should be quick and painless to ship application workloads across environments (dev, test and production) and hosts (laptops, data centers and clouds). Docker container technology is quickly pushing us toward that reality.

Docker eliminates the friction between environments to provide faster app delivery, infinite deployment flexibility and improved service levels.


Before Docker, developers and sysadmins needed to take an application, combine it with an operating system and emulate the entire server to make it portable. Now, there’s a simpler way: dockerizing the app. Docker leverages underlying container technology to ensure containers are simple to use, portable between environments and accessible to developers and sysadmins. To provide tools and resources for increased collaboration, Docker recently established a cloud-based registry known as Docker Hub. With Docker containers and the work we’re doing with our partners, like Rackspace, we’re making it possible to build, ship and run any app, anywhere.

Want to hear more about how Docker is changing the way we build, ship and run apps? Ken Cochrane, Engineering Manager at Docker, will present “The Future of Applications” at Rackspace::Solve New York, a one-day summit where you’ll hear directly from companies like Docker about how they’re solving tough challenges in their businesses. Rackspace::Solve New York is Thursday, September 18 at Cipriani Wall Street.

Register now for Rackspace::Solve New York.

And stay tuned for details of the next Rackspace::Solve event in Chicago.

A Geekdom Startup That Simplifies Server Management

September 11, 2014 8:00 am


DropBox simplified online storage. GitHub simplified revision control. And now a member of the Rackspace Startup Program is looking to simplify server management.

Using the Rackspace Managed Cloud, the startup offers its customers a simpler way to manage servers online by making it easy to execute commands on groups of servers from a beautiful web interface. The mantra inside the company is simplicity. The platform empowers everybody in a company to interact with servers via an easy-to-use web interface, including less technical employees such as support and marketing staff.

“Others in the DevOps space are focused on ever-increasing complexity and command-line systems,” explains Justin Keller, the startup’s founder. “Our approach is a beautiful web interface with no external dependencies (agents).”

Keller says that other solutions in the space often have a steep learning curve.

“Going with Chef or Puppet requires a large time investment, and companies have to send developers off to Chef School for two weeks to learn their systems,” he says. “We don’t ask our users to go to school. They can sign up and start managing servers instantly with their existing stack and scripts.”

[Embedded Vimeo video 73097569: product walkthrough]

The genesis for the product came from solving a pain point at Keller’s previous startup, NodeSocket, a Platform as a Service (PaaS) for hosting Node.js applications.

“It was the pivot for NodeSocket. We now have thousands of accounts created in over 80 different countries, and by the end of 2014 our goal is to be profitable,” notes Keller.

With the startup poised for growth, the team has taken up headquarters inside Rackspace’s Geekdom San Francisco, a collaborative workspace bringing great people together south of Market Street.

“We really enjoy the relationships with other companies in the space, since they all tend to focus on developer tools or infrastructure,” says Keller. “The office space is also conveniently located in SOMA, and offers great amenities. Finally, they host great events usually a few times a week, from Docker, to tech talks, to investor pitches.”

The Rackspace Startup Program was there to help the startup use the Rackspace Managed Cloud to simplify server management. Drop the Startup Team a note and let us know if you need help building your startup.

Brightcove Launches Perform to Redefine Video Playback

September 11, 2014 3:36 am


Today at the annual IBC conference, we are excited to announce the launch of our newest product, Perform, a high performance service for creating and managing video player experiences that redefines video playback across devices. A groundbreaking innovation, Perform enables video publishers to improve their speed to market and create high-quality, immersive video experiences across devices.

Perform powers cross-platform video playback with a full set of management APIs, the fastest performance optimization services, and the leading HTML5-first Brightcove Player. It supports HTTP Live Streaming (HLS) video playback across devices, along with analytics integrations, content protection, and both server-side and client-side ad insertion. Additionally, Perform’s plugin architecture allows for control and management of playback features in a discrete and extensible manner.

Loading up to 70% faster than competitive players, including YouTube’s, the underlying Brightcove video player in Perform is the fastest loading player on the market today, according to head-to-head comparisons. The player supports HLS across all major mobile and desktop platforms for simplified workflows and uniform, high quality, cross-platform user experiences. The new Brightcove video player will also be available soon as part of Brightcove’s flagship Video Cloud online video platform.

Developer-friendly and built for the future, Perform is designed to serve the needs of the world’s leading video publishers by plugging into bespoke workflows and working seamlessly with other modular services from Brightcove, such as the Zencoder cloud transcoding and Once server-side ad insertion platforms. Additionally, Perform integrates with the Brightcove Once UX product to provide a powerful hybrid ad solution that combines the advantages of a comprehensive server-side solution with a rich client-side user experience. Utilizing the two products in conjunction enables publishers to optimize monetization of ad-supported video and generate more views and more ad completions while avoiding the challenge posed by the growing popularity of ad blockers.

As the first truly enterprise-grade player available as a standalone service, Perform is a powerful addition to the Brightcove family of products and a key driver in our ongoing mission to revolutionize the way video is experienced on every screen. Visit the website for more information!


September 10, 2014 12:11 pm


FIFA World Cup 2014 was one of the largest multimedia sporting events in history. In-person attendance was estimated at more than three and a half million, while hundreds of millions of viewers tuned in via TV, Internet, and radio. Akamai’s online traffic statistics estimate this year’s event to be ten times larger than the 2010 World Cup in South Africa, and two and a half times larger than the Sochi Winter Olympics. In my role as Akamai’s Senior Director of Environmental Sustainability, I was curious about the carbon footprint of such a large event, and how digital and analog attendance compared.

Figure 1. Relative amount of traffic that was delivered for four major sporting events with global appeal. Source: Akamai Technologies.

Turns out, FIFA has a green side beyond its soccer fields. In a concerted effort to reduce the environmental impact of staging the World Cup, it developed the 2014 FIFA World Cup™ Sustainability Strategy. As part of that strategy, FIFA calculated the carbon footprint of the six-week-long tournament, including construction and operations of the match stadiums and FIFA Fan Fest venues, and team and spectator travel and accommodations. It was estimated at 2.5 million metric tons CO2 equivalent. That’s the equivalent of driving a U.S. car 7.4 billion miles or flying 9.2 billion miles, a fitting analogy since international and inter-/intra-city transportation represented 84% of the 2.5 million.

Figure 2. 2014 FIFA World Cup GHG emissions by activity type.
Source: Summary of the 2014 FIFA World Cup Brazil Carbon Footprint

Certainly there would be no World Cup without stadiums to play in and teams to compete. And it wouldn’t be nearly as exhilarating without the frenzied fans cheering from the stands. But what is the carbon impact of bringing the World Cup to more than one hundred million online viewers around the world, tuning in using all manner of connected devices such as smart phones and tablets? Akamai supported live streaming for all 64 matches for more than 50 rights-holding customers reaching over 80 countries, providing us with unique insight into online activity. By tracking the fraction of our network used to stream the matches in each geographic region and overlaying the associated energy consumption and carbon emissions, we were able to estimate the carbon footprint of the server and data center component of online viewing at a lean 100 metric tons CO2 equivalent.

Achieving these impressive results for such a long-running and broadly viewed event is a testament to Akamai’s commitment to reducing our operational impacts. As a result of our efforts to innovate around network productivity and efficiency, our absolute energy consumption and greenhouse gas emissions have decoupled from our network traffic growth, flattening even as our traffic continues to grow exponentially. The digital story doesn’t end here, though.


Figure 3. Plot of normalized absolute energy and GHG emissions versus peak traffic from Q1 2009 through Q2 2014. [Monthly data plotted here as quarterly average/sum.]

A recent study by researchers at Lawrence Berkeley National Laboratory and Northwestern University found that the server and data center portion of streaming represents an astonishingly small 1% of the total energy and carbon footprint. The balance is attributed to the end user’s last mile and devices (cable/DSL, modem, wireless router, tablet, computer and monitor), bringing the World Cup’s total digital footprint to 100,000 metric tons CO2e. If we compare just the World Cup attendee portion of the footprint, accounting for travel and accommodations, digital spectatorship is about twenty times more carbon-efficient than being there. And you get the added benefit of the best seat in the house for every match!

The news is good all around. The Internet, with Akamai’s help, has broadened the accessibility of popular sporting events to people anywhere with Internet connectivity, on any device. Online viewing is much more carbon-efficient than attending in person. And with Akamai’s high-definition anywhere on-any-device streaming, you can enjoy players-in-your-living-room-quality coverage every game.

Upgrading To iPhone 6? Setting Up Your Rackspace Email Is A Snap.

September 9, 2014 11:21 am


The much-anticipated iPhone 6 and iPhone 6 Plus are here! The larger display sizes, better resolution, improved camera and video capture and new features such as Touch ID, Wi-Fi calling, Apple Pay and several more look pretty enticing (as does the Apple Watch!). If you’re upgrading to iPhone 6 or iPhone 6 Plus, you can set up your Rackspace Hosted Exchange email in less than two minutes and in only a few clicks – and you don’t need to know any arbitrary passwords or server names. This is extremely helpful if you’re an admin of remote employees!

Here’s a brief look at how it’s done. You can check out more about iPhone integration with Rackspace Hosted Exchange here.


Rackspace::Solve New York: How Under Armour Solves For Rapid Ecommerce Growth

September 8, 2014 2:00 pm


By Brian McManus, Senior Director of Technology, Under Armour

When Under Armour started, we had just four web servers and a database server. Back then we were called KP Sports – named after our founder and CEO Kevin Plank.

We grew. And we grew quickly. We needed infrastructure — and a partner — that not only supported our ecommerce site’s high traffic (especially during seasonal spikes or a Super Bowl commercial) but was also always up and could grow right alongside us.

The Under Armour/Rackspace story is built on nearly a decade of growing together. When we signed on with Rackspace, we were a small startup making and shipping shirts. Rackspace was a scrappy hosting upstart. Together, we’ve grown in our respective industries and with each other. At Under Armour we’ve doubled-down with Rackspace because of our similar paths and cultures. We have parallel stories and our businesses are each built on doing the right thing for our customers.

Our journey from four servers to 200 is a testament to the foundation we’ve built on Rackspace. And we know Rackspace will be there for us to support our future growth and ensure we can continue to scale to meet increasing demands in the ultra-competitive ecommerce world.

Want to hear more about how Under Armour works with Rackspace to solve for high-demand ecommerce and growth? Brian McManus, senior director of technology at Under Armour, will present at Rackspace::Solve New York, a one-day summit where you’ll hear directly from companies like Under Armour about how they’re solving tough challenges in their businesses. Rackspace::Solve New York is Thursday, September 18 at Cipriani Wall Street.

Register now for Rackspace::Solve New York.

And stay tuned for details of the next Rackspace::Solve event in Chicago.

Persons of Interest: Reaching TV’s Increasingly Fragmented Audience

September 4, 2014 9:48 am


This article was originally published on TechRadar Pro.
Between cord-cutting, DVR, and streaming video, it’s harder than ever for advertisers to reach their intended audiences. Not only is the number of cord-cutters rising, but so is the percentage of consumers who have never subscribed to Pay-TV services in their lives. And for the first time, more Americans subscribe to cable internet than cable TV.

Delivering high-quality video experiences across a range of devices has never been more necessary for brand marketers and publishers seeking to reach an increasingly diverse audience.

So how can these content creators reach a fragmented audience dispersed across a multitude of platforms and devices? Typically it has required complex and time-consuming processes for every smartphone, tablet, game console, etc. But advances in online video technology and platform infrastructure have opened the door to new delivery capabilities that produce high-quality, TV-like experiences across every device.

How to do it
To understand how you can create these incredible viewing experiences, it’s helpful to understand the background of video delivery and ad-insertion technologies. For years, a publisher’s only option was to build software around a Flash player that delivered video to a PC. Marketers could inject ads into videos, but lags in communication between the PC and the ad server would result in the on-screen appearance of a spinning/loading wheel synonymous with waiting and delays.

Furthermore, once smartphones entered the market, the number of platforms that could run video proliferated – and not all of them supported Flash. (Steve Jobs’s insistence that iOS devices would never run Flash was the most prominent example.)

The industry came up with a new mechanism: instead of delivering content via Flash, engineers wrote native code that replicated all of Flash’s functionality for every device their brand wanted to reach. This had its limitations, however, as the building, testing, and maintenance of the software required huge amounts of resources.

The new tech
Finally, we have reached a point in the industry today where we can utilize content delivery technologies that combine user interface (UI) best practices in interactivity from software on mobile devices (known as “client-side” software) with the reliability, quality, and platform ubiquity of software from the “server-side,” or cloud-based applications.

Server-side ad-insertion technology allows advertisers to deliver video content and the ad in a single stream to a device. Advertisers love this cloud-based technology because it reaches any device with TV-like quality ads with the ability to bypass ad blockers.

There’s no software required on the device – as long as the device can play back video, the advertisement and content can be combined in the cloud to reach every device. When combined with client-side software, which can block “skip” bars, or allow users to click on the video to go through to a promotional website, video content delivery is driving huge benefits for brands and publishers seeking to reach new and engaged audiences, while at the same time providing a terrific video experience for users.
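
As a conceptual sketch (not any vendor’s actual API), server-side ad insertion can be pictured as splicing ad segments into the content’s playlist before it reaches the device, so the player sees one continuous stream; the segment names below are made up.

# Splice ad segments into a content playlist at a chosen break point
content = ["content_000.ts", "content_001.ts", "content_002.ts"]
ad_break = ["ad_000.ts", "ad_001.ts"]

def splice(content_segments, ad_segments, after_index):
    return content_segments[:after_index] + ad_segments + content_segments[after_index:]

stitched = splice(content, ad_break, after_index=1)
for segment in stitched:
    print(segment)   # the device simply plays the combined sequence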

Five years ago it would have been a daunting task to deliver a monetized television-like experience across all devices, but it’s easily achievable via server-side and client-side technologies. We can now deliver the right experience to the right user on every device in a cost-effective way. Reaching the cord-cutters means advertising on the online and mobile channels they use, and online video has never been a more effective tool.

Akamai Offers Further Guidance to Blunt Linux DDoS Threat

September 4, 2014 9:29 am


Yesterday’s advisory about attackers exploiting Linux vulnerabilities for DDoS assaults got a lot of attention. After hearing the feedback, we decided a follow-up post was necessary to help admins mount a better defense.

I spoke with David Fernandez, head of our Prolexic Security Engineering Research Team (PLXsert), and he offered additional details on the countermeasures.

First, for the basic details of the threat, check out yesterday’s post.

Now for the next steps…

Blocking this threat comes down to patching and hardening the server, keeping antivirus updated and establishing rate limits. Meantime, PLXsert has created a YARA rule and a bash command to detect and eliminate this threat from Linux servers.

It’s necessary to first harden the exposed web platform and services by applying patches and updates from the respective software vendors and developers.

There are also fundamental Linux server hardening procedures provided by SANS Institute (pdf).

The binary (ELF) will only run on Linux-based systems. However, attackers may be using other web exploits. The binary and the exploits used to break in are not co-dependent.

Antivirus detection

Several antivirus companies, including McAfee, have detections for this DDoS payload (McAfee identifies it as a generic Linux/DDosFlooder); however, the detection rate among antivirus engines remains relatively low for this threat. At the time of this advisory, VirusTotal reported only 23 out of 54 antivirus engines detecting this threat, an improvement from May 2014, when the detection rate for this binary was 2 out of 54.

Rate limiting

Attackers will typically target a domain with these attacks, so a target web server will receive the SYN flood on port 80 or another port deemed critical for the server’s operation. The DNS flood will typically flood a domain’s DNS server with requests. Assuming the target infrastructure can support the high bandwidth observed by these attacks, rate limiting may be an option.

Akamai’s Generic Routing Encapsulation (GRE) solution allows routing of an entire subnet (/24 minimum) for mitigation. The attack will be absorbed by Akamai’s solutions, allowing legitimate users to continue to use the site and its services.

YARA rule

YARA is an open source tool designed to identify and classify malware threats. It is typically used as a host-based detection mechanism and provides a strong PCRE engine to match identifying features of threats at a binary level or more. PLXsert utilizes YARA rules to classify threats that persist across many campaigns and over time. Here’s a YARA rule provided by PLXsert to identify the ELF IptabLes payload identified in this advisory:

rule IptablesELF
{
    meta:
        author = "PLXSert"
        description = "Rule to detect ELF IpTable DDoS executable"

    strings:
        $elf = {7f 45 4c 46}
        $st0 = "SynFloodSendThread"
        $st1 = "DnsFloodSendThread"
        $st2 = "SynFloodBuildThread"
        $st3 = "DnsFloodBuildThread"
        $st4 = "MAINPTH"
        $code1 = "list.c"
        $code2 = "main.c"
        $code3 = "mypth.c"
        $code4 = "Service.c"
        $code5 = "srvnet.c"
        $code6 = "ckbuf"
        $code7 = "udptest.c"

    condition:
        ($elf at 0 and all of ($st*) and 5 of ($code*))
}
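
As a usage sketch, assuming the rule above is saved locally as iptables_elf.yar and the open source yara-python bindings are installed, a suspect file could be scanned like this (the file path is a placeholder):

import yara  # third-party yara-python bindings

rules = yara.compile(filepath="iptables_elf.yar")        # compile the PLXsert rule
matches = rules.match(filepath="/tmp/suspect_binary")    # scan a suspect file
print(matches)   # a non-empty list means the IptabLes signature matched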


Bash commands

Two bash commands from PLXsert are designed to clean a system infected with the ELF IptabLes binary. After running these commands, system administrators are advised to reboot the system and run a thorough system inspection.

1. sudo find / -type f -name '.*ptabLe*' -exec rm -f {} ';'

2. ps -axu | awk '/.IptabLe/ {print $2}' | sudo xargs kill -9

We will be back with additional guidance as needed.

Bot Protection: How To Stop Web Bots In Their Tracks

September 3, 2014 12:00 pm


By Charlie Minesinger, Director of Sales, Distil Networks

After learning about the dangers of web bots and how they can hurt your website, your sales and your business as a whole, you’ll likely want to take every precaution possible to prevent an attack and remove bot traffic from your website. There are some steps you can take on your own, like implementing CAPTCHAs on forms or blocking IP addresses, but you do not want to ruin the user experience or inadvertently block the IP addresses of major consumer ISPs.

How to Choose a Bot Protection Solution

In order to ensure your site and business has the best protections available, it’s important to choose a solution that does not rely on IP addresses alone; provides real-time detection and mitigation (without adding even 10 milliseconds of latency); offers very high accuracy (at or above 99 percent); and learns and improves, constantly.

So, when evaluating bot protection solutions, you’ll want to look for these items:

  • Multiple detection technologies – A truly comprehensive bot prevention tool won’t just offer one or two layers of protection for your site, but will employ a wide range of technologies – JavaScript challenges, statistical methods, artificial intelligence (such as support vector machines), user-agent validation, rate limits based on a unique ID (a minimal sketch of this kind of rate limiting follows this list), geographic analysis, and a network learning capability.
  • Constantly improving – The key to a great bot protection solution is R&D and network learning processes. A strong solution maintains a shared database with a unique ID for each bot, so known bots can be detected before any of their activity reaches your web servers. The best bot protection solutions are also constantly evolving and investing in R&D to maintain an edge in the “arms race” of website security.
  • Ability to target all kinds of bots – If you really want to protect your website, then you’ll need a solution that targets not just one type of bot, but all of them. An effective bot protection tool should protect against content theft and duplication, click fraud, traffic fraud, comment spam, server slowdowns, and any other attacks a bot could deliver.
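
Here is the minimal rate-limiting sketch referenced above. It is purely illustrative (not Distil’s implementation), keyed on a unique client ID rather than a raw IP address; the window and threshold values are assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # sliding window length (assumed value)
MAX_REQUESTS = 120       # allowed requests per window (assumed value)
_recent = defaultdict(deque)

def allow(client_id):
    """Return True if this client is still under its request budget."""
    now = time.time()
    q = _recent[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()              # discard requests that fell out of the window
    if len(q) >= MAX_REQUESTS:
        return False             # likely automated traffic; block or challenge it
    q.append(now)
    return True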

You can find efficient, comprehensive solutions for blocking bots and protecting your website with Distil Networks. Distil’s protection service eliminates content theft, stops fraud bots, and alerts you to any and all potential bot attacks; in fact, Distil identifies 99.9 percent of bot page requests in real time. To learn more, visit the Distil Networks website or contact the Distil team today.

Behind the scenes: How Content Delivery Networks leverage optimization technologies

August 13, 2014 10:42 am

How Content Delivery Networks leverage optimization technologies

Establishing a reliable web presence is the best way to maintain competitive advantage for your business. Large websites often rely on Content Delivery Networks (CDNs) as an effective way to scale to a larger and more geographically distributed audience. A CDN acts as a network of caching proxy servers, which transparently cache and deliver static content to end users.

There are many CDN providers to choose from, but what makes Internap unique is the combination of optimization technologies that we employ. Let’s take a behind-the-scenes look at how these technologies complement one another and work transparently to improve the user experience and accelerate performance.

You may be familiar with Managed Internet Route Optimizer™ (MIRO), Internap’s route optimization technology that forms the basis of our Performance IP™ product. MIRO constantly watches traffic flows, and performs active topology discovery and probing to determine the best possible route between networks. After our recent revamp of MIRO, some of our busiest markets are exceeding half a million route optimizations per hour, resulting in significantly lower latency and more consistent performance.

Our CDN also employs a proprietary TCP congestion avoidance algorithm, which evaluates and dynamically adjusts to network conditions. It ensures that short data transfers, such as HTML, JavaScript libraries, style sheets and images, occur as quickly as possible, while larger file downloads maintain consistent throughput.

Finally, the CDN’s geographic DNS routing system sends requests to the nearest available CDN POP based on service provisioning, geographic proximity, network and server load and available capacity.
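
For intuition only, here is a toy sketch of geographic routing. It is not Internap’s actual algorithm (which also weighs provisioning, load and capacity), and the POP locations are hypothetical.

import math

# Hypothetical POP locations as (latitude, longitude)
POPS = {"nyc": (40.7, -74.0), "ams": (52.4, 4.9), "sin": (1.35, 103.8)}

def distance_km(a, b):
    # Great-circle (haversine) distance between two lat/lon points
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_pop(client_location):
    return min(POPS, key=lambda pop: distance_km(POPS[pop], client_location))

print(nearest_pop((48.9, 2.4)))   # a client near Paris resolves to "ams"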

All CDN transactions begin with DNS
When a client issues a DNS request to the CDN, DNS routing is handled by a routing methodology called anycast. Internap has a large deployment of CDN DNS servers around the globe, and with anycast, we use BGP to announce a prefix for our DNS servers in each of these locations. The client’s request gets routed to the nearest DNS server based on BGP hop count.

When a DNS request is received, MIRO observes this DNS activity and immediately begins probing and optimizing to find the best possible provider for that DNS traffic. The CDN DNS system evaluates the request and responds with the address of the nearest available CDN POP. The client then establishes a connection and sends a request to an edge cache server in that selected POP. Once again, MIRO observes the network traffic, and immediately begins probing and optimizing to find the best possible provider for the network traffic.

If the requested content is in the cache, then the cache server begins sending it. TCP acceleration takes over and begins optimizing the TCP connection, which ensures CDN content is delivered as quickly and smoothly as network conditions allow. If the requested content is not in the cache, then the cycle repeats itself, but between the CDN edge server and the origin of the content.
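
The hit/miss flow described above boils down to a simple decision, sketched here conceptually; the in-memory dict stands in for the edge cache, and fetch_from_origin is a hypothetical callable, not part of any Internap API.

cache = {}

def serve(url, fetch_from_origin):
    if url in cache:
        return cache[url]            # cache hit: serve straight from the edge
    body = fetch_from_origin(url)    # cache miss: go back to the origin (or next tier)
    cache[url] = body                # keep a copy for subsequent requests
    return body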

The best experience is achieved with a globally distributed origin that employs geographic DNS routing, such as Internap’s distributed CDN storage solution. With CDN storage, content can be replicated in up to 5 different locations across the globe, and the CDN DNS system routes the request to the nearest available location. MIRO also optimizes the experience between the edge and origin servers, both for the DNS request and the content retrieval. TCP acceleration ensures that the transfer happens with the lowest latency and highest throughput possible.

With the recent revamp of Internap’s HTTP content delivery platform, we’re continuing to maintain our commitment to performance. We have upgraded our cache servers to use SSDs instead of hard disks, and added some new performance-oriented features, such as the SPDY protocol. All of these new capabilities will further enhance the user experience and accelerate performance.

The post Behind the scenes: How Content Delivery Networks leverage optimization technologies appeared first on Internap.

Scaling Cloud Infrastructure Despite Brain Drain And Ad Hoc Processes

August 4, 2014 3:00 pm


By Matt Juszczak, Founder, Bitlancer

Your lean technology startup is gaining serious traction! Sales and marketing are steadily growing the user base. Meanwhile the operations team, armed with an assortment of scripts and ad hoc processes held together by library paste and true grit, is challenged to scale the cloud-based IT infrastructure to handle the increasing load. So you hire a software engineer with lots of cloud experience to build out a stable configuration management environment.

For a few months all is well, until that engineer abruptly splits. Now operations is left holding a bag of stale donuts. It turns out that nobody else on the team really knows how to use the configuration management tool. They only know how to spin up new virtual servers like they’d been shown. With each person already doing the work of many, nobody has the bandwidth to first learn the tool and then manage the various configurations left lying around.

You know what happens next, right? As development makes changes to the software, operations has to manually install the new code and tweak the configurations every time they create a new server. Equally problematic, virtual server instances proliferate unchecked, leading to thousands of dollars in unexpected charges from the cloud service provider. This leads to a slash-and-burn server destruction scramble that accidentally takes down the production platform and disgruntles many of those new users. Heavy sighs emanate from the CTO’s office. What’s to be done?

This is where Bitlancer Strings rides in to save the day. Strings is truly easy to use and provides whatever level of visibility into the virtual infrastructure you desire. With a little support, most teams can migrate their cloud configuration management onto Strings quickly—some in as little as a week.

The benefits of adopting Strings in this scenario include:

  • Heaps of scarce developer time saved by the efficiency and ease of use of the Strings platform.
  • Near-instantaneous time-to-value and predictable, affordable monthly costs. No more surprise charges from the CSP.
  • Best of all, scalability concerns can be alleviated! Peace (whatever that means for a startup) is restored.

If your team is hampered by this kind of technical debt, Strings can help. Strings gives you everything you need to deploy, configure and manage your virtual servers, deploy applications, handle user authentication and SSH keys, and more.

Standards-based Strings works seamlessly with Rackspace services, as Bitlancer customers can attest. Strings also integrates with Rackspace Cloud Monitoring and the Rackspace Cloud Files backup service. Plus it can manage Rackspace Cloud Load Balancers with ease—all of which helps you get even more value from your Rackspace relationship.

To find out more about how Bitlancer Strings can quickly and cost-effectively turn your virtual configuration management pain into gain, visit us in the Rackspace Marketplace today.

This is a guest post written and contributed by Matt Juszczak, founder of Bitlancer, a Rackspace Marketplace partner. Bitlancer Strings is a cloud automation Platform-as-a-Service that radically simplifies the management of virtual servers and other cloud infrastructure, enabling startups and SMBs to preserve time, money and sanity in the cloud.

HTTP Live Streaming to iPhone / iPad / iPod: HTML5, iOS streaming media service

March 27, 2011 5:31 pm


  GravityLab Adds Full HTTP Live Streaming Support for iOS and jwplayer: Both live and on-demand HLS streaming now available for iPhone, iPad, and iPod Touch. Eugene,...

Continue Reading

Flash Media Video Streaming: Introduction – Part 3

December 29, 2010 5:28 pm


Setting Up Video On-Demand, Embedding JW Player and Flowplayer. To store (add) Flash media on our CDN to your account, perform one of the following: FTP...

Continue Reading