How can I deliver my video to all major video platforms without a lot of complex encoding? Gravitylab deploys a single video player that reaches iOS, Android, and BlackBerry devices.

Streaming video encoding can be challenging. Why get bogged down in video encoding quicksand?

We’re posting some streaming video hosting and encoding insights because video content publishers are running into issues with mobile and live video streaming, and you may be too. We want to share our viewpoints and, if possible, start a discussion about mobile screens (including tablets), laptop and desktop video services, and taking streaming video hosting to a wide range of devices around the globe so you don’t need to worry about it.

iOS, Android, touchscreens, tablets, smartphones and browsers are all here to stay. Streaming video encoding that supports mobile video playback is creating numerous encoding and video delivery challenges for streaming video publishers. The video encoding questions we see frequently are:

What are the best encoding parameters and streaming video formats for iOS, Android, and BlackBerry devices, tablets such as the iPad and the Windows 8 Surface, and other screens?

How can I deliver the video automatically, behind the scenes without my audience having to choose a format?

What do we need to know about adaptive bit rate (and should we be using it)?

How much encoding capacity is needed to turn around X files in Y amount of time?

Is the cloud a good fit for encoding and distributing video?

Unfortunately, there is no single video encoding answer to any of these questions. Encoding is often described as a black art, best left to wizards with calibrated eyes and a mastery of preserving image quality while squeezing bits out of video. The reality is that you don’t need in-depth knowledge – the right tools (in this case, from Sorenson) make all the difference.

So how do you determine which encoding parameters will deliver the best
video quality?

Consider factors such as who your streaming video users are, where they are located, what mobile video devices they are using, and what streaming bandwidth is typically available. More than one encoding profile is often needed to reach all viewers and devices adequately, so the end result is that more video is being encoded than ever before, especially when adaptive bit rate formats are selected. It’s not uncommon for us to spend time consulting with our customers about desired target devices and defining or selecting the appropriate presets within our products. Remember, our presets are more than the audio and video details; they also contain the pre-processing filters, publishing information, and notifications (for review and approval).
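As a rough illustration of matching presets to target devices, here is a hypothetical sketch in Python – the preset names and parameter values are invented for illustration and are not actual Sorenson presets:

```python
# Hypothetical device-to-preset mapping; names and values are illustrative only.
PRESETS = {
    "ios_phone":  {"codec": "h264", "resolution": (640, 360),   "bitrate_kbps": 800},
    "ios_tablet": {"codec": "h264", "resolution": (1280, 720),  "bitrate_kbps": 2500},
    "android":    {"codec": "h264", "resolution": (854, 480),   "bitrate_kbps": 1200},
    "desktop":    {"codec": "h264", "resolution": (1920, 1080), "bitrate_kbps": 5000},
}

def presets_for(targets):
    """Return the set of encoding profiles needed to cover the given devices."""
    return [PRESETS[t] for t in targets if t in PRESETS]

jobs = presets_for(["ios_phone", "android"])
print(len(jobs))  # 2 encodes needed for this audience
```

The point of the sketch is simply that each additional device class you target adds another encode to every source file, which is why preset selection is worth a conversation up front.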

Why adaptive bit rate (ABR) encoding?

The simple, short answer is that adaptive bit rate encoding helps provide
the best playback experience for your audience. There are multiple
adaptive bit rate formats but they all work more or less the same. A
source file is segmented into smaller chunks which are encoded at various
resolutions. As part of the encoding process, a manifest file is created
so that when a user begins playing back the file, the manifest references
all of those small chunks at the various resolutions and can play back
the appropriate one based upon the user’s available bandwidth. While the
vast majority of our customers have implemented Apple’s adaptive bit rate
format (HLS), there is growing momentum to support MPEG DASH, an
international standard for adaptive bit rate encoding that provides
greater flexibility than the existing proprietary formats. There is quite a bit of confusion around ABR and the various formats – there are a lot of details to consider, and we’re happy to walk through them with you and make sure you understand the differences between each.
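The manifest-driven switching described above can be sketched as simple player-side logic – the rendition table and bitrates below are illustrative, not taken from any real manifest:

```python
# Renditions as an ABR master playlist might list them: (bitrate kbps, resolution).
# Values are made up for illustration.
RENDITIONS = [
    (400,  "416x234"),
    (800,  "640x360"),
    (1500, "960x540"),
    (3000, "1280x720"),
]

def pick_rendition(available_kbps):
    """Choose the highest-bitrate rendition the measured bandwidth can sustain."""
    playable = [r for r in RENDITIONS if r[0] <= available_kbps]
    return max(playable) if playable else min(RENDITIONS)  # fall back to lowest

print(pick_rendition(2000))  # (1500, '960x540')
```

A real player re-evaluates this choice continuously as bandwidth changes, which is what makes ABR playback feel smooth on both a phone on 3G and a desktop on fiber.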

Determining encoding file storage requirements

The challenge to generating adaptive bit rate encodes and supporting a
broad range of users and devices is that it increases the amount of
encoding hardware & software required which can lead to capacity
bottlenecks. Of course, no one wants to invest in more hardware/software
than is really necessary. It’s important to consider desired turn-around
times for processing of content (from source to output format) and
determine capacity requirements accordingly. As an example, 100 source files that need to be turned around in an hour will require more encoders than if those 100 files can be processed in 24 hours. Similarly, if you need to create multiple output files per source file and do it quickly, you’ll need to invest in more encoders to process content simultaneously – the cloud for encoding can be a good fit here.
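That turnaround math can be sketched as a back-of-the-envelope estimate – the 30-minutes-per-file figure below is an assumed example, and the sketch assumes each encoder handles one file at a time:

```python
import math

def encoders_needed(num_files, minutes_per_file, window_hours):
    """Minimum encoders to finish num_files within window_hours,
    assuming one file per encoder at a time."""
    total_minutes = num_files * minutes_per_file
    return math.ceil(total_minutes / (window_hours * 60))

print(encoders_needed(100, 30, 1))   # 50 encoders for a one-hour turnaround
print(encoders_needed(100, 30, 24))  # 3 encoders if 24 hours is acceptable
```

The steep difference between those two numbers is exactly why elastic cloud capacity is attractive for bursty encoding workloads.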

The cloud for video encoding

In case you missed our recent announcement, our new global platform was released last year and I encourage you to take a look if you haven’t yet. We’ve designed the servers in our fiber-connected content delivery network datacenters to use idle processors as a cloud encoder across your entire global video presence, as a cloud-based encoder on AWS (and other public or private cloud infrastructure), or both. The ability to deploy our software locally for
consistent encoding needs and scale out into a private or public cloud
with ease is something that uniquely distinguishes our server product
from other solutions. We don’t restrict our software by processor cores
or other hardware limitations so you can maximize your encoding
throughput per license. Additionally we provide flexible licensing
options so you can pay us per server – one time, monthly, or a
combination of both. The benefit of the cloud is the ability to scale up
encoders on demand so that capacity bottlenecks never occur. Once those
encoders are no longer needed, scale them back down – pay for only what
you need, when you need it.

There has been a tendency among customers to think the cloud (for encoding purposes) is complicated and difficult to configure, but that really isn’t the case. I can walk through the installation and configuration of a multi-server encoding farm with various processing queues in a matter of minutes.

We’ve been focused on working to solve the above challenges by developing
integrated and flexible software solutions. We’ve always tried to support
the broadest range of source and output formats and we’ve ensured this
continues across both our desktop and server products. One of the
difficulties of video encoding is that there are many, many formats for
the capture of video. We are often complimented on our ability to decode
source files where others fail and we continue to beef-up our
capabilities here.

Speed is always an issue when it comes to encoding and our desktop and
server products share significant improvements that we’ve made to the
core compression engine. Source content is segmented across all processor
cores and we utilize 100% of CPU to work on segments in parallel and
speed up the encoding of files. In most cases it is dramatically faster than our previous compression engine and worth evaluating – especially if you currently use an older version of Squeeze desktop or server.
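The segment-parallel approach described above can be sketched like this – `encode_segment` is a hypothetical stand-in for the real compression step, and a CPU-bound encoder would use a process pool with one worker per core, but threads keep the sketch simple:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(segment_id):
    """Stand-in for running the codec over one segment of the source."""
    return f"segment_{segment_id}.encoded"

def encode_parallel(num_segments, workers=4):
    """Encode all segments concurrently and return results in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_segment, range(num_segments)))

print(encode_parallel(3))  # ['segment_0.encoded', 'segment_1.encoded', 'segment_2.encoded']
```

Splitting the source into segments first is what lets every core stay busy, since a single long file would otherwise serialize the work.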

The integration of our desktop, server, and cloud solutions is innovative as well – and easy to set up. Whether you purchase our desktop or server
products, we include a free Sorenson 360 account which is perfect for
cloud storage of your video assets and for use as a review and approval
solution. We automate the process of encoding locally (or in the cloud),
wrap the video file in the appropriate player and configure for playback
across a wide range of devices, and establish an approval workflow
without the need to FTP, burn a disc, or send around a USB drive.

As I mentioned at the beginning of this post, we have a long list of streaming video clients with encoding questions who are facing issues that you may have too – I hope this information has been helpful. If I can answer any questions for you or assist in any way, please let me know.