Simple split and stitch encoding with ffmpeg

Here is a very simplified example of split and stitch encoding with ffmpeg. Such a setup could be used to spread encoding across a cluster for parallel encoding of large files, or just for very fast encoding. It has some limitations in that it produces more keyframes than is optimal for the best size/quality trade-off, but on the plus side the output is compatible with segmented delivery of files. Note that the MPEG transport stream format has been used as it is the most compatible for stitching back together.

Next I will do some more investigation into the GOP structure generated by ffmpeg in this scenario.

In the first step a source file is broken into 3 x 30s parts (note that this is not the whole clip and is just for demo purposes) and each part is transcoded into an H.264 (libx264) transport stream.

The second step is stitching the parts back together via the simple concat protocol; as the files are transport streams encoded with the same settings this works well.

ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part1.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:30.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part2.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:01:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part3.ts
ffmpeg -y -i concat:part1.ts\|part2.ts\|part3.ts -c copy concat.ts
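
The same approach generalises to more parts with a small wrapper script. Below is a minimal sketch, assuming a hypothetical source.mp4 that is long enough for the requested number of parts; each encode runs as a background job, and on a real cluster each would instead be dispatched to a separate worker:

#!/bin/bash
# split_stitch.sh - sketch of split and stitch encoding in parallel
SOURCE=source.mp4
SEGLEN=30   # seconds per part
PARTS=3
for i in $(seq 0 $((PARTS - 1))); do
  # encode each part in the background from its offset in the source
  ffmpeg -y -i "$SOURCE" -ss $((i * SEGLEN)) -t $SEGLEN \
    -c:v libx264 -s 640x360 -b:v 1000k part$((i + 1)).ts &
done
wait
# join the parts back together with the concat protocol (part1.ts|part2.ts|...)
LIST=$(ls part*.ts | sort | tr '\n' '|' | sed 's/|$//')
ffmpeg -y -i "concat:$LIST" -c copy concat.ts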

Options for HDS Packaging

Here are some of the options that I am aware of for packaging content as Adobe HDS. They are all commercial software:

  1. Adobe Media Server and the f4fpackager tool
  2. Wowza Media Server
  3. Unified Streaming Server
  4. Nginx HDS module

The specification for the manifest format from Adobe is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-media-manifest-specification.pdf

And the specification for HDS fragments and the complete setup is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-hds-specification.pdf

Other information: a PHP script that can join f4f/f4m is here: https://github.com/K-S-V/Scripts

Note that there also appears to be a ts2hds tool in gpac that requires further investigation, as it doesn't appear to be built by default: https://github.com/maki-rxrz/gpac

Creating a mosaic from a video and extracting frames for scene changes

This is a very cool feature buried in the ffmpeg documentation that lets you generate a very nice mosaic of pictures from a video based on scene cuts.

Commands:

ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png

Sample result below:

(mosaic preview image)

You can also use this to output an individual frame for every scene change; an example follows:

ffmpeg -i ../source/dig_720p.mp4 -vf select='gt(scene\,0.6)' -vsync vfr preview%04d.png
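
If you also want the timestamps of the detected scene changes (e.g. to line the thumbnails up with a seek bar), ffprobe can run the same select filter via the lavfi movie source and print the presentation time of each selected frame. A minimal sketch, assuming dig_720p.mp4 is in the current directory:

ffprobe -f lavfi "movie=dig_720p.mp4,select=gt(scene\,0.6)" \
  -show_entries frame=pkt_pts_time -of csv=p=0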

The resulting frames could be used as a preview track for the video, as below:

(extracted frames preview0001.png through preview0020.png)


Updated HLS encoding and packaging commands for ffmpeg

Here are some updated commands with the latest build of ffmpeg for encoding and packaging a file to HLS. Note that this example only covers one bitrate at present; my previous posts still apply for multi-bitrate manifest creation.

Step 1: Create a TS mezzanine file (very useful for packaging to multiple formats)

ffmpeg -i ../source/redrock_720p.mp4 -s 1280x720 -c:v libx264 -c:a libfdk_aac -ar 44100 -bsf h264_mp4toannexb -force_key_frames 'expr:gte(t,n_forced*2)' -y -f mpegts redrock_mez_720p.ts
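
The -force_key_frames expression above inserts a keyframe every 2 seconds so that segment boundaries later fall on keyframes. To sanity-check the keyframe spacing in the mezzanine, ffprobe can list the timestamp of every I-frame; a quick sketch:

ffprobe -select_streams v -show_frames \
  -show_entries frame=pkt_pts_time,pict_type -of csv redrock_mez_720p.ts | grep ",I"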

Step 2: Package as HLS

ffmpeg -y -i redrock_mez_720p.ts -c copy -map 0 -segment_list index_1400.m3u8 -segment_time 10 -segment_format mpegts -segment_list_type hls -f segment segment-%03d.ts

This creates an HLS playlist with 10-second segments.
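
For multiple bitrates you would repeat the packaging step per rendition and tie the variant playlists together with a master playlist. A minimal hand-written sketch of such a master playlist follows; the second rendition and the BANDWIDTH values are assumptions for illustration:

cat > master.m3u8 << 'EOF'
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
index_1400.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
index_800.m3u8
EOF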

Sample output of encoded HLS is here: http://bucket01.mscreentv.com.s3.amazonaws.com/videos/redrock720p/index_1400.m3u8

Grabbing a single frame from a video using ffmpeg

Sometimes it is useful to grab a single frame from a video for either promotional or debugging purposes. This command grabs a single frame; key things to note are:

-ss specifies the start time; note that at 25 fps a frame occurs every 00:00:00.040 seconds
-t specifies the capture duration, which per the above is one frame

ffmpeg -i /tmp/a4ba54bfc77e5f50eea219e4f0e1b51a.mp4 -ss 00:00:14.68 -t 00:00:00.04 -f image2 singleframe2.jpg
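
An alternative that avoids working out frame durations is to ask for exactly one video frame with -frames:v. A sketch with a hypothetical input name, and with -ss moved before -i so ffmpeg seeks in the input (faster on long files, though some older builds may land on the nearest keyframe):

ffmpeg -ss 00:00:14.68 -i input.mp4 -frames:v 1 -y singleframe.jpg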

Overlaying bitrate on adaptive bitrate streams for testing

Sometimes it is very useful, if you don't have full player control, to be able to see which bitrate stream is playing. To do this I have included a sample below that encodes to different rates with the bitrate overlaid on the picture.

Here I have selected mp4 as the output as it is usually fairly straightforward to re-segment an mp4 as DASH, HLS, Smooth etc., and I will include some further examples of how to do that.

Here is a link to a simple shell script for creating the basic overlays on mp4 outputs: https://gist.github.com/sinkers/c4d39960018bca3540d4

Here is an example of taking the mp4 outputs and then fragmenting them with Bento and then packaging up as DASH: https://gist.github.com/sinkers/5cc2854e01a05a2db650

Here is an example of the result using DASH: http://mi9stuff.s3.amazonaws.com/overlay_dash2/manifest.mpd

And if you need it here is a sample DASH player: http://mi9stuff.s3.amazonaws.com/dash.js/

And here is the code for the script:

#!/bin/bash
# encode_bitrate_overlay.sh
#
# Created by Andrew Sinclair on 11/07/2014.
#
VIDSOURCE=$1
OUTNAME=$2
RESOLUTION1="320x180"
RESOLUTION2="512x288"
RESOLUTION3="640x360"
RESOLUTION4="960x540"
RESOLUTION5="1024x576"
RESOLUTION6="1280x720"
RESOLUTION7="1920x1080"
BITRATE1="400000"
BITRATE2="800000"
BITRATE3="1000000"
BITRATE4="1200000"
BITRATE5="1400000"
BITRATE6="2000000"
BITRATE7="4000000"
# Set this to a font file on your system
FONTFILE="/opt/X11/share/fonts/TTF/Vera.ttf"
FONTSIZE="40"
FONTCOLOR="black"
echo "Encoding $VIDSOURCE"
AUDIO_OPTS="-c:a libfdk_aac -b:a 160000 -ac 2"
AUDIO_OPTS2="-c:a libfdk_aac -b:a 640000 -ac 2"
# Change the preset for better quality e.g. to slow or medium, ultrafast is just for testing the output quickly
# TODO add options for keyframe intervals for best adaptive segmentation
VIDEO_OPTS1="-c:v libx264 -profile:v main -preset ultrafast"
VIDEO_OPTS2="-c:v libx264 -profile:v main -preset ultrafast"
VIDEO_OPTS3="-c:v libx264 -profile:v main -preset ultrafast"
OUTPUT_HLS="-f mp4"
~/Desktop/workspace/ffmpeg-mac/FFmpeg/ffmpeg -i $VIDSOURCE -y \
$AUDIO_OPTS -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION1 ${BITRATE1}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION1 $VIDEO_OPTS1 -b:v $BITRATE1 $OUTPUT_HLS ${OUTNAME}_${BITRATE1}.mp4 \
$AUDIO_OPTS -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION2 ${BITRATE2}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION2 $VIDEO_OPTS2 -b:v $BITRATE2 $OUTPUT_HLS ${OUTNAME}_${BITRATE2}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION3 ${BITRATE3}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION3 $VIDEO_OPTS3 -b:v $BITRATE3 $OUTPUT_HLS ${OUTNAME}_${BITRATE3}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION4 ${BITRATE4}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION4 $VIDEO_OPTS3 -b:v $BITRATE4 $OUTPUT_HLS ${OUTNAME}_${BITRATE4}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION5 ${BITRATE5}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION5 $VIDEO_OPTS3 -b:v $BITRATE5 $OUTPUT_HLS ${OUTNAME}_${BITRATE5}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION6 ${BITRATE6}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION6 $VIDEO_OPTS3 -b:v $BITRATE6 $OUTPUT_HLS ${OUTNAME}_${BITRATE6}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION7 ${BITRATE7}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION7 $VIDEO_OPTS3 -b:v $BITRATE7 $OUTPUT_HLS ${OUTNAME}_${BITRATE7}.mp4
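
Usage is just the source and an output prefix, e.g. (assuming ffmpeg is on your PATH rather than at the hard-coded build location above):

./encode_bitrate_overlay.sh source.mp4 overlay_test

This produces overlay_test_400000.mp4 through overlay_test_4000000.mp4, each with its resolution and bitrate burnt into the picture.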

Generating encodes and SMIL files for Akamai HD, Wowza etc

Here is a quick script I generated for encoding files and generating SMIL files for use with Akamai HD so they can be segmented on the fly as HDS or HLS.

Formatted details here: https://gist.github.com/sinkers/148a39f8d926a443501a

or inline below:


#!/bin/bash
VIDSOURCE=$1
OUTNAME=$2
RESOLUTION1="320x180"
RESOLUTION2="512x288"
RESOLUTION3="640x360"
RESOLUTION4="960x540"
RESOLUTION5="1024x576"
RESOLUTION6="1280x720"
RESOLUTION7="1920x1080"
BITRATE1="400000"
BITRATE2="800000"
BITRATE3="1000000"
BITRATE4="1200000"
BITRATE5="1400000"
BITRATE6="2000000"
BITRATE7="4000000"

echo "Encoding $VIDSOURCE"

AUDIO_OPTS="-c:a libfaac -b:a 160000 -ac 2"
AUDIO_OPTS2="-c:a libfaac -b:a 640000 -ac 2"
VIDEO_OPTS1="-c:v libx264 -profile:v main -preset slow"
VIDEO_OPTS2="-c:v libx264 -profile:v main -preset slow"
VIDEO_OPTS3="-c:v libx264 -profile:v main -preset slow"
OUTPUT_HLS="-f mp4"

~/Desktop/workspace/ffmpeg-mac/FFmpeg/ffmpeg -i $VIDSOURCE -y \
$AUDIO_OPTS -s $RESOLUTION1 $VIDEO_OPTS1 -b:v $BITRATE1 $OUTPUT_HLS ${OUTNAME}_${BITRATE1}.mp4 \
$AUDIO_OPTS -s $RESOLUTION2 $VIDEO_OPTS2 -b:v $BITRATE2 $OUTPUT_HLS ${OUTNAME}_${BITRATE2}.mp4 \
$AUDIO_OPTS2 -s $RESOLUTION3 $VIDEO_OPTS3 -b:v $BITRATE3 $OUTPUT_HLS ${OUTNAME}_${BITRATE3}.mp4 \
$AUDIO_OPTS2 -s $RESOLUTION4 $VIDEO_OPTS3 -b:v $BITRATE4 $OUTPUT_HLS ${OUTNAME}_${BITRATE4}.mp4 \
$AUDIO_OPTS2 -s $RESOLUTION5 $VIDEO_OPTS3 -b:v $BITRATE5 $OUTPUT_HLS ${OUTNAME}_${BITRATE5}.mp4 \
$AUDIO_OPTS2 -s $RESOLUTION6 $VIDEO_OPTS3 -b:v $BITRATE6 $OUTPUT_HLS ${OUTNAME}_${BITRATE6}.mp4 \
$AUDIO_OPTS2 -s $RESOLUTION7 $VIDEO_OPTS3 -b:v $BITRATE7 $OUTPUT_HLS ${OUTNAME}_${BITRATE7}.mp4

# Build a SMIL switch listing each rendition and its bitrate (standard Wowza/Akamai layout)
MASTER="<smil> \
<head></head> \
<body> \
<switch> \
<video src=\"${OUTNAME}_${BITRATE1}.mp4\" system-bitrate=\"${BITRATE1}\"/> \
<video src=\"${OUTNAME}_${BITRATE2}.mp4\" system-bitrate=\"${BITRATE2}\"/> \
<video src=\"${OUTNAME}_${BITRATE3}.mp4\" system-bitrate=\"${BITRATE3}\"/> \
<video src=\"${OUTNAME}_${BITRATE4}.mp4\" system-bitrate=\"${BITRATE4}\"/> \
<video src=\"${OUTNAME}_${BITRATE5}.mp4\" system-bitrate=\"${BITRATE5}\"/> \
<video src=\"${OUTNAME}_${BITRATE6}.mp4\" system-bitrate=\"${BITRATE6}\"/> \
<video src=\"${OUTNAME}_${BITRATE7}.mp4\" system-bitrate=\"${BITRATE7}\"/> \
</switch> \
</body> \
</smil>"

echo "$MASTER" > "$OUTNAME.smil"
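
Usage follows the same pattern as the overlay script (the script name here is just what I saved the gist as):

./encode_smil.sh source.mp4 myvideo

This produces myvideo_400000.mp4 through myvideo_4000000.mp4 plus myvideo.smil listing each rendition, ready to upload alongside the video files.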


CBR vs VBR in adaptive streaming

An interesting debate arose recently about whether CBR or VBR should be used when encoding for adaptive streaming. The debate started with my comment that adaptive streaming should use CBR, as it better allows the client to manage what bandwidth it is receiving. The issue with VBR is that if the client is receiving what it thinks is a 3Mbps stream, and scene complexity spikes it up to say 6Mbps, the client's buffers fill a lot slower than expected for that bitrate and the client then shifts down.

There are a few factors at play here that need to be considered:

  1. Real world bandwidth available to a device in a consumer environment can vary quite a lot
  2. Video encoding can demand varying amounts of bits to represent an image at a constant quality
  3. Encoding at a constant bit rate may add overhead from "stuffing" bits that unnecessarily consume storage and bandwidth

In relation to item 1, if we take a normal home environment, not only may the provider's upstream bandwidth vary due to congestion, but other in-home factors come into play, from competition for limited bandwidth between multiple downloads to variations in signal strength over wifi.

In relation to item 2, the number of bits needed to encode 2 seconds of black is vastly different from that needed for a high-action CGI scene with a lot of colours or rippling water.

This post is a work in progress but if anyone is interested leave me a note and I will follow up.

Note that the widely cited Apple encoding recommendations for HLS refer to a maximum VBR rate of 10% over the target rate.
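
In ffmpeg terms that style of constrained VBR is normally expressed with -maxrate and -bufsize on top of the target bitrate. A minimal sketch for a 3Mbps rendition capped at roughly 10% over target (the input name and buffer size are assumptions):

ffmpeg -i input.mp4 -c:v libx264 -b:v 3000k -maxrate 3300k -bufsize 6000k -c:a copy capped_vbr.mp4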

References:
Android Java based adaptive streaming client
Adaptive Video Streaming over HTTP with Dynamic Resource Estimation

Setting up an Android emulator for testing TV apps

This is a pretty basic overview of how to set up the Android emulator to work somewhat like a TV device if you are testing a UI.

First we are going to assume that you want to emulate the basic controls of a simple remote, e.g. up/down/left/right, back and menu. Emulating a more advanced remote will be the subject of a more detailed post.

You are going to need the Android Developer Tools installed, and then you need to open the Android Virtual Device Manager.

Essentially what we are doing here is:

  1. Creating an essentially blank AVD based on a 4.2.2 operating system image
  2. Setting it so the hardware dpad is enabled, as this is what we will use for basic remote emulation (see the adb sketch after this list)
  3. Setting it so that the hardware keys (menu, back) are enabled
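
Once the AVD is running, the dpad and hardware keys can also be driven from the command line, which is handy for scripting UI tests. A sketch using standard adb keyevents (on some older images you may need the numeric codes instead, e.g. 19-22 for the dpad and 4 for back):

adb shell input keyevent KEYCODE_DPAD_DOWN
adb shell input keyevent KEYCODE_DPAD_RIGHT
adb shell input keyevent KEYCODE_ENTER
adb shell input keyevent KEYCODE_BACK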

Details of my config.ini are here:

avd.ini.encoding=ISO-8859-1
abi.type=armeabi-v7a
disk.dataPartition.size=200M
hw.accelerometer=yes
hw.audioInput=yes
hw.battery=yes
hw.camera.back=none
hw.camera.front=none
hw.cpu.arch=arm
hw.cpu.model=cortex-a8
hw.dPad=yes
hw.device.hash2=MD5:6930e145748b87e87d3f40cabd140a41
hw.device.manufacturer=Generic
hw.device.name=4.65in 720p (Galaxy Nexus)
hw.gps=yes
hw.keyboard=yes
hw.lcd.density=320
hw.mainKeys=yes
hw.ramSize=1024
hw.sdCard=no
hw.sensors.orientation=yes
hw.sensors.proximity=yes
hw.trackBall=no
image.sysdir.1=system-images/android-14/armeabi-v7a/
skin.dynamic=yes
skin.name=1280x720
skin.path=1280x720
tag.display=Default
tag.id=default
vm.heapSize=64

Accessing a network service on the emulator can be done with a bit of extra setup. The following example shows how to access a service running over HTTP on the emulator on port 6999:

  1. Telnet to the console port of the emulator, e.g. telnet localhost 5554
  2. Type: redir add tcp:5010:6999
  3. You can now access http://localhost:5010/remote from the host, just as if the service were running on a device available on the local network
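
Put together, a console session looks something like the below; 5554 is the default console port of the first running emulator, and redir list just confirms the rule took effect. A quick check from another shell with curl (the /remote path is from the example above) then hits the service inside the emulator:

telnet localhost 5554
redir add tcp:5010:6999
redir list
quit

curl http://localhost:5010/remote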