Grabbing a single frame from a video using ffmpeg

Sometimes it is useful to grab a single frame from a video, for either promotional or debugging purposes. This command grabs a single frame; key things to note are:

-ss specifies the start time; note that at 25 fps a new frame occurs every 0.040 seconds
-t specifies the capture duration, which as per the above is set to one frame

ffmpeg -i /tmp/a4ba54bfc77e5f50eea219e4f0e1b51a.mp4 -ss 00:00:14.68 -t 00:00:00.04 -f image2 singleframe2.jpg
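As a sanity check on the -ss value, the seek time for a given frame can be computed from the frame rate; a quick sketch (the frame number here is a hypothetical value, not taken from the original clip):

```shell
#!/bin/sh
# At 25 fps each frame lasts 1/25 = 0.040 s, so frame N starts at N/25 seconds.
frame=367   # hypothetical frame number
fps=25
# 367 / 25 = 14.68 s, which lines up with the -ss 00:00:14.68 used above.
seek=$(awk "BEGIN { printf \"%.2f\", $frame / $fps }")
echo "$seek"
```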

Overlaying bitrate on adaptive bitrate streams for testing

Sometimes, when you don’t have full player control, it is very useful to be able to see which bitrate stream is playing. To do this I have included a sample below of encoding to different rates with the bitrate burnt in as an overlay.

Here I have selected mp4 as the output, as it is usually fairly straightforward to re-segment an mp4 as DASH, HLS, Smooth etc., and I will include some further examples of how to do that.

Here is a link to a simple shell script for creating the basic overlays on mp4 outputs:

Here is an example of taking the mp4 outputs and then fragmenting them with Bento and then packaging up as DASH:

Here is an example of the result using DASH:

And if you need it here is a sample DASH player:

And here is the code for the overlay script:

# Created by Andrew Sinclair on 11/07/2014.
# Set FONTFILE to a font file on your system.
# VIDSOURCE, OUTNAME, FONTSIZE, FONTCOLOR and the RESOLUTION1..7 /
# BITRATE1..7 variables must also be set before running.
echo "Encoding $VIDSOURCE"
AUDIO_OPTS="-c:a libfdk_aac -b:a 160000 -ac 2"
AUDIO_OPTS2="-c:a libfdk_aac -b:a 640000 -ac 2"
# Change the preset for better quality, e.g. to slow or medium; ultrafast is just for testing the output quickly
# TODO add options for keyframe intervals for best adaptive segmentation
VIDEO_OPTS1="-c:v libx264 -vprofile main -preset ultrafast"
VIDEO_OPTS2="-c:v libx264 -vprofile main -preset ultrafast"
VIDEO_OPTS3="-c:v libx264 -vprofile main -preset ultrafast"
OUTPUT_HLS="-f mp4"
~/Desktop/workspace/ffmpeg-mac/FFmpeg/ffmpeg -i "$VIDSOURCE" -y \
$AUDIO_OPTS -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION1 ${BITRATE1}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION1 $VIDEO_OPTS1 -b:v $BITRATE1 $OUTPUT_HLS ${OUTNAME}_${BITRATE1}.mp4 \
$AUDIO_OPTS -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION2 ${BITRATE2}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION2 $VIDEO_OPTS2 -b:v $BITRATE2 $OUTPUT_HLS ${OUTNAME}_${BITRATE2}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION3 ${BITRATE3}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION3 $VIDEO_OPTS3 -b:v $BITRATE3 $OUTPUT_HLS ${OUTNAME}_${BITRATE3}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION4 ${BITRATE4}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION4 $VIDEO_OPTS3 -b:v $BITRATE4 $OUTPUT_HLS ${OUTNAME}_${BITRATE4}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION5 ${BITRATE5}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION5 $VIDEO_OPTS3 -b:v $BITRATE5 $OUTPUT_HLS ${OUTNAME}_${BITRATE5}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION6 ${BITRATE6}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION6 $VIDEO_OPTS3 -b:v $BITRATE6 $OUTPUT_HLS ${OUTNAME}_${BITRATE6}.mp4 \
$AUDIO_OPTS2 -vf "drawtext=fontfile='${FONTFILE}':text='$RESOLUTION7 ${BITRATE7}bps':fontsize=${FONTSIZE}:fontcolor=${FONTCOLOR}:x=100:y=100:box=1" -s $RESOLUTION7 $VIDEO_OPTS3 -b:v $BITRATE7 $OUTPUT_HLS ${OUTNAME}_${BITRATE7}.mp4
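The seven near-identical output blocks above could also be generated in a loop rather than written out by hand. A dry-run sketch that just assembles each rendition's argument group (the resolutions, bitrates and output name here are illustrative, not the values from the original script):

```shell
#!/bin/sh
# Illustrative ladder: "RESxRES:bitrate" pairs. Substitute your own values.
RENDITIONS="640x360:600000 960x540:1200000 1280x720:2500000"
OUTNAME=demo
ARGS=""
for r in $RENDITIONS; do
  RES=${r%:*}    # part before the colon
  RATE=${r#*:}   # part after the colon
  # Build one per-rendition argument group, mirroring the repeated lines above.
  ARGS="$ARGS -s $RES -b:v $RATE -f mp4 ${OUTNAME}_${RATE}.mp4"
done
echo "$ARGS"
```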

Generating encodes and SMIL files for Akamai HD, Wowza etc

Here is a quick script I generated for encoding files and generating SMIL files for use with Akamai HD so they can be segmented on the fly as HDS or HLS.

Formatted details here:



echo "Encoding $VIDSOURCE"

AUDIO_OPTS="-c:a libfaac -b:a 160000 -ac 2"
AUDIO_OPTS2="-c:a libfaac -b:a 640000 -ac 2"
VIDEO_OPTS1="-c:v libx264 -vprofile main -preset slow"
VIDEO_OPTS2="-c:v libx264 -vprofile main -preset slow"
VIDEO_OPTS3="-c:v libx264 -vprofile main -preset slow"
OUTPUT_HLS="-f mp4"

~/Desktop/workspace/ffmpeg-mac/FFmpeg/ffmpeg -i $VIDSOURCE -y \


echo $MASTER > "$OUTNAME.smil"
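The SMIL itself is just a small XML file listing each rendition with its bitrate. A hedged sketch of generating one directly (the <video> element and system-bitrate attribute follow the common Wowza/Akamai SMIL form; the filenames and rates are illustrative, not from my script above):

```shell
#!/bin/sh
OUTNAME=demo   # illustrative output name
# Write a minimal SMIL switch group listing two hypothetical renditions.
cat > "$OUTNAME.smil" <<EOF
<smil>
  <body>
    <switch>
      <video src="mp4:${OUTNAME}_600000.mp4" system-bitrate="600000"/>
      <video src="mp4:${OUTNAME}_1200000.mp4" system-bitrate="1200000"/>
    </switch>
  </body>
</smil>
EOF
echo "wrote $OUTNAME.smil"
```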


CBR vs VBR in adaptive streaming

An interesting debate arose recently about whether CBR or VBR should be used when encoding for adaptive streaming. The debate started with my comment that adaptive streaming should use CBR, as it better allows the client to manage the bandwidth it is receiving. The issue with VBR is that if the client is receiving what it thinks is a 3Mbps stream, and scene complexity then spikes it up to, say, 6Mbps, the client's buffers fill a lot more slowly than expected for that bitrate and the client then shifts down.

There are a few factors at play here that need to be considered, these are:

  1. Real world bandwidth in a consumer environment available to a device can vary quite a lot
  2. Video encoding can demand varying amounts of bits to represent an image at a constant quality
  3. Encoding at a constant bit rate may produce video overhead from “stuffing” bits that unnecessarily consume storage and bandwidth

In relation to item 1, if we take a normal home environment, not only may the provider's upstream available bandwidth vary due to congestion, but other in-home factors come into play, from competition for limited bandwidth between multiple downloads to variations in signal strength over wifi.

In relation to item 2, the number of bits needed to encode 2 seconds of black versus a high-action CGI scene with a lot of colours or rippling water differs significantly.

This post is a work in progress but if anyone is interested leave me a note and I will follow up.

Note that the Apple encoding recommendations for HLS which are widely cited refer to a maximum VBR rate of 10% over the target rate.
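Apple's 10%-over-target guidance maps naturally onto x264's VBV controls (-maxrate / -bufsize). A quick sketch that just prints the constrained-VBR flags: the 110% maxrate follows the guideline above, while the bufsize choice is my own assumption:

```shell
#!/bin/sh
TARGET=3000                        # target video bitrate in kbps (illustrative)
MAXRATE=$(( TARGET * 110 / 100 ))  # cap the spike at 10% over target
BUFSIZE=$(( TARGET * 2 ))          # VBV buffer; 2x target is my assumption
# Prepend your input/output to actually run this against ffmpeg.
echo "-c:v libx264 -b:v ${TARGET}k -maxrate ${MAXRATE}k -bufsize ${BUFSIZE}k"
```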

Android Java based adaptive streaming client
Adaptive Video Streaming over HTTP with Dynamic Resource Estimation

Stabilising / deshaking GoPro (or other) videos with ffmpeg and libvidstab

If you have shot on a GoPro, particularly with the helmet mounts, you will be very familiar with the issue of camera shake. Users of YouTube may also have noticed that it has quite a nice stabilisation filter.

Now, thanks to Georg Martius, there is a great filter that you can include in your ffmpeg workflow for stabilising video, and it does quite a decent job.

1. First download the source

git clone
cd vid.stab
cmake .
make install

2. Configure your ffmpeg with --enable-libvidstab

./configure --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-libvidstab
make install

3. Using libvidstab is a 2-pass process, as you first need to detect the stabilisation issues, for example using default settings. Note that on the first pass you don’t really need an output file, as the real data gets written to a separate transforms file (transforms.trf by default).

ffmpeg -i myvideo.mp4 -vf vidstabdetect -f null -
ffmpeg -i myvideo.mp4 -vf vidstabtransform myvideo_stabilised.mp4
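The two passes are easy to wrap in a small shell function. A dry-run sketch that prints the commands it would run; the smoothing value and the trailing unsharp filter are my own common additions, not from the original:

```shell
#!/bin/sh
# Dry run: echo the two passes instead of executing them.
stabilise() {
  in=$1; out=$2
  # Pass 1: analyse the shake; vidstabdetect writes transforms.trf by default.
  echo ffmpeg -i "$in" -vf vidstabdetect -f null -
  # Pass 2: apply the transforms; the unsharp step is my own addition to
  # offset the slight blur that stabilisation can introduce.
  echo ffmpeg -i "$in" -vf "vidstabtransform=smoothing=30,unsharp=5:5:0.8:3:3:0.4" "$out"
}
stabilise myvideo.mp4 myvideo_stabilised.mp4
```

Drop the echo prefixes to run the passes for real.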

There are of course a lot more options you can use and see those here:



Building ffmpeg with libx265 for h265/hevc encoding

ffmpeg now has x265 support, and while it is still early days for the codec this is great news, as there are a number of players out there now too, not to mention that many devices now have the CPU to play back the codec. Note that quite a lot of services now support h265/hevc input, and as it has such a small footprint it can make quite a good file for transfer to cloud encoding.

Setup on OS X (likely similar for Linux; I will cover that at some point)

  1. Make sure you have cmake, e.g. brew install cmake; you will also need yasm
  2. You will also need mercurial to clone x265: brew install mercurial
  3. Also, if you already have ffmpeg installed via something like brew, uninstall that first: brew uninstall ffmpeg

Anyhow here are the simple steps:

1. Make sure you have a current build of ffmpeg checked out of git along with any other libs you are using e.g. libx264

2. Download the libx265 repository and build as per the instructions here: (note I assume no one still uses Windows for dev!)

hg clone
cd x265/build/linux
make install

3. Configure your ffmpeg with --enable-libx265 (it is disabled by default). Sample from my configure below:

./configure --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-libvidstab --enable-libx265
make install

4. You should now be ready to go, e.g.

ffmpeg -i myvideo.MP4 -c:v libx265 encodetest/myvideo.mkv

Update: now with mp4 support

ffmpeg -i anchorman2-trailer-ffmpeg.mp4 -y -s 640x360 -c:v libx265 -c:a libfdk_aac -profile:a aac_he -b:v 200k -b:a 32k anchorman2_640x360_x265.mp4

In some cases you may get an error like:

Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

And you will see above it:

x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
x265 [error]: Sample Aspect Ratio width must be greater than 0

There is a bug at present (2014-04-08) that requires the SAR to be set in the header. Note that you can fix this by doing a pre-encode that writes the SAR / DAR header, forcing the aspect with -aspect 16:9, e.g.

ffmpeg -i anchorman2-trailer.mp4 -y -c:v libx264 -c:a copy -aspect 16:9 -crf 0 anchorman2-trailer-ffmpeg.mp4

Some things to note:

  • Moderately slow! It is now much faster than it used to be, and I am getting 17fps without tweaking, which is pretty good
  • Playback of a 1080P HD clip used 400% CPU on my i7 based laptop; size was
  • Compared to a 2000kbps x264 encode of the same file, quality was very good!


Using ffmpeg with Akamai HD

Quite often it is useful to put up a test stream or source with a new CDN configuration for testing purposes. This works with authenticated rtmp, which is required when connecting to Akamai or many Adobe/Flash Media Server servers.

ffmpeg -re -f lavfi -i testsrc=size=1920x1080 -c:v libx264 -b:v 500k -an -s 1920x1080 -x264opts keyint=50 -g 25 -pix_fmt yuv420p -f flv rtmp://<USERNAME>:<PASSWORD>@p.<CPCODE><STREAMID>

You need to get the username, password, entrypoint name and stream id from your Akamai account (Configure -> Live Media).

You can then play it back using the URLs you also see in your Akamai control panel.

Note that for the RTMP to be converted to HLS or HDS you need to make sure you have frequent enough keyframes, which is what the keyint directive is for.
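The keyframe arithmetic is simply interval = fps x segment duration, so the keyint=50 above yields 2-second fragments at 25 fps. A trivial check (values illustrative):

```shell
#!/bin/sh
FPS=25
SEGMENT_SECONDS=2
KEYINT=$(( FPS * SEGMENT_SECONDS ))  # keyframe interval in frames
echo "-x264opts keyint=$KEYINT gives ${SEGMENT_SECONDS}s fragments at ${FPS}fps"
```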

Building ffmpeg with librtmp

librtmp is one option for passing additional parameters through to Akamai; however, you can also just use HTTP-style authentication with rtmp://username:password@entrypoint/stream to connect to an Adobe Media Server.

This support is available in librtmp, which can be built and included in ffmpeg.

git clone

For Mac OSX we set the target to darwin; for Linux use posix, or just leave SYS= out, as posix is the default.

make SYS=darwin

Output should look as follows:

gcc -dynamiclib -twolevel_namespace -undefined dynamic_lookup -fno-common -headerpad_max_install_names -install_name /usr/local/lib/librtmp.0.dylib -o librtmp.0.dylib rtmp.o log.o amf.o hashswf.o parseurl.o -lssl -lcrypto -lz 
ln -sf librtmp.0.dylib librtmp.dylib
gcc -Wall -DRTMPDUMP_VERSION=\"v2.4\" -O2 -c -o rtmpdump.o rtmpdump.c
gcc -Wall -o rtmpdump rtmpdump.o -Llibrtmp -lrtmp -lssl -lcrypto -lz 
gcc -Wall -DRTMPDUMP_VERSION=\"v2.4\" -O2 -c -o rtmpgw.o rtmpgw.c
gcc -Wall -DRTMPDUMP_VERSION=\"v2.4\" -O2 -c -o thread.o thread.c
gcc -Wall -o rtmpgw rtmpgw.o thread.o -lpthread -Llibrtmp -lrtmp -lssl -lcrypto -lz 
gcc -Wall -DRTMPDUMP_VERSION=\"v2.4\" -O2 -c -o rtmpsrv.o rtmpsrv.c
gcc -Wall -o rtmpsrv rtmpsrv.o thread.o -lpthread -Llibrtmp -lrtmp -lssl -lcrypto -lz 
gcc -Wall -DRTMPDUMP_VERSION=\"v2.4\" -O2 -c -o rtmpsuck.o rtmpsuck.c
gcc -Wall -o rtmpsuck rtmpsuck.o thread.o -lpthread -Llibrtmp -lrtmp -lssl -lcrypto -lz

Then install (note for OSX you need to specify darwin)

sudo make SYS=darwin install
mkdir -p /usr/local/bin /usr/local/sbin /usr/local/man/man1 /usr/local/man/man8
cp rtmpdump /usr/local/bin
cp rtmpgw rtmpsrv rtmpsuck /usr/local/sbin
cp rtmpdump.1 /usr/local/man/man1
cp rtmpgw.8 /usr/local/man/man8
sed -e "s;@prefix@;/usr/local;" -e "s;@libdir@;/usr/local/lib;" \
 -e "s;@VERSION@;v2.4;" \
 -e "s;@CRYPTO_REQ@;libssl,libcrypto;" \
 -e "s;@PRIVATE_LIBS@;;" > librtmp.pc
mkdir -p /usr/local/include/librtmp /usr/local/lib/pkgconfig /usr/local/man/man3 /usr/local/lib
cp amf.h http.h log.h rtmp.h /usr/local/include/librtmp
cp librtmp.a /usr/local/lib
cp librtmp.pc /usr/local/lib/pkgconfig
cp librtmp.3 /usr/local/man/man3
cp librtmp.0.dylib /usr/local/lib
cd /usr/local/lib; ln -sf librtmp.0.dylib librtmp.dylib

On trying a test build of ffmpeg with the new library (./configure --enable-librtmp) it appeared my PKG_CONFIG_PATH was not correct, so I updated it:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

I then searched for the package and all was OK:

pkg-config --libs librtmp

Now you can build ffmpeg with the new libraries. I tend to just run my local ./ffmpeg to get the other configure settings I last used; this is now:

./configure --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libaacplus --enable-libcelt --enable-libfaac --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-openssl --enable-libopus --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-libvidstab --prefix=/usr/local --enable-librtmp

Now, for some reason my ffmpeg build started failing on linking x264, which it hadn’t done before, so I had to add --cc=clang to change the compiler. Maybe due to one of the recent Xcode updates; I will check.

LD ffmpeg_g
Undefined symbols for architecture x86_64:
 "_x264_encoder_open_129", referenced from:
 _X264_init in libavcodec.a(libx264.o)
ld: symbol(s) not found for architecture x86_64
collect2: ld returned 1 exit status
make: *** [ffmpeg_g] Error 1

With clang in place this built but I still get an error on connecting to Akamai:

ffmpeg -i udp:// -s 512x288 -aspect 16:9 -profile baseline -b 500k -vcodec libx264 -acodec libmp3lame -ar 44100 -ab 64k -ac 2 -deinterlace -coder 0 -f flv 'rtmp:// flashver=FMLE/3.0\20(compatible;\20FMSc/1.0) live=true pubUser='User' pubPasswd='Password' playpath=live_chan1_999@12345'



Anatomy of a successful connection (using Flash Live Media Encoder)


nonprivate..flashVer…FMLE/3.0 (compatible; FMSc/1.0)..swfUrl…rtmp://


_result.?……….fmsVer…FMS/4,5,5,4013..capabilities.@o……..mode.?………….level…status..code…NetConnection.Connect.Success..description…Connection succeeded…objectEncoding…………….version..



Recording a live HLS stream to a file for a specific time period

This is an example of how to record a live HLS stream to a file for a specific time period. Useful for DVR-like functions on live adaptive streams (noting that at the time of writing multi-rate adaptive support in ffmpeg isn’t great). I have selected ts as the output as this has the minimum overhead, though you could easily go to MP4 or equivalent.

This records the live stream to a file for 30 seconds:

ffmpeg -re -i <HLS_PLAYLIST_URL> -ss 00:00:00.0 -t 00:00:30.0 -c:v copy -c:a copy test_record.ts -y
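If you need more than a single fixed window, ffmpeg's segment muxer can split a long recording into fixed-length chunks instead of one growing file. A dry-run sketch (the playlist URL placeholder and the 1800-second chunk length are my assumptions):

```shell
#!/bin/sh
CHUNK=1800   # seconds per output file (illustrative)
# Printed rather than executed, since it needs a live HLS source.
CMD="ffmpeg -re -i <HLS_PLAYLIST_URL> -c copy -f segment -segment_time $CHUNK -strftime 1 rec_%Y%m%d_%H%M%S.ts"
echo "$CMD"
```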

Live encoding with ffmpeg

ffmpeg is without doubt one of the best (if not the best!) file based encoders out there. However, getting it to run as a 24×7 live encoder can be somewhat tricky, as one of the main issues with ffmpeg is that there are few options to handle and retry failure conditions, which is perfectly acceptable for a file based encoder.

To accommodate this scenario I have been working on some simple scripts that can wrap ffmpeg to produce 24×7 live streams.

This is a work in progress and my wrapper code is available here:
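A minimal version of such a retry wrapper, sketched with a retry cap and demonstrated against a placeholder command (false) rather than a real ffmpeg invocation:

```shell
#!/bin/sh
# Restart the wrapped command whenever it exits abnormally, pausing between
# attempts; a retry cap keeps the demonstration finite.
run_forever() {
  max=$1; shift
  n=0
  while [ "$n" -lt "$max" ]; do
    "$@" && return 0   # clean exit: stop retrying
    n=$(( n + 1 ))
    sleep 1            # brief back-off before restarting
  done
  return 1
}
# Real use would be something like:
#   run_forever 999999 ffmpeg -re -i <SOURCE> ... -f flv rtmp://...
run_forever 3 false || echo "gave up after 3 attempts"
```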