Showing the PTS of each frame in the middle of the video

Here is a simple way with ffmpeg to clearly show the PTS of the frame currently playing, overlaid on the video. Note that the location of the font file will vary from system to system; this one is from Mac OS X.

ffmpeg -i oceans.mp4 -vf "drawtext=fontfile=/Library/Fonts/Tahoma.ttf:text='%{pts}': fontcolor=white: fontsize=50: x=(w-text_w)/2:y=(h-text_h)/2: box=1: boxcolor=black@0.2" -s 1920x1080 -c:v libx264 -preset ultrafast oceans-timestamp.mp4 -y
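
If you would rather see a human-readable timestamp than the raw PTS value, the pts text expansion also takes an hms format argument (same command, only the text expression and the output name change):

ffmpeg -i oceans.mp4 -vf "drawtext=fontfile=/Library/Fonts/Tahoma.ttf:text='%{pts\:hms}': fontcolor=white: fontsize=50: x=(w-text_w)/2:y=(h-text_h)/2: box=1: boxcolor=black@0.2" -s 1920x1080 -c:v libx264 -preset ultrafast oceans-timestamp-hms.mp4 -y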

Setting up a simple HLS live recording with DVR player

This is a very simple example of how to set up a recording of a live stream that is then saved as HLS for playback and rewind. The aim is that you could use this for rough-cut editing of clips off a live stream.

What you will need:

  • Server or desktop with the latest build of ffmpeg
  • HTTP server
  • Decent computer and basic HTML skills

1. Record a live stream as HLS

ffmpeg -i http://www.nasa.gov/multimedia/nasatv/NTV-Public-IPS.m3u8 -c copy -f segment -segment_list index.m3u8 -segment_time 10 -segment_format mpegts -segment_list_type m3u8 segment%05d.ts

This sample uses the NASA live stream (no guarantee it will still be there), but you should be able to take in just about any live stream. Note that I am just copying the codecs here; if you want to transcode, look at some of my other HLS posts.

This also assumes that you are running the command so that it writes directly into a directory on your web server, e.g. in my setup I could access the playlist at:

http://localhost/temp/recordtest/index.m3u8
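
While the recording is running, index.m3u8 keeps growing as segments are written; it should look roughly like the below (a sketch, durations and sequence numbers will vary):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXTINF:10.000000,
segment00000.ts
#EXTINF:10.000000,
segment00001.ts

Note that there is no #EXT-X-ENDLIST tag while the recording is in progress, which is what allows a player to treat it as a live stream with DVR rewind.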

2. Play the stream back with DVR functions

The best player I found for this was here: http://osmfhls.kutu.ru/

For testing you should just be able to take the stream above and paste it into the test URL, making sure you enable the DVR function first.


S3 write performance with yas3fs – 100fps

While S3 is no match for EBS SSDs, it is quite surprising what kind of performance you can get out of it when used as a standard filesystem.

Using yas3fs (https://github.com/danilop/yas3fs) to join together some video files, I am getting a solid 100fps with a 10-15Mbps source file, which is quite usable for general encoding workloads.

Note that it is important to make sure your EC2 machine and S3 bucket are in the same region.
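
For reference, mounting a bucket is a one-liner (a minimal sketch; the bucket name and mount point below are made up, and the yas3fs README covers the full option list):

yas3fs s3://my-video-bucket/work /mnt/s3work  # hypothetical bucket/prefix and mount point
ffmpeg -i "concat:/mnt/s3work/part1.ts|/mnt/s3work/part2.ts" -c copy /mnt/s3work/joined.ts

Once mounted it behaves like any other directory, so the concat line above is the same kind of join described in the split and stitch post further down.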


Converting WMA to high quality m4a audio

Here is a simple one-liner for converting WMA files to high quality m4a for import into iTunes.

Just run this command from the terminal in each folder that has WMA files in it. Note that this requires ffmpeg compiled with libfdk_aac, which is the highest quality AAC encoder available.

for f in *.wma; do ffmpeg -y -i "$f" -c:a libfdk_aac -b:a 192k "${f%.wma}.m4a"; done;

Or, if you would like to recursively convert all your files for a mass import into iTunes, just run this in the base directory and it will convert everything (it won't delete anything, but it is possible some files may error).

find . -type d | while read -r dir; do pushd "$dir" > /dev/null; for f in *.wma; do [ -e "$f" ] && ffmpeg -y -i "$f" -c:a libfdk_aac -b:a 192k "${f%.wma}.m4a"; done; popd > /dev/null; done

References for further info:

  • High quality audio encoding
  • AAC Encoding guide
  • ffmpeg OS X compilation guide

Or if you want the easier way to install ffmpeg via Homebrew you can just do

brew install ffmpeg --with-fdk-aac --with-ffplay --with-freetype --with-frei0r --with-libass --with-libvo-aacenc --with-libvorbis --with-libvpx --with-opencore-amr --with-openjpeg --with-opus --with-rtmpdump --with-schroedinger --with-speex --with-theora --with-tools


Details here: http://www.renevolution.com/how-to-install-ffmpeg-on-mac-os-x/



Simple split and stitch encoding with ffmpeg

Here is a very simplified example of split and stitch encoding with ffmpeg. Such a setup could be used for spreading encoding across a cluster, for parallel encoding of large files, or just for really fast encoding. It has some limitations, in that it needs more keyframes than is optimal for the best size/quality combination, but on the plus side it is compatible with segmented delivery of files. Note that the MPEG transport stream format has been used, as it is the most compatible for stitching back together.

Next I will do some more investigation into the GOP structure generated by ffmpeg in this scenario.

In the first section a source file is broken into 3 x 30s parts (note that this is not the whole clip and is just for demo purposes) and is transcoded into an H.264 (libx264) transport stream.

The second step is the stitching back together via the simple concat protocol; as the files are transport streams encoded with the same settings, this works well.

ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part1.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:30.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part2.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:01:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part3.ts
ffmpeg -y -i concat:part1.ts\|part2.ts\|part3.ts -c copy concat.ts
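
One way to make the GOP structure across the parts more predictable is to force a keyframe at a fixed interval in each part encode. A sketch of part 1, reusing the force_key_frames expression from the HLS commands further down to force a keyframe every 2 seconds:

ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:00.000 -t 30 -c:v libx264 -force_key_frames 'expr:gte(t,n_forced*2)' -s 640x360 -b:v 1000k part1.ts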

Options for HDS Packaging

Here are some of the options that I am aware of for packaging content as Adobe HDS. They are all commercial software.

  1. Adobe Media Server and the f4fpackager tool
  2. Wowza Media Server
  3. Unified Streaming Server
  4. Nginx HDS module

The specification for the manifest format from Adobe is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-media-manifest-specification.pdf

And the specification for HDS fragments and the complete setup is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-hds-specification.pdf

Other information: a PHP script that can join f4f/f4m: https://github.com/K-S-V/Scripts

Note that it also appears there is a ts2hds function in GPAC that requires further investigation, as it doesn't appear to be built by default: https://github.com/maki-rxrz/gpac
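
For Adobe's f4fpackager, the basic invocation is along the lines of the below (a sketch from memory of Adobe's documentation; the flag names and input file are assumptions, so verify against the tool's --help output):

f4fpackager --input-file=myvideo.f4v --segment-duration=10  # hypothetical flags and input name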

Creating a mosaic from a video and extracting frames for scene changes

This is a very cool feature buried down in the ffmpeg documentation that lets you generate a very nice mosaic of pictures from a video based on scene cuts.

Commands:

ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png
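
If you want control over the grid, the tile filter also takes an explicit layout; the 4x3 below is an arbitrary choice:

ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile=4x3 -frames:v 1 preview.png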

Sample result below

[Image: preview.png mosaic of scene-change thumbnails]

You can also use this to output an individual frame for every scene change; example follows:

ffmpeg -i ../source/dig_720p.mp4 -vf select='gt(scene\,0.6)' -vsync vfr preview%04d.png
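
If you also want to know which timestamps those frames came from, appending the showinfo filter after the select prints the pts_time of each selected frame to the console:

ffmpeg -i ../source/dig_720p.mp4 -vf select='gt(scene\,0.6)',showinfo -vsync vfr preview%04d.png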

The results of this could be used for a preview track of the video as per the below:

[Images: preview0001 through preview0020 thumbnails laid out as a preview strip]


Updated HLS encoding and packaging commands for ffmpeg

Here are some updated commands with the latest build of ffmpeg for encoding and packaging a file to HLS. Note that this example only covers one bitrate at present, and my previous posts still apply for multi-bitrate manifest creation.

Step 1: Create a TS mezzanine file (very useful for packaging to multiple formats)

ffmpeg -i ../source/redrock_720p.mp4 -s 1280x720 -c:v libx264 -c:a libfdk_aac -ar 44100 -bsf h264_mp4toannexb -force_key_frames 'expr:gte(t,n_forced*2)' -y -f mpegts redrock_mez_720p.ts
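
To check that the forced keyframes landed every 2 seconds, ffprobe can list each frame's type and timestamp (grep for the I frames):

ffprobe -select_streams v -show_frames -show_entries frame=pict_type,pkt_pts_time -of csv redrock_mez_720p.ts | grep I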

Step 2: Package as HLS

ffmpeg -i redrock_mez_720p.ts -c copy -map 0 -segment_list index_1400.m3u8 -segment_time 10 -segment_format mpegts -segment_list_type hls -f segment segment-%03d.ts -y

This creates an HLS stream with 10-second segments.
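
For completeness, the top-level playlist that would reference this rendition in a multi-bitrate setup looks roughly like the below (a sketch; the BANDWIDTH figure is only assumed from the _1400 naming):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1400000
index_1400.m3u8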

Sample output of encoded HLS is here: http://bucket01.mscreentv.com.s3.amazonaws.com/videos/redrock720p/index_1400.m3u8