libx265 encode test, how low can you go for 720P

Here is a really quick encoding test showing some of the output from libx265. Being a quick test, it is certainly not an accurate best-quality comparison, but what it does show is not-too-bad playback at 187 kbps video (which is very low for 720p!).

Some details of the source (some nature footage off YouTube):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Planet Earth - Amazing nature scenery (1080p HD)-6v2L2UGZJAM.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2014-04-25 01:07:55
  Duration: 00:13:28.45, start: 0.000000, bitrate: 2292 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 2097 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 192 kb/s (default)
    Metadata:
      creation_time   : 2014-04-25 01:09:08
      handler_name    : IsoMedia File Produced by Google, 5-11-2011

And the result:

  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    encoder         : Lavf56.3.100
  Duration: 00:13:28.50, start: 0.046440, bitrate: 324 kb/s
    Stream #0:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], 187 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : SoundHandler

To give you the idea, I have joined the source and the result together in a quick side-by-side demo: the one on the left is the source and the one on the right is the result. The one on the right is obviously not as good, but you are comparing ~2000 kbps with <200 kbps, so it is not too bad.
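For reference, a command along these lines would produce a similar output. The exact options used for the test weren't given, so the bitrates below are simply read off the result above and the filenames are placeholders; note that older ffmpeg builds also need -strict experimental to use the native AAC encoder.

ffmpeg -i source.mp4 -c:v libx265 -b:v 187k -c:a aac -b:a 128k out_720p.mp4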


Converting WMA to high quality m4a audio

Here is a simple one-liner for converting WMA files to high-quality M4A for import into iTunes.

Just run the command below from the terminal in each folder that has WMA files in it. Note that this requires an ffmpeg build compiled with libfdk_aac, which is the highest quality AAC encoder.
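You can quickly check whether your build includes it (a small check, not in the original post):

ffmpeg -encoders 2>/dev/null | grep fdk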

for f in *.wma; do ffmpeg -y -i "$f" -c:a libfdk_aac -b:a 192k "${f%.wma}.m4a"; done;

Or if you would like to recursively convert all your files for a mass import into iTunes (this won’t delete anything but it is possible some files may error), just run this in the base directory and it will convert everything.

find . -type d | while read -r dir; do pushd "$dir"; for f in *.wma; do [ -e "$f" ] && ffmpeg -y -i "$f" -c:a libfdk_aac -b:a 192k "${f%.wma}.m4a"; done; popd; done;

References for further info:

High quality audio encoding

AAC Encoding guide

ffmpeg OS X compilation guide

Or if you want the easier way to install ffmpeg, with Homebrew you can just do:

brew install ffmpeg --with-fdk-aac --with-ffplay --with-freetype --with-frei0r --with-libass --with-libvo-aacenc --with-libvorbis --with-libvpx --with-opencore-amr --with-openjpeg --with-opus --with-rtmpdump --with-schroedinger --with-speex --with-theora --with-tools


Details here: http://www.renevolution.com/how-to-install-ffmpeg-on-mac-os-x/


Debugging Smart TVs and other devices with a transparent proxy

If you are like me, do a lot of work on different devices, and need to debug what is going on, this little trick can be invaluable. It only takes about 3 minutes to set up.

Tools you will need:

  1. An OS X based machine (as all good multi-device developers should have; this could also be done on Linux)
  2. A device to debug
  3. A copy of Charles Proxy (also an essential tool for developers - http://www.charlesproxy.com/)
  4. A wifi or fixed network with both devices on it

What we are going to do is set up the OS X machine as a router and forward any traffic on ports 80 and 443 to Charles, making sure that Charles has transparent proxy mode enabled. Note that without the proxy's SSL certs installed on the device you may need to drop the 443 forwarding.

  1. Enable IP forwarding:
    sudo sysctl -w net.inet.ip.forwarding=1
  2. Place the following two lines in a file called, say, pf.conf:
    rdr on en2 inet proto tcp to any port 80 -> 127.0.0.1 port 8080
    rdr on en2 inet proto tcp to any port 443 -> 127.0.0.1 port 8080

    These rules tell pf to redirect all traffic destined for port 80 or 443 to the local proxy instance listening on port 8080 (Charles in our case; the rules themselves come from the mitmproxy docs mentioned below). You should replace en2 with the interface on which your test device will appear.

  3. Configure pf with the rules:
    sudo pfctl -f pf.conf
  4. And now enable it:
    sudo pfctl -e
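To check that the redirect rules were loaded, and to reverse the whole setup once you are finished debugging, the following commands help (an addition to the original steps):

sudo pfctl -s nat        # confirm the rdr rules loaded
sudo pfctl -d            # disable pf when you are done
sudo sysctl -w net.inet.ip.forwarding=0   # and turn IP forwarding back off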

Note that I borrowed this from the mitmproxy docs. mitmproxy itself is something I will definitely be trying out, as it sounds like my kind of proxy, even though Charles is handy for the formatted JSON/XML views.

Now you only need to configure the default gateway of your device to point at the interface on your OS X machine (its IP address on the shared network). On my MacBook using wifi that looks like this:

>ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1453
 ether 20:c9:d0:49:98:31
 inet6 fe80::22c9:d0ff:fe49:9831%en0 prefixlen 64 scopeid 0x4
 inet 10.33.195.97 netmask 0xffffffc0 broadcast 10.33.195.127
 nd6 options=1<PERFORMNUD>
 media: autoselect
 status: active

The last bit is to set Charles to Transparent Proxy mode:

[Screenshot of the Charles transparent proxy setting: Screen Shot 2014-08-27 at 2.47.06 pm]

And then, with some luck like I had, it all works the first time. I now have an LG TV that doesn't support proxy settings working through my proxy, in under 5 minutes and without any special hardware.


Simple split and stitch encoding with ffmpeg

Here is a very simplified example of split and stitch encoding with ffmpeg. Such a setup could be used for spreading encoding across a cluster, for parallel encoding of large files, or just for really fast encoding. It has some limitations in that it needs more keyframes than optimal for the best size/quality combination, but on the plus side it is compatible with segmented delivery of files. Note that the MPEG transport stream format has been used as it is the most compatible for stitching back together.

Next I will do some more investigation into the GOP structure generated by ffmpeg in this scenario.

In the first section a source file is broken into 3 x 30s parts (note that this is not the whole clip and is just for demo purposes) and transcoded into an H.264 (libx264) transport stream.

The second step is stitching them back together via the simple concat protocol; as the files are transport streams encoded with the same settings, this works well.

ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part1.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:00:30.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part2.ts
ffmpeg -y -i anchorman2-trailer.mp4 -ss 00:01:00.000 -t 30 -c:v libx264 -s 640x360 -b:v 1000k part3.ts
ffmpeg -y -i concat:part1.ts\|part2.ts\|part3.ts -c copy concat.ts
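As an aside (not part of the original example), the same stitch can be done with ffmpeg's concat demuxer, which reads the part list from a file instead of the command line:

printf "file '%s'\n" part1.ts part2.ts part3.ts > parts.txt
ffmpeg -y -f concat -i parts.txt -c copy concat_demux.ts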

Options for HDS Packaging

Here are some of the options that I am aware of for packaging content as Adobe HDS. They are all commercial software.

1. Adobe Media Server and the f4fpackager tool
2. Wowza Media Server
3. Unified Streaming Server
4. Nginx HDS module

The specification for the manifest format from Adobe is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-media-manifest-specification.pdf

And the specification for HDS fragments and the complete setup is here: http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/hds/pdfs/adobe-hds-specification.pdf

Other information: a PHP script that can join f4f/f4m: https://github.com/K-S-V/Scripts

Note that there also appears to be a ts2hds tool in gpac that requires further investigation, as it doesn't appear to be built by default: https://github.com/maki-rxrz/gpac

Creating a mosaic from a video and extracting frames for scene changes

This is a very cool feature buried down in the ffmpeg documentation that lets you generate a very nice mosaic of pictures from a video based on scene cuts.

Commands:

ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png

Sample result below

[Sample mosaic image: preview.png]
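If you want control over the grid, the tile filter also takes an explicit layout (the 4x3 below is just an example value, not from the documented command above):

ffmpeg -i video.avi -vf "select='gt(scene\,0.4)',scale=160:120,tile=4x3" -frames:v 1 preview.png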

You can also use this to output an individual frame for every scene change; example follows:

ffmpeg -i ../source/dig_720p.mp4 -vf select='gt(scene\,0.6)' -vsync vfr preview%04d.png

The results of this could be used for a preview track of the video as per the below:

[Thumbnail strip: preview0001.png through preview0020.png]
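For a preview track you would also want the timestamps of those frames. One way to get them (a sketch I've added, not from the original post) is to append the showinfo filter and pull the pts_time values out of the log:

ffmpeg -i ../source/dig_720p.mp4 -vf select='gt(scene\,0.6)',showinfo -vsync vfr -f null - 2>&1 | grep pts_time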


Updated HLS encoding and packaging commands for ffmpeg

Here are some updated commands with the latest build of ffmpeg for encoding and packaging a file to HLS. Note that this example only covers one bitrate at present, and my previous posts still apply for multi-bitrate manifest creation.

Step 1: Create a TS mezzanine file (very useful for packaging to multiple formats)

ffmpeg -i ../source/redrock_720p.mp4 -s 1280x720 -c:v libx264 -c:a libfdk_aac -ar 44100 -bsf h264_mp4toannexb -force_key_frames 'expr:gte(t,n_forced*2)' -y -f mpegts redrock_mez_720p.ts
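The -force_key_frames expression above inserts a keyframe every 2 seconds so the segmenter can cut cleanly. You can verify the cadence with ffprobe by listing the I-frames (a check I've added here, not in the original post; on newer ffmpeg builds the field is named pts_time rather than pkt_pts_time):

ffprobe -select_streams v -show_frames -show_entries frame=pict_type,pkt_pts_time -of csv redrock_mez_720p.ts 2>/dev/null | grep ",I"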

Step 2: Package as HLS

ffmpeg -i redrock_mez_720p.ts -c copy -map 0 -segment_list index_1400.m3u8 -segment_time 10 -segment_format mpegts -segment_list_type hls -f segment segment-%03d.ts -y

This creates an HLS playlist with 10-second segments.
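For reference, a variant (master) playlist wrapping this single rendition would look something like the below; the BANDWIDTH value here is an assumption read off the index_1400 naming:

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1400000,RESOLUTION=1280x720
index_1400.m3u8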

Sample output of encoded HLS is here: http://bucket01.mscreentv.com.s3.amazonaws.com/videos/redrock720p/index_1400.m3u8