Encoding guidelines

The following sections provide encoding guidelines.

GOP (Group of Pictures) interval

Flash can only change bit rates at GOP boundaries. As a result, you must carefully select how far apart you space them. GOPs may also be referred to as keyframes or I-frames in your encoding software.

  • If you space your intervals closer than two seconds, your video will react quite quickly to heuristics-recommended bit rate changes, but quality will suffer slightly.
  • If you space the intervals too far apart, your video will often react too slowly to heuristics-recommended bit rate changes, and seeking ability will also suffer.

It is recommended that you space your GOPs between two and four seconds apart, drifting closer to two seconds as the bit rate increases.

It is also recommended that you keep your GOPs static across all bit rates, although some success has been achieved using GOP intervals that change as the bit rate changes. However, variable GOP intervals might cause a small amount of additional load on the server and can affect the time at which the server is able to switch to the new bit rate.

Also, GOPs should be closed and of a constant size, and the audio for your videos should be encoded at the same bit rate and sample rate.
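The arithmetic behind the guideline above is simple: the GOP size in frames is the keyframe interval in seconds multiplied by the frame rate. A minimal sketch (the function name and the 30 fps default are illustrative assumptions, not part of any encoder's API):

```python
def gop_size_frames(interval_seconds: float, fps: float = 30.0) -> int:
    """Keyframe (GOP) interval expressed in frames.

    Per the guideline above: keep the interval between two and four
    seconds, drifting toward two seconds as the bit rate increases,
    and use the same value for every rendition so GOPs stay aligned.
    """
    if not 2.0 <= interval_seconds <= 4.0:
        raise ValueError("recommended GOP interval is 2-4 seconds")
    return round(interval_seconds * fps)

# A 2-second GOP at 30 fps means a keyframe every 60 frames.
print(gop_size_frames(2.0))  # 60
print(gop_size_frames(4.0))  # 120
```

Because the interval is constant and shared across renditions, every bit rate produces a keyframe at the same frame index, which is what lets the server switch streams cleanly.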

Frame rate

Frame rate on your encoder should be configured based on the target devices to which the live content will be served. To avoid performance issues and improve playback quality, it is recommended that you use a frame rate of 30 frames per second.

Scene detection

Some encoders have an option to enable scene change detection, which inserts IDR keyframes when a scene change occurs. This improves visual quality by allowing the entire frame to be redrawn when necessary. However, the extra keyframes/IDRs this setting produces can raise the overall bit rate of the video, thereby increasing the likelihood of rebuffering. Also, in order to maintain the configured bit rate of the video during live encoding, the encoder might insert the extra keyframes/IDRs only in the higher bit rate renditions, leaving keyframes misaligned across renditions. This can cause switching issues during playback, and it is therefore recommended that you disable scene change detection for live streams.
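To see why the extra IDRs matter, a toy calculation (the function and all numbers are illustrative assumptions, not measurements from any particular encoder):

```python
def keyframes_per_minute(gop_seconds: float, scene_cuts_per_minute: int = 0) -> int:
    """Count keyframes in one minute of video.

    With scene change detection enabled, each detected cut adds an
    IDR on top of the fixed GOP cadence. Keyframes are far larger
    than predicted frames, so each extra one raises the bit rate
    cost of the stream.
    """
    fixed = int(60 / gop_seconds)
    return fixed + scene_cuts_per_minute

# Fixed 2-second cadence: 30 keyframes per minute.
print(keyframes_per_minute(2.0))  # 30
# The same cadence plus 12 scene cuts: 42 keyframes per minute,
# a 40% increase in keyframe count.
print(keyframes_per_minute(2.0, 12))  # 42
```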

Timestamp alignment

If you are using multiple encoders for your Dynamic Streaming event, you must align their timestamps so that all encoders are in agreement.
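One minimal way to sanity-check alignment is to compare the keyframe timestamps reported by each encoder and flag any pair that disagrees by more than a small tolerance. A sketch (the function and tolerance are hypothetical, not part of any encoder SDK):

```python
def timestamps_aligned(streams: list[list[float]], tolerance: float = 0.001) -> bool:
    """True if every encoder reports the same keyframe timestamps.

    `streams` holds one list of keyframe timestamps (in seconds) per
    encoder; all lists must match position-by-position within
    `tolerance` for the server to switch cleanly between them.
    """
    reference = streams[0]
    for other in streams[1:]:
        if len(other) != len(reference):
            return False
        if any(abs(a - b) > tolerance for a, b in zip(reference, other)):
            return False
    return True

encoder_a = [0.0, 2.0, 4.0, 6.0]
encoder_b = [0.0, 2.0, 4.0, 6.0]
encoder_c = [0.0, 2.1, 4.0, 6.0]  # one drifted keyframe

print(timestamps_aligned([encoder_a, encoder_b]))             # True
print(timestamps_aligned([encoder_a, encoder_b, encoder_c]))  # False
```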

Interlaced content

Interlacing is often found in content originally created for display on television, as opposed to a digital device. This type of footage is created by running “half frames” at twice the frame rate: every other line is drawn on one pass, and the remaining lines are filled in on a second pass. On a digital screen, both half frames must be combined and displayed at the same time. This results in noticeable lines in the footage, an artifact that is particularly pronounced when there is motion in the video.

There are several methods available to de-interlace content, each with its own benefits, but the recommendation is to correct the interlaced footage as early in the production process as possible to ensure the highest quality. Due to how the de-interlacing process works, it is very important that it be done before applying additional modifications such as frame scaling. Attempting to de-interlace footage that has been modified from its original state will produce noticeably bad results.
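The ordering rule above can be expressed as a simple check on a processing pipeline (the step names are illustrative; they do not correspond to any particular tool's filters):

```python
def deinterlace_before_scaling(pipeline: list[str]) -> bool:
    """Verify that de-interlacing happens before any frame scaling.

    De-interlacing reconstructs full frames from the two interlaced
    half frames; scaling first would blend lines from both passes
    and make a clean reconstruction impossible.
    """
    if "deinterlace" not in pipeline or "scale" not in pipeline:
        return True  # nothing to check
    return pipeline.index("deinterlace") < pipeline.index("scale")

print(deinterlace_before_scaling(["deinterlace", "scale", "encode"]))  # True
print(deinterlace_before_scaling(["scale", "deinterlace", "encode"]))  # False
```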

Encoder CPU load

For live encoding, encoders are generally configured to transcode the input stream into multiple renditions (characteristics such as bit rate and frame size might differ across renditions). Configuring many renditions and enabling many processing filters increases the load on the encoder; at higher loads, the encoder can lag behind in publishing content, miss alignment, and drop frames. It is therefore best if peak CPU usage on your encoders does not exceed 70%.
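A sketch of the 70% guideline (the function, sampling interface, and threshold handling are assumptions; a real deployment would read CPU usage from the encoder host's monitoring system):

```python
def over_cpu_budget(samples: list[float], peak_limit: float = 70.0) -> bool:
    """True if any observed CPU sample exceeds the recommended peak.

    `samples` are CPU-utilization percentages collected while the
    encoder transcodes all renditions. Readings above the limit
    suggest reducing the number of renditions or disabling some
    processing filters before frames start to drop.
    """
    return max(samples) > peak_limit

print(over_cpu_budget([45.0, 62.5, 68.0]))        # False
print(over_cpu_budget([45.0, 62.5, 68.0, 81.2]))  # True
```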