The final output of the 3D animation pipeline is a digital video file, ready to be distributed and displayed offline or online. An animated video, just like any other type of video, is a sequence of many images, or frames, played at a certain rate to create the illusion of movement on the screen.
There are many options for formatting the output of a 3D animation project, but most animation studios stick to common formats to make sure whatever they produce is compatible with a wide variety of devices and can be easily shared and displayed on the internet or elsewhere.
In this article, we are going to take a look at basic digital video attributes and explain some of the important factors in selecting the right format for an animated video:
1. Resolution
You might already be familiar with the term. Resolution simply refers to the number of pixels in a digital image (or, similarly, on a digital display). For instance, a 1920×1080 video (also known as 1080p or Full HD) has 1920 pixels on the horizontal axis and 1080 on the vertical. The total number of pixels is the product of the two.
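As a quick illustration (a minimal Python sketch, not part of any rendering tool), the pixel count of a Full HD frame:

```python
# Total pixel count of a 1920x1080 (Full HD) frame.
width, height = 1920, 1080
total_pixels = width * height
print(total_pixels)  # 2073600, i.e. about 2.07 megapixels
```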
Digital displays have a fixed native resolution, and larger screens typically have higher resolutions. So it is important to consider the size of the screen the video is intended for (higher resolution for larger screens) when rendering the final output of a 3D animation project.
2. Aspect Ratio
The aspect ratio of an image or video is the proportional relationship between its width and its height, expressed as two numbers separated by a colon. It describes the shape of the frame, not its size.
Different devices such as monitors, televisions, cellphones and even cinema screens come in various sizes with different ratios; 16:9 and 4:3 are two of the most commonly used. The aspect ratio of the final 3D animation is set based on the common aspect ratio of its target broadcast channels.
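The aspect ratio can be derived from a resolution by dividing width and height by their greatest common divisor. A minimal Python sketch (the function name is illustrative):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a resolution to its simplest width:height form."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(640, 480))    # 4:3
```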
3. Pixel Aspect Ratio
Most digital imaging systems display images on a grid of tiny, uniform square pixels. Some systems, however, use grids of rectangular pixels. Pixel aspect ratio refers to the ratio between the width and height of an individual pixel; square pixels have a pixel aspect ratio of 1:1.
4. Bit Rate
Bit rate refers to the amount of data stored for each second of a video file (bits per second, or bps). The higher the bit rate, the better the quality, which means less compression and a larger file size. The overall size of a video file equals its bit rate multiplied by its duration. An extremely high bit rate can place a major strain on hardware, resulting in stutters during streaming or playback, so it is important to choose the optimum bit rate for the purpose in mind.
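The size calculation is simple enough to sketch in a few lines of Python (the function name is illustrative, and audio and container overhead are ignored):

```python
def file_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """File size = bit rate x duration, converted from bits to megabytes."""
    total_bits = bitrate_mbps * 1_000_000 * duration_s
    return total_bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 60-second clip encoded at 8 Mbps:
print(file_size_mb(8, 60))  # 60.0 (MB)
```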
5. Frame Rate
Frame rate refers to the number of frames that appear on the screen per second (frames per second, or fps). The term applies equally to film, video and animation; it is also called frame frequency and is expressed in hertz. 24 fps, 30 fps and 60 fps are the most popular frame rates today. The higher the frame rate, the smoother the movement in the video.
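For a 3D animation pipeline, the frame rate also determines how many frames must be rendered for a shot of a given length. A small Python sketch of that relationship:

```python
def total_frames(fps: float, duration_s: float) -> int:
    """Number of frames needed for a clip of the given duration."""
    return round(fps * duration_s)

# Frames to render for a 10-second shot at common frame rates:
for fps in (24, 30, 60):
    print(f"{fps} fps -> {total_frames(fps, 10)} frames")
```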
6. Compression
In order to save a digital video file, its data has to be ordered in a specific way, or reordered to make the file smaller for easier distribution or playback. This reduction or reordering of data is called compression. Compression can greatly reduce the file size, at the cost of some change in the quality and usability of the files (mostly for the better, in the case of usability).
There are two basic types of image or video compression: lossless and lossy.
6.1. Lossless Compression
Lossless compression does not allow any loss of quality; as a result, the reduction in file size is limited. The primary objective here is to preserve the original quality exactly.
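The principle can be demonstrated with a general-purpose lossless codec such as Python's built-in zlib (not a video codec, but the same idea): after compressing and decompressing, the original bytes are restored bit for bit.

```python
import zlib

data = b"frame pixel data " * 1000      # highly repetitive sample data
packed = zlib.compress(data)

print(len(data), "->", len(packed))     # repetitive data shrinks dramatically
assert zlib.decompress(packed) == data  # lossless: restored exactly
```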
6.2. Lossy Compression
Lossy compression lets us shrink the file size considerably. Most of the time the quality loss is not noticeable, because encoders keep the damage below a perceptual threshold while cutting the file size down. All in all, though, shrinking trades some quality loss for a smaller size and faster playback.
6.3. Spatial Compression vs. Temporal Compression
There are two common types of lossy compression: Spatial and Temporal compression.
Spatial, or intraframe, compression looks at each frame of the video individually, compressing its pixel information as if it were a single image. Temporal, or interframe, compression instead works across a series of frames.
In temporal compression, certain frames in the video (key frames) are chosen and all of their pixel information is stored. For the in-between frames, only the information that differs from the key frames is written, and the data for repeated pixels is discarded to save on file size.
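A toy Python sketch of this idea, with frames modeled as flat lists of pixel values (real interframe codecs are vastly more elaborate, with motion estimation and block-based prediction):

```python
# Toy interframe compression: store the key frame in full, then for each
# following frame store only (index, new value) pairs for changed pixels.
def encode(frames):
    key = frames[0]
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v])
    return key, deltas

def decode(key, deltas):
    frames = [list(key)]
    for delta in deltas:
        frame = list(frames[-1])
        for i, v in delta:
            frame[i] = v
        frames.append(frame)
    return frames

clip = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 3]]  # mostly static pixels
key, deltas = encode(clip)
print(deltas)                     # only the changed pixels are stored
assert decode(key, deltas) == clip
```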
7. Codec
Video or image compression is in fact a pair of contrary operations: compressing and decompressing. A codec (short for compressor/decompressor) is a software program that performs both, and it is what allows us to store, view or edit a video file.
H.264 is one of the most common codecs used for high-definition digital video today. It is currently the industry standard for high-definition video and is frequently used by 3D animation studios like Dream Farm and their clients.
8. Container or file extension
Every video file has a container in addition to a codec. The container holds the video and audio streams along with their metadata, and is usually identified by the file extension at the end of the file name, such as AVI, MOV or MP4. Among them, MP4 is widely used across industries, including the animation industry.
Why MP4 extension?
MP4 files use separate compression for video and audio, store the metadata alongside them, and preserve quality to a great extent despite compression. The format was first introduced in 2001 and has since become the typical file format for sharing videos over the internet.
MP4 offers seamless compatibility with online and offline players on a wide variety of devices, platforms and operating systems, including Apple and Microsoft platforms, smartphones, tablets and TVs. Even PowerPoint presentations can contain MP4 videos.
Why H.264 codec?
H.264 is by far the most commonly used standard for high definition digital video recording, compression and distribution.
- It uses half the space of MPEG-2 (DVD standard) with the same quality.
- It enables the delivery of high-quality video at low data rates.
- It is used by the majority of video industry developers.
- It supports resolutions up to and including 8K UHD.
- It offers greater flexibility in terms of compression options and transmission support.
- It provides integrated features for transmission or storage that help minimize the effect of transmission errors.
The H.264 codec is more sophisticated than its earlier counterparts and requires significantly more processing power to compress and decompress video.
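In practice, an export like this is often done with a command-line tool such as ffmpeg. The Python sketch below only assembles a typical H.264-in-MP4 command; the file names are placeholders, and ffmpeg itself must be installed before the command could actually be run:

```python
# Assembling a typical ffmpeg command for H.264 video in an MP4 container.
# "render.mov" and "final.mp4" are placeholder file names.
cmd = [
    "ffmpeg",
    "-i", "render.mov",     # input: e.g. a master render from the pipeline
    "-c:v", "libx264",      # video codec: H.264
    "-crf", "23",           # quality/size trade-off (lower = higher quality)
    "-pix_fmt", "yuv420p",  # widely compatible pixel format
    "-c:a", "aac",          # audio codec commonly paired with H.264 in MP4
    "final.mp4",            # the .mp4 extension selects the MP4 container
]
print(" ".join(cmd))
```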
An animated video is, just like any other type of video, a sequence of images with certain properties, played at a defined rate to simulate motion on a digital screen. So, in order to be displayed beautifully and correctly, the final output of the 3D animation pipeline needs to be created with optimum settings.
There are various options available for the final output format; different technologies and standards have been developed for different applications over the years. But film and animation studios tend to stick to the most common formats to make sure their productions are compatible with most devices and applications, both offline and online.
Resolution, aspect ratio, bit rate, frame rate, compression, codec and file extension are among the most important variables regarding the format of an animated video file.