During the rendering stage, everything created in 3D modeling, rigging, animation, shading, texturing, VFX, and lighting is merged together and rendered into 2D still images (frames). These renders are then fed into the post-production phase of the 3D animation pipeline.
Let’s take a quick look at the rendering stage of the 3D pipeline.
What is 3D rendering?
Simply put, 3D rendering is the process of producing an image based on three-dimensional data stored on a computer. It is much like photographing or filming a virtual 3D scene.
Behind the scenes, the rendering hardware and software perform a massive number of mathematical calculations to translate the 3D scene’s data into images. The look of the final render is therefore determined by the modeling, texturing, shading, lighting, and animation information combined and rendered, pixel by pixel.
3D rendering technology is not limited to the 3D animation industry; it is everywhere: in magazines, on TV, on book covers, in advertisements, and all over digital media.
What are render passes in 3D rendering?
The rendering of a 3D scene is often performed in several separate layers, or render passes, such as background, foreground, shadows, highlights, et cetera. These layers are then recombined in the compositing stage (post-production).
Render passes give you more control over different aspects of a scene. For example, if you’re creating a car explosion, the fire, the smoke, and the exploding car itself will be rendered separately. This way, you can fine-tune your adjustments, create rendered variations, and choose the best one without having to render the entire scene again and again.
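To make the idea concrete, here is a minimal Python sketch of how separate passes might be recombined. The additive model and the tiny hand-made pass values are purely illustrative; real compositing software offers far richer merge operations.

```python
# Hypothetical sketch: rebuilding a final "beauty" image by summing
# separate render passes. Pass names and values are invented examples.

def composite_additive(passes):
    """Sum the per-pixel values of several render passes into one image."""
    height = len(passes[0])
    width = len(passes[0][0])
    beauty = [[0.0] * width for _ in range(height)]
    for layer in passes:
        for y in range(height):
            for x in range(width):
                beauty[y][x] += layer[y][x]
    return beauty

# A 1x2 "image": a fire pass and a smoke pass from the explosion example.
fire = [[0.5, 0.25]]
smoke = [[0.25, 0.25]]
print(composite_additive([fire, smoke]))  # [[0.75, 0.5]]
```

Because each pass is stored separately, you could brighten the fire or soften the smoke here without re-rendering the whole scene.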
What are the most versatile rendering methods?
Since the emergence of 3D rendering technology, different methods have been developed to meet different needs, from non-photorealistic wireframe rendering to advanced photorealistic techniques. Each of these methods is better suited to a particular purpose.
The rendering process can be computationally expensive. However, the ongoing increase in the processing power of computers, especially in recent years, has enabled us to create 3D animated content with much higher quality much faster.
Based on the amount of time needed to render a single image, rendering methods fall into two general types, although the line between the two is becoming increasingly blurred in terms of quality:
- Real-time Rendering:
As the name suggests, real-time rendering methods compute and display frames fast enough for interactive media such as video games and simulations, typically at a minimum rate of 20 frames per second, while achieving as high a degree of photorealism as that time budget allows.
- Non-real-time Rendering:
Non-interactive media such as feature films, animated series, or short animations can contain far more detail and therefore need more time to render. This extra time enables a 3D animation studio to leverage limited processing power to produce animated content of much higher quality. Rendering each frame can take anywhere from a few seconds to several days, depending on the complexity of the scene. Displaying these frames in sequence at the right rate then creates the illusion of movement in the viewer’s eyes.
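The difference between the two categories comes down to simple arithmetic on the per-frame time budget. The figures below are illustrative examples, not studio numbers:

```python
# Back-of-the-envelope arithmetic contrasting real-time and offline
# rendering. All inputs here are example values.

def frame_budget_ms(fps):
    """Maximum time a real-time renderer may spend on one frame."""
    return 1000.0 / fps

def offline_render_hours(n_frames, minutes_per_frame):
    """Total wall-clock hours to render an animation offline."""
    return n_frames * minutes_per_frame / 60.0

print(frame_budget_ms(20))               # 50.0 ms per frame at 20 fps
print(offline_render_hours(24 * 60, 5))  # one minute of film at 24 fps,
                                         # 5 minutes per frame: 120.0 hours
```

A real-time engine must finish everything (geometry, shading, lighting) inside those few milliseconds, while an offline renderer can spend minutes or hours per frame.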
3 of the most widely used rendering techniques
Despite the above categorization, there are a number of computational techniques for performing the rendering process, each with its own advantages and disadvantages. These properties make each one the right choice for certain situations, and a project typically relies primarily on one of them.
1. Scanline:
The scanline technique renders images polygon by polygon rather than pixel by pixel, and is most useful for real-time rendering and interactive media, where speed is a determining factor. Combined with baked lighting, it can achieve an acceptable level of quality at a much higher frame rate.
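The core idea can be sketched in a few lines: each horizontal scanline computes where it enters and exits a polygon and fills only that span, rather than testing every pixel in the image. This toy rasterizer handles a single flat-shaded triangle; real scanline renderers add depth sorting, shading, and many polygons.

```python
# Toy scanline fill of one triangle, for illustration only.

def scanline_fill_triangle(verts, width, height):
    """Rasterize a triangle given as three (x, y) vertices."""
    image = [[0] * width for _ in range(height)]
    ys = [v[1] for v in verts]
    for y in range(max(0, min(ys)), min(height, max(ys) + 1)):
        xs = []
        for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]):
            if y0 == y1:
                continue  # horizontal edge never crosses a scanline
            if min(y0, y1) <= y < max(y0, y1):
                # x where this edge crosses the current scanline
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        if len(xs) == 2:
            # Fill only the span between the two edge crossings.
            for x in range(int(min(xs)), int(max(xs)) + 1):
                image[y][x] = 1
    return image

img = scanline_fill_triangle([(1, 1), (6, 1), (3, 6)], 8, 8)
```

Because each scanline touches only the pixels inside the polygon, this approach stays fast even for large images, which is why it suited early real-time pipelines.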
2. Ray tracing:
Ray tracing achieves greater photorealism at the cost of speed. In this technique, one or more rays of light are traced from the camera to the nearest objects and then through a number of bounces, creating effects such as reflection, refraction, scattering, and dispersion based on the materials they hit.
Each pixel’s color is also calculated based on the interaction between the light ray and the material of surrounding virtual objects. Ray tracing is mostly used for applications like still images or visual effects where speed is not a critical factor and photorealism matters.
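As a rough illustration, the sketch below traces one ray per pixel from the camera into a scene containing a single sphere and shades hits with simple Lambertian lighting. The scene setup and single-bounce shading are heavy simplifications of what production ray tracers do.

```python
import math

# Minimal ray-tracing sketch: one ray per pixel, one sphere, one light.
# All scene values are invented for illustration.

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along a unit ray, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant with a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(x, y, width, height):
    """Shade one pixel: a grey sphere lit by a single directional light."""
    # Camera at the origin looking down -z; pixel mapped to [-1, 1].
    d = [2.0 * x / width - 1.0, 1.0 - 2.0 * y / height, -1.0]
    norm = math.sqrt(sum(v * v for v in d))
    d = [v / norm for v in d]
    center = [0.0, 0.0, -3.0]
    t = ray_sphere([0.0, 0.0, 0.0], d, center, 1.0)
    if t is None:
        return 0.0  # ray misses everything: background
    hit = [t * v for v in d]
    n = [hit[i] - center[i] for i in range(3)]
    nlen = math.sqrt(sum(v * v for v in n))
    n = [v / nlen for v in n]
    light = [0.577, 0.577, 0.577]  # unit vector toward the light
    # Lambertian shading: brightness follows the surface/light angle.
    return max(0.0, sum(n[i] * light[i] for i in range(3)))

image = [[trace(x, y, 8, 8) for x in range(8)] for y in range(8)]
```

Even this toy version shows why ray tracing is expensive: every pixel requires intersection tests and shading math, and real renderers follow each ray through many more bounces.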
3. Radiosity:
Radiosity is a surface-by-surface, camera-independent calculation technique that accounts for indirect illumination, or bounced diffuse light. The illumination on a surface comes not only directly from the light sources but also from other surfaces reflecting light. Soft graduated shadows and color bleeding are among the major characteristics of renders created with the radiosity technique. Being viewpoint-independent also means more calculations and therefore longer rendering times, but in terms of quality and degree of photorealism, it can definitely be worth it.
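The radiosity idea can be shown with a toy solver: each patch’s radiosity is its own emission plus the light it reflects from every other patch, weighted by a form factor. The patch count, reflectances, and form factors below are invented for illustration.

```python
# Toy radiosity solve: iterate B_i = E_i + rho_i * sum_j F[i][j] * B_j
# until the bounced light settles. All scene values are made up.

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi-style iteration of the radiosity equation."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [
            emission[i]
            + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
            for i in range(n)
        ]
    return b

# One emitting "light" patch and two passive wall patches.
E = [1.0, 0.0, 0.0]          # only patch 0 emits light
rho = [0.0, 0.5, 0.5]        # walls reflect half the light they receive
F = [[0.0, 0.2, 0.2],        # form factors: how much each patch "sees"
     [0.3, 0.0, 0.1],        # of every other patch
     [0.3, 0.1, 0.0]]
B = solve_radiosity(E, rho, F)
# The walls end up lit even though they emit nothing themselves.
```

Because the result depends only on the surfaces, not the camera, it can be reused from any viewpoint, which is exactly the camera-independence described above.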
What is a 3D rendering engine?
A 3D rendering engine is a module in 3D software which is responsible for performing the calculations needed to generate the graphic output of a 3D scene. In other words, the rendering engine takes the 3D models as well as camera, texturing, lighting, and shading data and turns them into a series of pixels that can be displayed as an image.
Rendering engines take advantage of the processing power of the host CPU or GPU to perform their calculations. Today, many rendering engines are available on the market, whether as a software package’s proprietary renderer, a plug-in, or a standalone application. However, a handful of rendering engines are most commonly used in the 3D animation industry.
What are the best rendering engines available?
There is a wide variety of rendering engines available today, including Arnold, Redshift, RenderMan, V-Ray, Corona, etc. Here at Dream Farm Studios, we use these two:
Arnold is an advanced ray-tracing rendering engine, available in both CPU and GPU versions and best suited for animated feature films and visual effects. Many animation studios around the globe, including Sony Pictures Imageworks, use Arnold as their main render engine. It is also the built-in interactive renderer for the Maya and 3ds Max software packages.
Some of the major features of Arnold rendering engine include:
- Photo-realistic renders
- Easy to use
- Memory Efficient
- Easy to switch to
GPU-based rendering engines like Redshift were designed to make 3D art creation faster. Redshift is a powerful engine built for high-end production rendering, developed by software and video game veterans. Today, animation studios of every size, as well as individual artists, use it for a wide variety of CG applications.
Some of the major features of Redshift rendering engine include:
- Ease of Use
- Lightning Speed Rendering
- Versatility and Photo-Realistic Results
- Seamless Integration
- Render Farm Support
How do rendering studios use 3D rendering hardware?
The recent increase in the processing power of rendering hardware, together with falling prices, has made small-scale 3D animation production feasible on a home computer.
However, 3D animation studios like Dream Farm typically rely on a more efficient hardware setup called a “render farm” to generate rendered images much faster. A render farm is a high-performance computer cluster built exclusively to render computer-generated imagery. Because frames can be rendered independently, the work splits cleanly: if a single computer can render 400 frames in 4 days, a render farm of 5 computers can do the same job in under a day.
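The speedup argument can be sketched in a few lines of Python, assuming the frames parallelize perfectly (a real farm adds scheduling and file-transfer overhead that this ignores):

```python
# Sketch of why a render farm helps: frames are independent, so they
# can be divided across machines. The 400-frame / 4-day figures come
# from the example in the text.

def split_frames(n_frames, n_machines):
    """Assign a contiguous frame range to each machine, as evenly as possible."""
    base, extra = divmod(n_frames, n_machines)
    ranges, start = [], 1
    for m in range(n_machines):
        count = base + (1 if m < extra else 0)
        ranges.append((start, start + count - 1))
        start += count
    return ranges

def farm_days(total_days_single, n_machines):
    """Ideal wall-clock time when the work parallelizes perfectly."""
    return total_days_single / n_machines

print(split_frames(400, 5))  # [(1, 80), (81, 160), ..., (321, 400)]
print(farm_days(4, 5))       # 0.8 days instead of 4
```

In practice a farm manager hands out frames dynamically rather than in fixed blocks, so faster machines simply take more frames.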
3D rendering, which is closely tied to the lighting and 3D VFX procedures, is the technically complex, final step of the 3D animation production phase of the pipeline. All the calculations needed to transform 3D models, with all their unique properties, into still images (and then video) are performed during this stage.
Apart from 3D animation, 3D rendering is an integral part of multiple industries such as architecture, special effects, and product development. As a result, there is a wide variety of rendering software available today, each best suited to a particular application.
Every scene of a 3D animated video is most often rendered into multiple layers, including objects, colors, background, foreground, et cetera. These layers are then integrated again in the post-production stage (compositing).
Hi, I’m curious if you guys have figured out how to render real-time in Unreal Engine from a 3ds max animated project.
Hi Jonathan, thanks for stopping by. To answer your question, we currently use Maya for most of our projects, but we recently started working with Unreal Engine too. As for 3ds Max, we don’t use that software for now, since its pipeline is quite different from Maya’s.