Understanding Rendering
3D graphics are rendered from basic three-dimensional models called wireframes. A wireframe defines the shape of the model, but nothing else. The rendering process adds surfaces, textures, and lighting to the model, giving it a realistic appearance.
In TRS18, the new E2 engine is responsible for rendering the 3D models onto your screen.
This page provides links to the various elements that are part of the render process.
Alpha masking
Alpha masking refers to a special case of alpha blending where all pixels are rendered as either fully opaque (100% alpha) or fully transparent (0% alpha), with no partial transparency. This can be treated as a special case because it does not require alpha sorting in order to produce visually correct results. On most hardware, this provides a significant performance win over an equivalent alpha-sorted material; on some hardware, however, it is substantially slower than an opaque material. This technique also guarantees correct visual outcomes, whereas alpha sorting is inevitably an approximation which will show incorrect results in certain edge cases.
Sub-Topics:
- Materials which support alpha masking
- Control of alpha masking via texture.txt file
More Info: Alpha-mask
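As a minimal sketch, alpha masking can be thought of as quantizing each texel's alpha against a cutoff. The 0.5 threshold below is an illustrative assumption, not a value taken from the E2 engine:

```python
def apply_alpha_mask(texel_alpha, threshold=0.5):
    """Quantize alpha to fully opaque (1.0) or fully transparent (0.0).

    Fragments quantized to 0.0 are simply discarded, so no
    back-to-front alpha sorting is needed for correct results.
    """
    return 1.0 if texel_alpha >= threshold else 0.0
```

Because every fragment ends up either fully kept or fully discarded, draw order between masked surfaces no longer affects the final image.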
Alpha blending
Alpha blending is a technique where a model is rendered as partially transparent. Without alpha blending, each screen fragment affected by a given model is simply overwritten by the model's color at that location. With alpha blending, the fragment's original coloration is blended with the model's color at that location. The blend is controlled by an opacity strength value known as "alpha". Two sources are typically used to determine alpha while rendering: the first is a per-object alpha value set at runtime; the second is a per-texel value set in the albedo map. Each value ranges from 0.0 (fully transparent) to 1.0 (fully opaque). The two are multiplied together to give the per-fragment alpha value used for blending.
More Info: Alpha_blending
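The blend described above can be sketched as follows. This is the standard "over" blend under the stated alpha-multiplication rule; the function name and tuple-based color representation are illustrative:

```python
def blend_fragment(dst_color, src_color, object_alpha, texel_alpha):
    """Blend a model's color over a fragment's existing color."""
    # Per-fragment alpha: product of the per-object value (set at
    # runtime) and the per-texel value (from the albedo map).
    alpha = object_alpha * texel_alpha
    # Weighted mix of the incoming color and the existing fragment color.
    return tuple(s * alpha + d * (1.0 - alpha)
                 for s, d in zip(src_color, dst_color))
```

For example, a white model drawn at texel alpha 0.5 over a black fragment yields mid-grey, and an object alpha of 0.0 leaves the fragment untouched regardless of the texel value.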
Draw Call
A Draw Call is the act of submitting geometry to the GPU for rendering. There is a strong correlation between the number of draw calls used to render a scene and the overall scene performance. Draw calls are not directly controlled by the content creator, but various content creation techniques and detail settings can directly or indirectly affect the number of draw calls required.
More Info: Draw_call
Mitigating Draw Call Overhead
There are a variety of techniques used internally by E2 to attempt to mitigate the draw call overhead:
- Mesh stitching and hardware instancing, where applicable, each help collapse a large number of draw calls into a single draw call.
- Individual draw calls which share a single material may be grouped together to reduce state changes.
More Info: Draw_call
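The state-change reduction from grouping by material can be illustrated with a toy sequence of draw calls. The material names below are hypothetical, and real engines order draws on more criteria than this sketch shows:

```python
def count_state_changes(materials):
    """Count material switches across an ordered sequence of draw calls."""
    return sum(1 for a, b in zip(materials, materials[1:]) if a != b)

# Interleaved materials force a state change on almost every draw call;
# grouping calls that share a material keeps switches to a minimum.
interleaved = ["brick", "glass", "brick", "glass", "brick"]
grouped = sorted(interleaved)  # ["brick", "brick", "brick", "glass", "glass"]
```

Here the interleaved order pays four material switches, while the grouped order pays only one.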
Fragment
A fragment is a rendering concept which is roughly analogous to a single pixel on the output display (as opposed to a texel on an input texture). In practice, there is not a 1:1 mapping between fragments and device pixels for a large variety of reasons, the most obvious of which is anti-aliasing (which introduces additional fragments for some pixels). When rendering a given primitive, the fragment shader is run once per fragment hit (with the possible exception of fragments that are early z-culled). When a fragment is hit by multiple overlapping primitives, overdraw occurs. This is considered a waste of GPU performance and should be minimized.
More Info: Fragment
Index buffer
An index buffer is a component of a render chunk which stores an array of indices. Each "index" is an integer which indexes into the corresponding vertex buffer. Each triplet of indices in the index buffer defines a single polygon (also known as a "triangle" or "primitive"). This indirect technique allows multiple polygons to share vertices: this is a significant performance saving, as a single vertex consumes far more memory (and memory bandwidth) than a single index, and a shared vertex only needs to pass through the vertex shader once.
Sub-Topics:
- Correlation with Polygons
- Impact on Performance
More Info: Index_buffer
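A quad built from two triangles makes the sharing concrete. This is a generic indexed-geometry sketch, not engine-specific data:

```python
# A quad rendered as two triangles sharing an edge.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # 4 shared vertices
indices = [0, 1, 2,   # first triangle
           0, 2, 3]   # second triangle reuses vertices 0 and 2

polygon_count = len(indices) // 3  # each triplet of indices is one polygon

# Without an index buffer, the same quad would need 6 unindexed vertices
# (3 per triangle), and the shared vertices would pass through the
# vertex shader twice.
```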
Hardware instancing
Hardware instancing is a technique which allows the GPU to render multiple identical meshes in a single draw call, varying a limited amount of state (such as position and orientation) for each mesh. This is used to reduce the cost of drawing a moderate number of identical low-to-mid-detail meshes. Where mesh stitching is possible, it is currently used in preference to hardware instancing. Strictly speaking, instancing is performed per render chunk rather than per mesh.
More Info: Hardware_instancing
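Conceptually, instancing submits the geometry once and varies a small amount of per-instance state for each repetition. The sketch below simulates that with position offsets only; real instancing also varies orientation and runs on the GPU:

```python
def draw_instanced(mesh_vertices, instance_positions):
    """Simulate one instanced draw call: the mesh is submitted once,
    and a per-instance position offset varies each repetition."""
    return [[(x + px, y + py, z + pz) for (x, y, z) in mesh_vertices]
            for (px, py, pz) in instance_positions]

tree = [(0.0, 0.0, 0.0), (0.0, 5.0, 0.0)]  # a trivial two-vertex "mesh"
forest = draw_instanced(tree, [(0, 0, 0), (10, 0, 0), (20, 0, 0)])
```

Three identical meshes appear in the scene, but only one geometry submission was made.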
Overdraw
Overdraw is a rendering term which refers to cases where the same output fragment is rendered multiple times. This occurs when multiple polygons in the scene overlap the same fragment(s). There are two possible outcomes from any given instance of overdraw:
- The fragment is rendered. The full fragment processing cost is paid.
  - Especially for high-end materials such as those which use Parallax Occlusion Mapping, this cost may be significant.
  - But don't overlook this cost, even for lighter materials. It adds up fast.
- The fragment is z-culled. A lesser cost is paid.
The content creator has little say in which outcome occurs for any given instance of overdraw. This is heavily dependent on optimizations in the Engine (which will do its best, but which prioritizes correct results over fast outcomes) and on the GPU hardware (which varies per manufacturer and per hardware generation). Regardless of which outcome results, overdraw is bad: it wastes GPU time (typically the major bottleneck to per-frame performance) on processing something which will usually make no user-visible difference to the scene. It is typically worth incurring small costs elsewhere (for example, small increases to object polygon count) to reduce overdraw.
More Info: Overdraw
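The two outcomes can be simulated with a toy depth-tested pass over point "primitives". This is a greatly simplified model (real z-culling happens per hardware tile, before shading), but it shows why submission order changes how much overdraw cost is paid:

```python
def shade_fragments(primitives):
    """Simulate a depth-tested pass; each primitive is a list of
    (x, y, depth) fragments, and smaller depth is nearer the camera."""
    depth_buffer = {}
    shaded = culled = 0
    for primitive in primitives:
        for (x, y, z) in primitive:
            if (x, y) in depth_buffer and depth_buffer[(x, y)] <= z:
                culled += 1   # z-culled: a lesser cost is paid
            else:
                depth_buffer[(x, y)] = z
                shaded += 1   # full fragment processing cost is paid
    return shaded, culled

# Two primitives overlap the same pixel. Front-to-back submission lets
# the rear fragment be culled; back-to-front pays the full cost twice.
front_first = shade_fragments([[(0, 0, 1.0)], [(0, 0, 2.0)]])
back_first = shade_fragments([[(0, 0, 2.0)], [(0, 0, 1.0)]])
```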
Polygon
A polygon (also "primitive" or "triangle") is the most basic element of a 3D mesh. Each polygon is defined by three indices, each of which specifies a single vertex.
Sub-Topics:
- Correlation with Indices and Vertices
- Impact on Performance
More Info: Polygon
Render Chunk
A render chunk, or simply "chunk", is the encapsulated data required to issue a single draw call. This includes a material, a vertex buffer, an index buffer, and a small amount of metadata. A 3D mesh is composed of a number of render chunks. The render chunk's material may be shared with other render chunks in the same mesh or with other meshes in the same asset. A render chunk is a data packet (i.e. a noun) rather than an action on the GPU (i.e. a verb); the act of submitting a render chunk to the GPU for rendering is known as a draw call.
More Info: Render_chunk
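The "data packet" nature of a chunk can be sketched as a plain record. The field names are illustrative and do not reflect the E2 engine's actual layout:

```python
from dataclasses import dataclass

@dataclass
class RenderChunk:
    """Sketch of the data packet behind one draw call."""
    material: str       # may be shared with other chunks
    vertex_buffer: list
    index_buffer: list

# A mesh is composed of a number of chunks; here two chunks share
# one material, so they could be grouped to reduce state changes.
mesh = [RenderChunk("brick", [(0, 0, 0)], [0]),
        RenderChunk("brick", [(1, 0, 0)], [0])]
```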
Texel
A texel is a single pixel of a texture. The term disambiguates texture pixels from "pixel", which typically refers to an on-screen pixel. Many texels may be processed to generate a single on-screen pixel.
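Bilinear filtering is a simple illustration of many texels contributing to one pixel: sampling at a fractional coordinate weights four neighbouring texels together. This is a generic sketch (a texture modelled as a 2D list of values), not engine code:

```python
def bilinear_sample(texture, u, v):
    """Sample a texture at fractional texel coordinate (u, v):
    four texels are weighted to produce one output pixel value."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(texture[0]) - 1)
    y1 = min(y0 + 1, len(texture) - 1)
    fx, fy = u - x0, v - y0
    # Blend horizontally along the top and bottom texel rows...
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    # ...then blend the two rows vertically.
    return top * (1 - fy) + bottom * fy
```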
Material
A material is a concept in 3D rendering which describes how to render a particular surface. Each render chunk is composed of a single material, a vertex buffer, and an index buffer.
More Info: Materials | Understanding Materials
Vertex Buffer
A vertex buffer is a component of a render chunk which stores an array of vertices. A vertex buffer in E2 is limited to 65k vertices; any mesh larger than this will be split into additional render chunks, each with its own vertex buffer.
Sub-Topics:
- Vertex Format
- Correlation with Polygons
- Correlation with Indices
- Impact on Performance
More Info: Vertex_buffer
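The splitting behaviour amounts to a ceiling division of the mesh's vertex count by the buffer limit. Assuming "65k" means the 16-bit index range of 65,536 (the exact engine limit may differ):

```python
VERTEX_LIMIT = 2 ** 16  # assumption: "65k" = a 16-bit index range

def chunks_required(vertex_count, limit=VERTEX_LIMIT):
    """Minimum render chunks needed to hold a mesh of this size,
    each chunk carrying its own vertex buffer."""
    return -(-vertex_count // limit)  # ceiling division
```

So a 70,000-vertex mesh would be split into two render chunks, each with its own vertex buffer.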