Rendering pipeline in Shader

Rendering pipeline

The rendering pipeline, also called the graphics pipeline, is the series of stages that graphics assets go through to render an image on the user's screen. It is called a "pipeline" because it resembles a pipe: one end is connected to our original source material, while the other end is connected to the final display screen.

The pipeline can vary depending on the graphics API and hardware being used, but a typical modern pipeline can be broken down into the following main stages:

1. Application Stage

This is the preparation phase for our original assets: models, textures, the camera, lighting, and so on. At the end of this phase, all the input data is converted into rendering primitives (such as triangles or points) and submitted to the next stage (the geometry stage).
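
In Unity's CG syntax, the hand-off at the end of this stage shows up as the vertex input struct. The sketch below lists typical fields (it mirrors Unity's built-in appdata_base); the example shader later in this article only passes a position:

// Sketch of the per-vertex data the application stage submits to the
// GPU. The semantics (POSITION, NORMAL, TEXCOORD0) tell the pipeline
// how to interpret each field.
struct appdata_sketch {
    float4 vertex   : POSITION;  // object-space vertex position
    float3 normal   : NORMAL;    // object-space normal
    float4 texcoord : TEXCOORD0; // first UV set
};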

2. Geometry Processing Stage

This stage mainly processes the data passed from the previous stage at the vertex level, applying matrix transformations and vertex shading. After processing, it outputs two-dimensional vertex coordinates in screen space together with per-vertex shading information, and submits them to the next stage (the rasterization stage).
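
The core of this work is the model-view-projection transform. The sketch below writes it out step by step using Unity's built-in matrices; UnityObjectToClipPos, used in the example shader later, collapses the three steps into a single call:

// Sketch: transform an object-space vertex into clip space step by
// step. The perspective divide and viewport mapping that produce the
// final screen-space coordinates happen afterwards, in fixed hardware.
float4 worldPos = mul(unity_ObjectToWorld, v.vertex); // object -> world
float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);      // world  -> view
float4 clipPos  = mul(UNITY_MATRIX_P, viewPos);       // view   -> clip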

3. Rasterizer Stage

This stage converts the geometric primitives, such as triangles, into pixels on the screen. It may also involve shading, which determines the color of each pixel by taking into account the lighting conditions, the texture maps, and the material properties of the surfaces being rendered.
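
As a sketch of what per-pixel shading can look like in CG, the fragment function below samples a texture and applies a simple Lambert lighting term. _MainTex, the v2fLit interpolants, and the directional-light assumption are all illustrative and not part of the example shader later in this article:

struct v2fLit { // assumed interpolants produced by a vertex shader
    float4 vertex      : SV_POSITION;
    float2 uv          : TEXCOORD0;
    float3 worldNormal : TEXCOORD1;
};

sampler2D _MainTex; // assumed texture property

float4 fragLit (v2fLit i) : SV_Target {
    float4 albedo = tex2D(_MainTex, i.uv);   // texture map lookup
    float3 n = normalize(i.worldNormal);     // surface normal
    // Lambert term: _WorldSpaceLightPos0.xyz holds the direction of the
    // main directional light in Unity's built-in render pipeline
    float ndotl = max(0, dot(n, _WorldSpaceLightPos0.xyz));
    return albedo * ndotl;                   // shaded pixel color
}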

4. Output Merger Stage

The final color of each pixel is tested against the stored depth and stencil values and, if it passes, written to the frame buffer. The frame buffer is then displayed on the screen.
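
In ShaderLab this stage is driven by render-state commands rather than shader code. Here is a sketch of how a pass might configure the depth test and blending; the settings shown are illustrative, not from the example shader:

Pass {
    ZWrite On                       // write the pixel's depth to the depth buffer
    ZTest LEqual                    // keep the pixel only if it is not behind what is stored
    Blend SrcAlpha OneMinusSrcAlpha // combine with the existing frame-buffer color
    // ...shader program as usual...
}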

Using the example from the last article, Introduction to Shader in Unity, we can roughly see these stages in the code:

Shader "Custom/BlinkShader"
{
    Properties {
        _MainColor ("Main Color", Color) = (1, 0, 0, 1)
        _TimeSpeed ("Time Speed", Range(10, 20)) = 15
    }

    SubShader {
        Tags {"Queue"="Transparent" "RenderType"="Opaque"}

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc" // common helpers such as UnityObjectToClipPos

            // Properties
            uniform float4 _MainColor;
            uniform float _TimeSpeed;

            struct appdata { // 1
                float4 vertex : POSITION;
            };

            struct v2f { // 2
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v) { // 1
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex); // 2
                return o;
            }

            float4 frag () : SV_Target { // 3
                // blend between _MainColor and blue over time, driven by the
                // _TimeSpeed property; sin is remapped from [-1, 1] to [0, 1]
                float4 color = lerp(_MainColor, float4(0, 0, 1, 1), sin(_TimeSpeed * _Time.y) * 0.5 + 0.5);
                return color;
            }

            ENDCG
        }
    }
}

1: struct appdata holds the POSITION data prepared by the application and is passed into the vertex shader; this corresponds to the Application Stage.

2: The POSITION data is converted into SV_POSITION (the clip-space position) by UnityObjectToClipPos; this corresponds to the Geometry Processing Stage.

3: In frag(), we compute a new color for each pixel; this corresponds to the Rasterizer Stage. Because this example is so simple, the Output Merger Stage that follows just writes these colors to the frame buffer for display on the screen.