SubShader in Unity


As mentioned in the Introduction to Shader in Unity, a Shader must contain at least one SubShader, and it may contain several; the SubShader is where the actual rendering of the material is defined. Let's take a closer look at the example program from that article:

Shader "Custom/BlinkShader"
{
    Properties {
        _MainColor ("Main Color", Color) = (1, 0, 0, 1)
        _TimeSpeed ("Time Speed", Range(10, 20)) = 15
    }

    SubShader {
        Tags {"Queue"="Transparent" "RenderType"="Opaque"}

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            // Include Unity's helper functions such as UnityObjectToClipPos
            #include "UnityCG.cginc"

            // Properties
            uniform float4 _MainColor;
            uniform float _TimeSpeed;

            struct appdata {
                float4 vertex : POSITION;
            };

            struct v2f {
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v) {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            float4 frag (v2f i) : SV_Target {
                float4 color = lerp(float4(1, 0, 0, 1), float4(0, 0, 1, 1), sin(20 * _Time.y));
                return color;
            }

            ENDCG
        }
    }
}

Tags in SubShader

We will cover this in a separate article: Tags in SubShader.

Pass in SubShader

Notice that we have a Pass inside the SubShader, so what is a Pass? A Pass is a block responsible for rendering the mesh's geometry once, and each Pass can have its own set of render states, such as the lighting setup, texture bindings, blending, and so on.
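
To make the point about per-Pass render state concrete, here is a hedged sketch (the shader name, states, and colors are made up for illustration, not taken from the BlinkShader example): a SubShader whose two Passes draw the same geometry with different blend and depth-write settings.

Shader "Custom/TwoPassSketch"
{
    SubShader {
        // First Pass: draw the object as opaque red with depth writes on.
        Pass {
            ZWrite On

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION {
                return UnityObjectToClipPos(vertex);
            }

            float4 frag () : SV_Target {
                return float4(1, 0, 0, 1); // red base color
            }
            ENDCG
        }

        // Second Pass: draw the same geometry again with additive blending,
        // a different render state from the first Pass.
        Pass {
            Blend One One
            ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION {
                return UnityObjectToClipPos(vertex);
            }

            float4 frag () : SV_Target {
                return float4(0, 0, 0.3, 1); // faint blue added on top
            }
            ENDCG
        }
    }
}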

A SubShader has at least one Pass, but it can contain more than one, as in the sketch above. Normally, a Pass block starts with CGPROGRAM and ends with ENDCG, and the actual rendering code goes in between:

Pass
{
  CGPROGRAM
  #pragma vertex vert // 1
  #pragma fragment frag // 2
  ENDCG
}

In this Pass, we first need to declare which shader functions we are going to use. Here we have two:

  1. Declare the vertex shader with the name vert.

  2. Declare the fragment shader with the name frag.

Vertex Shader

Continuing from above, after declaring vert, we can now implement how each vertex is processed:

struct appdata {
  float4 vertex : POSITION;
};

struct v2f {
  float4 vertex : SV_POSITION;
};

v2f vert (appdata v) {
  v2f o;
  o.vertex = UnityObjectToClipPos(v.vertex);
  return o;
}

First, we have the struct named appdata, which has a single float4 field (for more information about the float4 data type, see CG/HLSL Data Types in Shader). POSITION is its semantic; simply put, it tells the GPU that this field carries the vertex position from the mesh.

Then we have the struct named v2f, which also has a single float4 field. SV_POSITION is its semantic, marking the field as the clip-space position handed to the rasterizer. For more details about POSITION and SV_POSITION, please see Semantics in Shader.
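
As a small aside (these richer structs are hypothetical and not part of the BlinkShader example), more data can be carried between the two stages by adding fields with other semantics:

struct appdata {
  float4 vertex : POSITION;   // object-space position from the mesh
  float2 uv     : TEXCOORD0;  // first UV channel from the mesh
};

struct v2f {
  float4 vertex : SV_POSITION; // clip-space position for the rasterizer
  float2 uv     : TEXCOORD0;   // interpolated and handed to the fragment shader
};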

Finally, the vert function uses UnityObjectToClipPos, a helper provided by Unity, to transform the vertex's position from object (local) space into clip space.
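
For intuition, UnityObjectToClipPos is essentially a multiplication by the combined model-view-projection matrix. The following equivalent vert (a sketch using the classic built-in UNITY_MATRIX_MVP) shows roughly what happens under the hood:

v2f vert (appdata v) {
  v2f o;
  // Multiply the object-space position by the model-view-projection matrix
  // to get the clip-space position (what UnityObjectToClipPos does for us).
  o.vertex = mul(UNITY_MATRIX_MVP, float4(v.vertex.xyz, 1.0));
  return o;
}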

Fragment Shader

Besides the Vertex Shader, we also need a Fragment Shader (also called a Pixel Shader in other contexts), which runs for every covered pixel and decides how the model looks on screen.

float4 frag (v2f i) : SV_Target {
  float4 color = lerp(float4(1, 0, 0, 1), float4(0, 0, 1, 1), sin(20 * _Time.y));
  return color;
}

Notice that frag takes the v2f struct produced by the vertex shader as its input, although we don't actually use it here, because the color depends only on the elapsed time. We want a color somewhere between Red (float4(1, 0, 0, 1)) and Blue (float4(0, 0, 1, 1)) that changes as time goes by. There are multiple ways to achieve this; here we use lerp, which interpolates between Red and Blue in a linear way.

In the end, frag() returns the color through SV_Target, meaning it is the color value that will be written to the render target and shown on screen.
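
As a variation (this is a sketch, not the original code), the Properties we declared earlier could drive the blink instead of the hard-coded values, with saturate clamping the factor into the [0, 1] range that lerp expects:

float4 frag (v2f i) : SV_Target {
  // sin(...) swings between -1 and 1; saturate clamps it to [0, 1].
  float t = saturate(sin(_TimeSpeed * _Time.y));
  // Blend from the user-chosen _MainColor to blue over time.
  return lerp(_MainColor, float4(0, 0, 1, 1), t);
}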

Summary

We have gone through the details of a SubShader, which contains at least one Pass. Within each Pass, the core code sits between CGPROGRAM and ENDCG. We also need to provide a Vertex Shader and a Fragment Shader so that the Shader can render the model on screen.

In the real world, if we want to render a triangle, the Vertex Shader only needs to run 3 times, but the number of Fragment Shader executions depends on how many pixels the triangle covers on screen. So, from a performance perspective, we should move as much computation as possible into the Vertex Shader, and at the same time keep the algorithm in the Fragment Shader as simple as we can to save cost.
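
As a hedged sketch of that idea (not from the article), the blink factor in our example could be computed once per vertex and interpolated, so that the per-pixel work shrinks to a single lerp:

struct v2f {
  float4 vertex : SV_POSITION;
  float  blend  : TEXCOORD0;   // per-vertex blend factor, interpolated by the rasterizer
};

v2f vert (appdata v) {
  v2f o;
  o.vertex = UnityObjectToClipPos(v.vertex);
  // Computed only once per vertex (3 times for a triangle).
  o.blend  = saturate(sin(20 * _Time.y));
  return o;
}

float4 frag (v2f i) : SV_Target {
  // Only a cheap lerp remains per pixel.
  return lerp(float4(1, 0, 0, 1), float4(0, 0, 1, 1), i.blend);
}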