Semantics in Shader


After the example in the last article, we have seen a couple of places that use semantics. In this article, we are going to look at the most common ones.

What are Semantics

Semantics are used to label the inputs and outputs of shader functions, so that the graphics pipeline and developers know what they are and how to use them. For example:

float4 frag () : SV_Target {
  float4 color = lerp(float4(1, 0, 0, 1), float4(0, 0, 1, 1), sin(20 * _Time.y));
  return color;
}
In the above Fragment Shader (don't know what a Fragment Shader is? Check this post: SubShader in Unity), the output color oscillates over time between Red (float4(1, 0, 0, 1)) and Blue (float4(0, 0, 1, 1)), linearly interpolated by lerp. Note the SV_Target semantic at the end of the function signature: it marks the return value as pixel data to be written to the render target. In other words, it tells the GPU which color buffer to write the pixel data to.

Syntax of Semantics

[Modifiers] ParameterType ParameterName [: Semantic] [= Initializer]


 float4 vertex : POSITION;

float4: the variable type.

vertex: the variable name; we can use any name we like.

POSITION: the semantic, which tells the compiler that this float4 should be treated as the position of a vertex. Without POSITION, the compiler would treat vertex as an ordinary float4 variable.
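Putting the syntax together, here is a sketch of a typical vertex-input struct in a Unity Cg/HLSL shader. The struct and field names (appdata, vertex, color, uv) are just common conventions; only the semantics are meaningful to the compiler:

```hlsl
// Each field follows the pattern: Type Name : Semantic
struct appdata
{
    float4 vertex : POSITION;   // local-space vertex position
    float4 color  : COLOR;      // per-vertex color
    float2 uv     : TEXCOORD0;  // first UV channel
};
```

Because the binding is done by semantic rather than by name, we could rename vertex to pos and the shader would behave identically.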

Why do we need Semantics

Before semantics were introduced, developers often bound data to specific hardware registers. This approach had two obvious drawbacks: first, the compiler could not optimize the code, and second, portability was quite hard to maintain.

Common Semantics

There are a few common semantics, including:


The POSITION semantic is used to define the position of a vertex in the local (object) space of the mesh, that is, the position of the vertex relative to the mesh's origin. It is usually declared as a float4, with the fourth (w) component normally set to 1.0, following the convention of homogeneous coordinates.


The SV_POSITION (system value position) semantic is used to define the position of a vertex after all transformations have been applied, and is specified as a 4D vector, with the fourth component being the homogeneous coordinate. The vertex shader outputs the final (clip-space) position of the vertex through SV_POSITION; by the time it reaches the pixel shader, it has been converted to screen coordinates, which the pixel shader uses to determine the color of the pixel that will be drawn.
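As a minimal sketch, a Unity vertex shader that consumes POSITION and outputs SV_POSITION might look like this (it assumes Unity's UnityCG.cginc helpers, where UnityObjectToClipPos transforms a point from object space to clip space):

```hlsl
struct v2f
{
    float4 pos : SV_POSITION; // clip-space position consumed by the rasterizer
};

v2f vert(float4 vertex : POSITION)
{
    v2f o;
    // Transform the vertex from local (object) space to clip space
    o.pos = UnityObjectToClipPos(vertex);
    return o;
}
```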


The COLOR semantic is used to define the color of a vertex or a fragment. In the fragment stage it represents an interpolated color, computed by interpolating the color values of the three vertices of the triangle that the fragment belongs to.

For example, in the Vertex Shader, you probably already set the color of each vertex using the COLOR semantic. Later in the Fragment Shader, the COLOR semantic is used to get the interpolated color of the fragment from the colors of the vertices of the triangle. This interpolated color can then be used to shade the fragment.

The COLOR semantic is commonly used along with the TEXCOORD semantic, which is used to pass texture coordinates from the Vertex Shader to the Fragment Shader. By combining the interpolated color and the texture data, you can create a wide variety of visual effects.
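A minimal sketch of this pattern, assuming a _MainTex texture property is declared in the shader's Properties block:

```hlsl
sampler2D _MainTex; // assumed: bound via the shader's Properties block

struct appdata
{
    float4 vertex : POSITION;
    float4 color  : COLOR;
    float2 uv     : TEXCOORD0;
};

struct v2f
{
    float4 pos   : SV_POSITION;
    float4 color : COLOR;      // interpolated across the triangle
    float2 uv    : TEXCOORD0;  // interpolated texture coordinates
};

v2f vert(appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.color = v.color; // pass the per-vertex color through for interpolation
    o.uv = v.uv;
    return o;
}

float4 frag(v2f i) : SV_Target
{
    // Combine the interpolated vertex color with the texture sample
    return tex2D(_MainTex, i.uv) * i.color;
}
```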


SV_Target is a semantic used in the output structure of a Pixel Shader to indicate the render target to which the pixel data is to be written. It tells the GPU which color buffer to write the pixel data to.
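For reference, SV_Target is shorthand for SV_Target0; when a shader renders to multiple render targets (MRT), the numbered variants address each bound color buffer. A sketch, assuming a v2f input struct is defined elsewhere and the names FragOutput, albedo, and normal are hypothetical:

```hlsl
struct FragOutput
{
    float4 albedo : SV_Target0; // written to the first bound color buffer
    float4 normal : SV_Target1; // written to the second bound color buffer
};

FragOutput frag(v2f i)
{
    FragOutput o;
    o.albedo = float4(1, 0, 0, 1);
    o.normal = float4(0, 0, 1, 1);
    return o;
}
```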


In short, semantics are the contract between the shader compiler and the outside world (DirectX, XNA, and so on).