This note covers the basics of Unity's graphics rendering pipeline and of vertex and fragment shaders.
Abstraction of the basic rendering pipeline
The basic rendering process can be divided into the following three stages:
Application stage: occurs on the CPU.
Geometry stage: occurs on the GPU.
Rasterization stage: occurs on the GPU.
CPU Stage
In Unity, the CPU stage is driven mainly by the Camera component. When rendering, the Camera's Render() function is called; it packages up enough information for the GPU to know how to render in the geometry stage.
The data the CPU processes and packages includes:
Culling
Culling determines what will not be rendered, such as objects outside the camera's view frustum, objects hidden by occlusion, the invisible back faces of objects, and objects excluded by manually set rendering layers.
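Parts of this are configured on the Camera itself (the view frustum, occlusion culling, the layer-based Culling Mask), while back-face culling is exposed directly in ShaderLab. A minimal sketch of the latter, assuming an ordinary pass:

```shaderlab
SubShader
{
    Pass
    {
        // Cull Back is the default: triangles facing away from the camera are discarded.
        // Cull Front or Cull Off can be used instead, e.g. Cull Off for double-sided foliage.
        Cull Back
        // ... vertex/fragment program goes here ...
    }
}
```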
Render Queue
Determines the order in which objects are rendered. In Unity, the opaque queue (Geometry) defaults to 2000 and uses values below 2500, while the transparent queue (Transparent) defaults to 3000 and uses values above 2500. The smaller the value, the earlier the object is rendered.
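In a shader, the queue is chosen with the Queue tag. A minimal sketch, using Unity's built-in queue names:

```shaderlab
SubShader
{
    // "Transparent" maps to queue index 3000; an explicit offset such as
    // "Geometry+1" (2001) can also be used to fine-tune ordering.
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
    // ... passes ...
}
```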
Packaging data
For example, vertex positions, vertex normals, vertex colors, and texture coordinates. We can see this kind of data in a file exported in the (.obj) format, which contains vertex positions, vertex normals, UVs, and triangle indices.
SetPass Call and Draw Call
A SetPass call tells the engine which shader pass (and its render state) to use for the upcoming rendering.
A draw call tells the engine that everything is ready on the CPU side, and the GPU is now responsible for the drawing process.
GPU Stage
In the GPU stage, developers have only limited control over what can be configured. In the most basic case, it can be divided into the following steps.
Vertex Shader
A few things to know about it:
The vertex shader operates per vertex: its input is the vertex data packaged by the CPU in the previous stage, and it is called once for every vertex.
The vertex shader itself neither adds nor removes vertices, and it knows nothing about the relationships between vertices (for example, whether two vertices share an edge or together form a face).
So why do we need a vertex shader? In the pipeline, its main tasks are transforming coordinates and computing per-vertex lighting. It also supplies the data the fragment shader will later need; in a Unity shader file this data is usually stored in a v2f struct.
The first task of the vertex shader is to convert vertex coordinates from model space to clip space. It is worth understanding this well: in the file handed over by the CPU (such as a .obj), all coordinates are expressed relative to the model's own origin as it was originally set up. The specific conversion process is covered in the notes in the Basics of Mathematics chapter.
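A minimal sketch of what this looks like in a Unity shader, assuming a hypothetical unlit pass with a texture property named _MainTex. The vert function runs once per vertex and fills the v2f struct that later stages consume:

```hlsl
// Inside the CGPROGRAM ... ENDCG section of a Pass
#include "UnityCG.cginc"

sampler2D _MainTex;
float4 _MainTex_ST;

struct appdata              // per-vertex data packaged by the CPU
{
    float4 vertex : POSITION;   // position in model space
    float3 normal : NORMAL;     // normal in model space
    float2 uv     : TEXCOORD0;  // texture coordinates
};

struct v2f                  // data handed from the vertex shader to the fragment shader
{
    float4 pos    : SV_POSITION; // position in clip space
    float3 normal : TEXCOORD1;   // world-space normal, e.g. for later lighting
    float2 uv     : TEXCOORD0;
};

v2f vert(appdata v)
{
    v2f o;
    // model space -> clip space (the MVP transform described below)
    o.pos    = UnityObjectToClipPos(v.vertex);
    o.normal = UnityObjectToWorldNormal(v.normal);
    o.uv     = TRANSFORM_TEX(v.uv, _MainTex);
    return o;
}
```

The matching fragment function for this sketch appears in the Fragment Shader section below.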
Clipping
In the CPU stage we already did a first, coarse-grained round of culling: some primitives were simply never handed to the GPU, to save rendering work.
In the GPU stage, the remaining primitives that survived culling need a further clipping step. Clipping deals with primitives that are not entirely within the camera's visible range.
A primitive can relate to the camera's view in only three ways: completely outside, partially inside, or completely inside. The two "complete" cases are easy to handle: a primitive that is completely outside is not passed on, and one that is completely inside is passed to the next stage of the pipeline unchanged. Clipping proper only has to process the partially visible primitives.
The transform from model space to clip space is similar to taking a photo. This is described in detail in the mathematical basics section; for now we can roughly understand it as follows. To describe a camera's placement in space, we basically need the following information about it:
Position: The position of the camera itself in space, which we use the Position vector to represent.
The direction in which the top of the camera points (Up): used to describe the direction of the camera, represented by the unit vector Up.
The direction of the camera (Look): used to describe the direction the camera is pointing, represented by the unit vector Look.
The direction to the right of the camera (Right): used to describe the rotation of the camera, represented by the unit vector Right.
With these four vectors, we can accurately describe each of the space transformations below.
1. From model space to world space: left-multiply by the Model Matrix. After this step, coordinates originally expressed in the model's own coordinate system (with the model's origin as origin) are expressed in a coordinate system whose origin is the world origin.
2. From world space to camera space: left-multiply by the View Matrix.
3. From camera space to clip space: left-multiply by the Projection Matrix. Here FOV is the Field of View (the camera's viewing angle), aspect_ratio is the ratio of the camera's width to its height, and Zn and Zf are the z-coordinates of the near and far clipping planes respectively. Rough forms of the View and Projection matrices are sketched after this list.
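As a sketch only (conventions differ between textbooks and graphics APIs; here I assume a right-handed, OpenGL-style setup in which the camera looks down the negative z axis of view space, Right/Up/Look are the unit vectors above, Pos is the camera position, FOV is the vertical field of view, and Zn, Zf are treated as positive distances to the near and far planes), the View and Projection matrices look like this:

$$
V=\begin{pmatrix}
R_x & R_y & R_z & -R\cdot Pos\\
U_x & U_y & U_z & -U\cdot Pos\\
-L_x & -L_y & -L_z & L\cdot Pos\\
0 & 0 & 0 & 1
\end{pmatrix}
\qquad
P_{proj}=\begin{pmatrix}
\dfrac{\cot(FOV/2)}{aspect} & 0 & 0 & 0\\
0 & \cot(FOV/2) & 0 & 0\\
0 & 0 & -\dfrac{Z_f+Z_n}{Z_f-Z_n} & -\dfrac{2\,Z_f Z_n}{Z_f-Z_n}\\
0 & 0 & -1 & 0
\end{pmatrix}
$$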
In Unity, we can use the built-in MVP matrix to complete the coordinate conversion from model space to clip space in one step.
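Both routes are available inside a shader. A sketch of the explicit chain versus the one-step version, using Unity's built-in matrices:

```hlsl
// Step by step: model -> world -> view -> clip
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);
float4 clipPos  = mul(UNITY_MATRIX_P, viewPos);

// One step with the combined MVP matrix (equivalent result)
float4 clipPos2 = mul(UNITY_MATRIX_MVP, v.vertex);
// In current Unity versions, UnityObjectToClipPos(v.vertex) is the preferred helper for this.
```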
Clipping is performed against this clip-space volume, which after the homogeneous division corresponds to the unit cube of Normalized Device Coordinates (NDC).
Screen Mapping
After clipping, the coordinates of everything left to render lie inside the NDC unit cube. The next task is to convert each primitive's coordinates into the screen coordinate system.
We can basically think of this step as mapping the x and y coordinates from [-1, 1] onto the pixel range of the screen resolution. The screen is 2D, so this step does not transform the z coordinate; the depth value is simply carried along for later use.
The origin and positive y direction of screen and texture coordinates differ between OpenGL and DirectX. This can cause the output image to come out flipped along y when writing shaders later; when that happens, flipping the y coordinate (for example multiplying it by -1, or using 1 - y for UVs) does the trick.
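When sampling a render texture this usually shows up as an upside-down image on Direct3D-like platforms, and Unity provides a macro for guarding the fix. A sketch, assuming the texture is named _MainTex (so Unity also supplies the matching _MainTex_TexelSize variable):

```hlsl
float2 uv = i.uv;
#if UNITY_UV_STARTS_AT_TOP          // true on Direct3D-like platforms
    if (_MainTex_TexelSize.y < 0)   // negative when the render texture has been flipped
        uv.y = 1 - uv.y;            // undo the flip before sampling
#endif
```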
Rasterization stage
In the geometry stage we finished processing the vertex information. The goal now is to work out which pixels are covered by each primitive and what color those pixels should display; this process is called rasterization.
Primitive Assembly
The purpose of this step is to check which pixels are covered by each triangle. The details of drawing a triangle onto a pixel grid are not covered here; in short, we find which pixels the triangle's edges pass through and which pixels lie inside, and then fill in per-pixel values by linearly interpolating the information stored at the three vertices. After all, not every pixel carries vertex information: a vertex is a single point and lands inside at most one pixel, so the values for every other point on the triangle's surface have to be obtained by interpolation. Interpolation simply means choosing a suitable rule for filling in the values in between when the values at the endpoints are known.
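For a triangle, this interpolation is commonly written with barycentric coordinates: any per-vertex attribute f (color, UV, normal, and so on) stored at the vertices v0, v1 and v2 is blended according to where the pixel center p falls inside the triangle:

$$
f(p)=\lambda_0 f(v_0)+\lambda_1 f(v_1)+\lambda_2 f(v_2),\qquad \lambda_0+\lambda_1+\lambda_2=1,\quad \lambda_i\ge 0
$$

Under a perspective projection the GPU additionally corrects these weights by 1/w, which is why this is usually called perspective-correct interpolation.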
If a pixel is covered by a triangle, a Fragment will be formed on this pixel. Fragments are not pixels, because fragments contain a lot of additional information, such as screen coordinates, depth information, normal coordinates, etc.
The result of primitive assembly is that the GPU now knows which pixels need to be shaded. This step outputs a whole sequence of fragments; the color of each member of this sequence is then computed by the fragment shader.
Fragment Shader
In all the steps so far we have not actually performed the real shading operation, that is, coloring. Although the vertex shader is called a shader, it really only passes along per-vertex information and performs coordinate transformations. Only in the fragment shader do we truly start deciding what color to display on each pixel.
We also completed the interpolation in the previous primitive-assembly step; its result is the input to the fragment shader. In other words, each fragment now knows its own coordinates, normal, and other attributes, and from this information we compute the color it should display. That color is the fragment shader's output value.
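Continuing the hypothetical unlit sketch from the vertex-shader section (where _MainTex and the v2f struct were declared), a minimal fragment function that turns the interpolated data into an output color:

```hlsl
fixed4 _Color;   // a hypothetical tint property declared in the shader's Properties block

// Runs once per fragment; i holds the rasterizer-interpolated v2f values.
fixed4 frag(v2f i) : SV_Target
{
    fixed4 texColor = tex2D(_MainTex, i.uv); // sample the texture with the interpolated UV
    return texColor * _Color;                // the color written toward the color buffer
}
```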
Per-fragment Operations
The limitation of the fragment shader is that it only sees a single fragment and what happens to that fragment on its own pixel.
The purpose of the per-fragment operations is to decide whether a fragment is ultimately visible. A fragment may end up invisible for a number of reasons, such as being occluded or masked out, so it has to go through a series of tests; if it fails any one of them, it is simply discarded.
Stencil Test: this test only compares a value stored in the stencil buffer at the fragment's position against a reference value set by the developer. If the comparison does not satisfy the developer's condition (for example, being greater than the reference value), the fragment is discarded.
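In ShaderLab the test is configured with a Stencil block. A sketch that only lets fragments through where the stencil buffer already holds the reference value 1 (the reference value and the comparison are the developer-chosen parts):

```shaderlab
Pass
{
    Stencil
    {
        Ref 1            // the developer-set reference value
        Comp Equal       // keep the fragment only if the buffer value equals Ref
        Pass Keep        // leave the stencil buffer unchanged when the test passes
    }
    // ... vertex/fragment program ...
}
```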
Depth Test: compares the fragment's depth value with the depth value already in the depth buffer. A fragment that fails the depth test loses the right to modify the depth buffer; a fragment that passes may still choose not to write its depth into the buffer. Whether to write is up to the developer, and many transparency effects rely on passing the depth test while turning depth writing off.
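Those two choices (whether to test, whether to write) map directly onto ShaderLab render state. A sketch of the typical setup for a semi-transparent object:

```shaderlab
Pass
{
    ZTest LEqual   // the default test: pass if the fragment is at least as close as the stored depth
    ZWrite Off     // still tested, but do not refresh the depth buffer (common for transparency)
    // ... vertex/fragment program ...
}
```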
Color blending (Blend): if all tests are passed, the color blending step finally arrives. For an opaque object, we can simply let its color replace whatever is in the color buffer (other treatments exist; it mainly depends on the desired effect). For transparent objects, more elaborate blending is usually needed, along the lines of Photoshop's classic layer blending modes such as Multiply and Overlay.
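Blending is also driven from ShaderLab. A sketch of two common modes, traditional alpha blending and a Photoshop-style Multiply:

```shaderlab
// Traditional alpha blending: final = src.rgb * src.a + dst.rgb * (1 - src.a)
Blend SrcAlpha OneMinusSrcAlpha

// Multiply-style blending: final = src.rgb * dst.rgb
Blend DstColor Zero
```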
Reference materials:
World, View and Projection Matrix Internals, http://gsteph.blogspot.com/2012/05/world-view-and-projection-matrix.html
The Essentials of Getting Started with Unity Shader, People's Posts and Telecommunications Press, first edition, June 2016, written by Feng Lele