This series of notes is intended to help prepare for technical interviews for computer graphics-related positions, such as Graphics Engineer, Rendering Engineer, or Technical Artist. Rather than providing detailed explanations of the concepts, I'll be listing key terms, concepts, and bullet points, with occasional brief elaborations sourced from Wikipedia or ChatGPT.
This section serves as a summary of all the basic concepts related to computer graphics.
Transform
Math Basics
Vector arithmetic: dot product, cross product, constructing an orthonormal basis from a single vector
Geometry: Triangle barycentric coordinates
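Barycentric coordinates come up constantly (rasterization interpolation, point-in-triangle tests). A minimal 2D sketch using the standard area-ratio (Cramer's rule) form; the function name and sample triangle are made up for illustration:

```python
# Hypothetical helper: barycentric coordinates of point p in triangle (a, b, c),
# computed via signed area ratios (Cramer's rule) in 2D.
def barycentric(p, a, b, c):
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    # Twice the signed area of the whole triangle (sign encodes winding).
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return u, v, 1.0 - u - v  # weights for a, b, c; they always sum to 1

# The centroid of a triangle has weights (1/3, 1/3, 1/3):
print(barycentric((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0)))
```

A point is inside the triangle iff all three weights are in [0, 1].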
Transform
Homogeneous Coordinates: linear / affine transforms
Rotations: based on XYZ space, quaternion, polar coordinates
Normal Transform: use the inverse transpose of the model matrix (the transpose of the inverse), so normals stay perpendicular to the surface under non-uniform scale
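A quick numeric check of why the normal needs the inverse transpose (the matrix and vectors are made-up values; a non-uniform scale is the classic failure case):

```python
import numpy as np

# Under a non-uniform scale, transforming the normal with M itself breaks
# perpendicularity; (M^-1)^T preserves it.
M = np.diag([2.0, 1.0, 1.0])          # non-uniform scale as the model matrix
tangent = np.array([1.0, 2.0, 0.0])   # a tangent direction on the surface
normal = np.array([2.0, -1.0, 0.0])   # perpendicular to the tangent

t_world = M @ tangent
n_wrong = M @ normal                   # no longer perpendicular to t_world
n_right = np.linalg.inv(M).T @ normal  # inverse transpose: still perpendicular

print(np.dot(t_world, n_wrong))  # nonzero: the "transformed" normal is wrong
print(np.dot(t_world, n_right))  # ~0: perpendicularity preserved
```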
Rendering Pipeline
MVP Transform
Model Transform: from Object space to World space, following SRT order (Scale, Rotate, Translate).
View Transform: from World space to Camera space
Projection Transform: from Camera space to Clip space (perspective or orthographic)
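The SRT composition of the model transform can be sketched in homogeneous coordinates (the angle and vectors are made-up values; note the matrices apply right-to-left, so M = T · R · S scales first):

```python
import numpy as np

# Build a model matrix in SRT order: scale first, then rotate, then translate.
def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

def translate(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

M = translate(5.0, 0.0, 0.0) @ rotate_z(np.pi / 2) @ scale(2.0, 2.0, 2.0)
p_object = np.array([1.0, 0.0, 0.0, 1.0])  # w = 1: a point, not a direction
print(M @ p_object)  # scaled to (2,0,0), rotated to (0,2,0), moved to (5,2,0)
```

A direction would use w = 0, which makes it immune to the translation column.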
Rendering Pipeline
Application Stage: runs on the CPU; prepares the scene and submits vertex data and draw calls to the GPU.
Vertex Stage: runs in the Vertex Shader on the GPU; transforms each vertex (e.g. the MVP transform) into clip space.
Rasterization Stage: between the Vertex and Fragment Shaders; triangles are converted into fragments, and vertex attributes are interpolated (via barycentric coordinates).
Fragment Stage: runs in the Fragment Shader on the GPU; computes each fragment's color, then performs alpha / stencil / depth tests and blending.
Shading Models
Trivial Models
NdotV
NdotL
Empirical Models
Phong Model: Diffuse + Ambient + Specular [ (R · V)^n ], where R is the reflection of L about N.
Blinn-Phong Model: Diffuse + Ambient + Specular [ (N · H)^n ], where H is the halfway vector, the normalized (L + V).
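A minimal Blinn-Phong evaluation for a single light; the k_a / k_d / k_s coefficients and the shininess exponent are made-up material parameters:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Blinn-Phong: ambient + diffuse (N.L) + specular ((N.H)^shininess).
def blinn_phong(n, l, v, light_color, k_a=0.1, k_d=0.6, k_s=0.3, shininess=32):
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                           # halfway vector
    diffuse = k_d * max(np.dot(n, l), 0.0)
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess
    return (k_a + diffuse + specular) * light_color

# Light and viewer both head-on: every term reaches its maximum.
color = blinn_phong(np.array([0.0, 1.0, 0.0]),   # surface normal
                    np.array([0.0, 1.0, 0.0]),   # direction to light
                    np.array([0.0, 1.0, 0.0]),   # direction to viewer
                    np.array([1.0, 1.0, 1.0]))
print(color)  # 0.1 + 0.6 + 0.3 = 1.0 per channel
```

Using H instead of R avoids computing the reflected vector and behaves better at grazing angles, which is why Blinn-Phong is the usual variant.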
Light Sources
Punctual Lights: hard shadow, no spatial volume, point / spotlights
Area Lights: soft shadow, light source has a shape.
Lambert's Cosine Law (with inverse-square falloff): E = I cos θ / r².
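The formula above as a two-line sketch, checking its two behaviors (the intensity value is arbitrary):

```python
import math

# Irradiance from a point light of intensity I at distance r, where the light
# direction makes angle theta with the surface normal: E = I cos(theta) / r^2.
def irradiance(intensity, r, theta):
    return intensity * math.cos(theta) / (r * r)

# Doubling the distance quarters the irradiance (inverse-square falloff):
print(irradiance(100.0, 1.0, 0.0))  # 100.0
print(irradiance(100.0, 2.0, 0.0))  # 25.0
# Grazing light (theta -> 90 degrees) contributes nothing (cosine law):
print(irradiance(100.0, 1.0, math.pi / 2))  # ~0
```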
Signal & Sampling
Sampling Theorem
Artifacts / Aliasing: sample rate too low to reconstruct the original signal
Nyquist-Shannon Sampling Theorem: A signal whose frequencies do not exceed the Nyquist Limit (i.e. bandlimited to Nyquist Limit) can be reconstructed exactly from samples.
Anti-aliasing
Filters / Convolutions
Low Pass Filter: box, Gaussian (essentially a blur)
High Pass Filter: keeps high frequencies (essentially edge detection)
Antialiasing: super sampling, post-processing, temporal, or deep learning
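A low-pass filter as a convolution, in one dimension (the signal values are made up): each output sample averages a window of inputs, which suppresses the high frequencies responsible for aliasing.

```python
import numpy as np

# 1D box filter: convolve with a kernel of equal weights that sum to 1.
def box_filter(signal, width):
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode='same')

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # a hard edge
print(box_filter(step, 3))  # the edge becomes a gradual ramp
```

This is the same idea behind supersampling: render at a higher rate, low-pass filter, then downsample.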
Texture
Texture Lookup
Texture lookup: maps a surface point (x, y, z) to texture coordinates (u, v) in [0, 1]²
Addressing Mode: what to do when (u, v) falls outside [0, 1]: repeat, clamp to edge, clamp to border, mirrored repeat
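Three of the addressing modes reduce to tiny formulas on a single coordinate (function names are made up for illustration):

```python
import math

# Remap a coordinate u that may fall outside [0, 1].
def repeat(u):
    return u - math.floor(u)          # wrap: 1.25 -> 0.25

def clamp(u):
    return min(max(u, 0.0), 1.0)      # clamp to edge: 1.25 -> 1.0

def mirror(u):
    u = abs(u) % 2.0                  # reflect every other tile: 1.25 -> 0.75
    return 2.0 - u if u > 1.0 else u

print(repeat(1.25), clamp(1.25), mirror(1.25))  # 0.25 1.0 0.75
```

Clamp to border differs in that out-of-range lookups return a fixed border color rather than a remapped texel.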
Texture Sampling
Magnification: interpolation (nearest, bilinear, bicubic)
Minification: mipmaps (a pre-filtered chain of progressively downsampled copies of the texture; trilinear filtering blends between adjacent levels)
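Bilinear interpolation of a 2×2 texel neighborhood is just three lerps: two along x, then one along y (the texel values here are made up):

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Bilinear filtering: blend the four nearest texels by the fractional
# position (tx, ty) inside the texel quad.
def bilinear(c00, c10, c01, c11, tx, ty):
    top = lerp(c00, c10, tx)       # blend along x on the lower row
    bottom = lerp(c01, c11, tx)    # blend along x on the upper row
    return lerp(top, bottom, ty)   # blend the two rows along y

# Sampling at the exact center averages all four texels:
print(bilinear(0.0, 1.0, 1.0, 0.0, 0.5, 0.5))  # 0.5
```

Bicubic filtering extends the same idea to a 4×4 neighborhood with cubic weights.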
Spatial Data Structure
Geometry Queries
Nearest Point: on a point, on a line, on a segment, on a triangle, on a 3D triangle, on a triangular mesh.
Intersection: ray-sphere, ray-plane, ray-triangle, ray-mesh
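Ray-sphere is the classic interview warm-up: substitute the ray o + t·d into the implicit sphere equation |p − c|² = r², giving a quadratic in t. A dependency-free sketch:

```python
import math

# Returns the nearest non-negative hit distance t, or None on a miss.
def ray_sphere(o, d, center, radius):
    oc = [o[i] - center[i] for i in range(3)]
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                             # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)      # nearer root
    if t < 0.0:
        t = (-b + math.sqrt(disc)) / (2.0 * a)  # origin inside the sphere
    return t if t >= 0.0 else None

# Unit sphere at the origin, ray from z = -5 along +z: front face at t = 4.
print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # 4.0
```

Ray-plane and ray-triangle follow the same substitute-and-solve pattern (ray-triangle additionally checks the barycentric coordinates of the hit).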
Spatial Acceleration Data Structure
Bounding Boxes: AABB, Ray-AABB intersection
Bounding Volume Hierarchy: hierarchical bounding boxes.
Space Partitioning: Octree, k-d tree (these partition space, whereas a BVH partitions the object list)
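The ray-AABB test used at every BVH node is the slab method: intersect the ray with the pair of planes on each axis and keep the overlap of the three [t_near, t_far] intervals. A sketch (passing the precomputed reciprocal direction, with a huge value standing in for 1/0 on axes the ray doesn't move along):

```python
# The ray hits the box iff the three per-axis t-intervals overlap
# somewhere at t >= 0.
def ray_aabb(o, inv_d, box_min, box_max):
    t_near, t_far = -float('inf'), float('inf')
    for i in range(3):
        t0 = (box_min[i] - o[i]) * inv_d[i]
        t1 = (box_max[i] - o[i]) * inv_d[i]
        if t0 > t1:
            t0, t1 = t1, t0          # handle a negative direction component
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
    return t_near <= t_far and t_far >= 0.0

# Unit box around the origin, ray along +z from z = -5: hit.
print(ray_aabb((0, 0, -5), (1e30, 1e30, 1.0), (-1, -1, -1), (1, 1, 1)))
```

Because it is branch-light and divide-free (the reciprocal is computed once per ray), this test is cheap enough to run at every node during BVH traversal.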
Radiometry
Physics
Radiant Energy: Q [J]
Radiant Flux: Φ = dQ / dt [J/s = W]
Irradiance: E(p) = dΦ / dA [W / m^2]
Radiance: L(p, ω) = dE / (dω cosθ) [W/ (m^2 · sr)]
BRDF
Bidirectional Reflectance Distribution Function: fr(p, ω_i → ω_o), the ratio of reflected radiance to incident irradiance.
Light Transport Equation / Rendering Equation (LTE, RE)
Lo(p, ω_o) = Le(p, ω_o) + ∫_{H²} fr(p, ω_i → ω_o) Li(p, ω_i) cos θ dω_i
It means that for a query point p and an outgoing direction ω_o, the outgoing radiance consists of two parts:
Le(p, ω_o) : its own emittance
∫_{H²} fr(p, ω_i → ω_o) Li(p, ω_i) cos θ dω_i: all incoming radiance reflected toward ω_o at this point, weighted by the BRDF
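The integral is evaluated in practice by Monte Carlo sampling. A sketch for the simplest case, a Lambertian BRDF (fr = albedo / π) under constant incoming radiance Li, with uniform hemisphere sampling (pdf = 1 / 2π); the albedo and Li values are made up, and the analytic answer for this setup is albedo · Li:

```python
import math, random

# Monte Carlo estimate of the reflection integral:
#   Lo ≈ (1/N) Σ fr * Li * cosθ / pdf
random.seed(0)
albedo, L_i, N = 0.8, 1.0, 200_000
total = 0.0
for _ in range(N):
    # Uniform hemisphere sampling: cosθ is uniform in [0, 1].
    cos_theta = random.random()
    f_r = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)
    total += f_r * L_i * cos_theta / pdf
print(total / N)  # converges to albedo * L_i = 0.8
```

Production renderers use cosine-weighted (importance) sampling instead, which cancels the cos θ term and reduces variance.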
Path Tracing
for each pixel:
    shoot several rays through the pixel
    for each ray:
        for each triangle:
            if the ray hits the triangle, keep the closest hit
        shade the closest hit; recursively trace bounce rays for indirect lighting
    average the ray results into the pixel color
For more, see CG Tech Interview #2 Physically Based Rendering.