Partner: Thanh Hai Mai
In this assignment, we implemented a ray tracer in C++ from scratch. We used the FreeImage library to save images as PNG files.
- Render transformed spheres, ellipsoids and triangles as instructed
- Ray trace recursively (with reflections) (to turn off reflections, set "reflection 0.0" in the scene file)
- Hard shadows
- Acceleration: kd-tree (we can render the "thousand balls" scene in under 8 s and the other basic test scenes in under 10 s)
- SOFT SHADOWS with SPHERE LIGHTS (see images "test5_SphereLight_20.png" to "test5_SphereLight_400.png" to see different sampling rates)
- MONTE CARLO ANTIALIASING with different sampling rates (see "test4_RefractiveIndex2_Monte_Carlo_Antialiasing.png" and the other files with "Monte_Carlo" in their names) (The image on the right is the original)
- DEPTH OF FIELD with different sampling rates, focal lengths and aperture sizes (see "test5_DOF_FL1" to "test_DOF_FL10" for different focal lengths)
- REFRACTION (specified by refraction and refractionIndex in the scene file) (see the "test4" pictures for different refraction indices)
- GLOSSY (specified by isGlossy, glossyDegreeOfBlur and glossySamplingRate) (see the "test7" pictures for different degrees of blur)
- TEXTURE on Sphere (specified by texturedSphere in the scene) (see the "test8" pictures for spheres with different textures)
- Snowman scene: See above
-----Other images demonstrate additional examples and test scenes
- We read the scene file and keep track of the current transformation-matrix stack.
- Each primitive holds a worldToObj and an objToWorld transformation.
- We create a [-1, 1] x [-1, 1] virtual screen and scale it according to the aspect ratio derived from the resolution and the fov specified in the scene file.
- For each pixel, we shoot a ray from the camera, which is assumed to be at the origin, and use the gluLookAt() transformation to map the pixel on the virtual screen back into world space. With that world-space point, we find the direction of the ray shot from the camera through the pixel.
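The camera-ray construction above can be sketched as follows (a minimal sketch; `primaryRayDir` and its parameters are illustrative names, not the assignment's actual API):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

const double PI = 3.14159265358979323846;

// Build the camera basis the way gluLookAt does, then shoot a ray through
// the center of pixel (i, j) on a [-1, 1] x [-1, 1] virtual screen scaled
// by the vertical fov and the image aspect ratio.
Vec3 primaryRayDir(Vec3 eye, Vec3 center, Vec3 up,
                   double fovyDeg, int width, int height, int i, int j) {
    Vec3 w = normalize(sub(eye, center));          // camera looks down -w
    Vec3 u = normalize(cross(up, w));
    Vec3 v = cross(w, u);
    double tanY = std::tan(fovyDeg * PI / 360.0);  // tan(fovy / 2)
    double tanX = tanY * (double)width / height;   // scale by aspect ratio
    double a = tanX * ((i + 0.5) - width  / 2.0) / (width  / 2.0);
    double b = tanY * (height / 2.0 - (j + 0.5)) / (height / 2.0);
    // World-space direction through the pixel: a*u + b*v - w.
    return normalize({a*u.x + b*v.x - w.x,
                      a*u.y + b*v.y - w.y,
                      a*u.z + b*v.z - w.z});
}
```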
- Given the ray, we loop over the vector that stores all the primitives to test whether the ray intersects any of them.
- To find an intersection, we transform the ray from world space into the object's space, compute the intersection point there, and then transform that point and the normal at the intersection back into world space.
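A minimal sketch of this transform-then-intersect step for a unit sphere (Mat4 and the function names are illustrative; the real code also transforms the normal back using the transpose of worldToObj):

```cpp
#include <cmath>

// Transform the ray world -> object with worldToObj, intersect a unit
// sphere in object space, and report the hit as a world-space t.
struct Vec3 { double x, y, z; };
struct Mat4 { double m[4][4]; };

Mat4 identity() {
    Mat4 I = {};
    for (int i = 0; i < 4; ++i) I.m[i][i] = 1.0;
    return I;
}
static Vec3 xformPoint(const Mat4& M, Vec3 p) {    // w = 1: translation applies
    return {M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3],
            M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3],
            M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3]};
}
static Vec3 xformDir(const Mat4& M, Vec3 d) {      // w = 0: rotation/scale only
    return {M.m[0][0]*d.x + M.m[0][1]*d.y + M.m[0][2]*d.z,
            M.m[1][0]*d.x + M.m[1][1]*d.y + M.m[1][2]*d.z,
            M.m[2][0]*d.x + M.m[2][1]*d.y + M.m[2][2]*d.z};
}

// Smallest positive t along the world-space ray, or -1 on a miss. The
// transformed direction is deliberately NOT renormalized, so t keeps its
// world-space meaning.
double intersectUnitSphere(const Mat4& worldToObj, Vec3 origin, Vec3 dir) {
    Vec3 o = xformPoint(worldToObj, origin);
    Vec3 d = xformDir(worldToObj, dir);
    double a = d.x*d.x + d.y*d.y + d.z*d.z;
    double b = 2.0 * (o.x*d.x + o.y*d.y + o.z*d.z);
    double c = o.x*o.x + o.y*o.y + o.z*o.z - 1.0;
    double disc = b*b - 4*a*c;
    if (disc < 0) return -1.0;
    double t = (-b - std::sqrt(disc)) / (2*a);
    return (t > 0) ? t : (-b + std::sqrt(disc)) / (2*a);
}
```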
- If there is no intersection, we return the background color.
- If there is an intersection, we create shadow rays from the intersection point to every light in the scene. If a shadow ray intersects another primitive, the point is in shadow; otherwise we compute the light's contribution at the point using Lambertian and Phong shading.
- We then create a REFLECTION ray and trace it recursively until the recursion depth reaches maxdepth.
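The per-light shading term above can be sketched as follows (a minimal sketch; the names and scalar return are illustrative, a Blinn-style half-vector Phong highlight is assumed, and the real shading works per color channel):

```cpp
#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(dot(v, v));
    return {v.x/l, v.y/l, v.z/l};
}

// n: surface normal, l: direction to the light, e: direction to the eye
// (all unit length). inShadow comes from the shadow-ray test.
double lambertPhong(Vec3 n, Vec3 l, Vec3 e,
                    double kd, double ks, double shininess, bool inShadow) {
    if (inShadow) return 0.0;                      // hard shadow: light is blocked
    double diff = std::max(0.0, dot(n, l));        // Lambert term
    Vec3 h = normalize({l.x + e.x, l.y + e.y, l.z + e.z});  // half-vector
    double spec = std::pow(std::max(0.0, dot(n, h)), shininess);
    return kd * diff + ks * spec;
}
```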
***ABOUT EXTRA CREDIT:
- SOFT SHADOW: We created a SphereLight class. For each intersection, we shoot a number of random rays from the intersection point to the sphere light; the number of rays is set by the sampling rate. We then use Monte Carlo integration to average the color at the intersection.
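The sphere-light sampling can be sketched as below (a minimal sketch; the names and fixed seed are illustrative, and `occluded` is a hypothetical stand-in for the real shadow-ray intersection test):

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
const double PI = 3.14159265358979323846;

// Uniform random point on the sphere of radius r centered at c.
static Vec3 samplePointOnSphere(Vec3 c, double r, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double z = 2.0 * uni(rng) - 1.0;               // cos(theta) uniform in [-1, 1]
    double phi = 2.0 * PI * uni(rng);
    double s = std::sqrt(1.0 - z * z);
    return {c.x + r*s*std::cos(phi), c.y + r*s*std::sin(phi), c.z + r*z};
}

// Fraction of shadow rays from p that reach the light (1 = fully lit,
// 0 = fully in shadow): a Monte Carlo estimate of the light's visibility.
template <typename Occluded>
double softShadowFactor(Vec3 p, Vec3 lightCenter, double lightRadius,
                        int samples, Occluded occluded) {
    std::mt19937 rng(42);                          // fixed seed: reproducible sketch
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        Vec3 s = samplePointOnSphere(lightCenter, lightRadius, rng);
        if (!occluded(p, s)) ++visible;            // shadow ray p -> s
    }
    return (double)visible / samples;
}
```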
- MONTE CARLO ANTIALIASING: We created a SuperSampling class for antialiasing; mSamplingRate in the SuperSampling class determines the sampling rate. For each pixel, we randomly choose #mSamplingRate positions and shoot a ray through each, then use Monte Carlo integration to find the average color for the pixel.
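The per-pixel averaging can be sketched as follows (a minimal sketch; the names and fixed seed are illustrative, and `trace` is a hypothetical stand-in for the radiance returned by tracing a ray):

```cpp
#include <random>

// Jitter `samples` random positions inside pixel (i, j), trace a ray
// through each, and return the Monte Carlo average.
template <typename Radiance>
double antialiasedPixel(int i, int j, int samples, Radiance trace) {
    std::mt19937 rng(1234);                        // fixed seed: reproducible sketch
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double sum = 0.0;
    for (int s = 0; s < samples; ++s)
        sum += trace(i + uni(rng), j + uni(rng));  // random point inside the pixel
    return sum / samples;                          // Monte Carlo average
}
```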
- DEPTH OF FIELD: We added focal-length, aperture-size and sampling-rate variables to the Camera class.
+ If isDepthOfField is specified, SuperSampling shoots #DOFSamplingRate rays from the camera through the area spanned by the aperture.
+ The focal length determines the distance between the screen and the focal plane; each of these rays intersects the focal plane at some point A.
+ From the primary ray, we find the focal point F on the focal plane.
+ With F and A, we determine the ray's direction in world space.
+ Do normal ray tracing and find the color returned by each ray.
+ Use Monte Carlo integration to average the colors.
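The focal-point construction can be sketched as below (a simplified sketch assuming the camera sits at the origin looking down -z; `dofRayDir` and its parameters are illustrative names):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/l, v.y/l, v.z/l};
}

// Given the primary ray direction, the focal length, and a sample point on
// the aperture (lens plane z = 0), return the depth-of-field ray direction
// from the aperture point toward the focal point F.
Vec3 dofRayDir(Vec3 primaryDir, double focalLength, Vec3 aperturePoint) {
    double t = -focalLength / primaryDir.z;        // reach the plane z = -focalLength
    Vec3 F = {primaryDir.x * t, primaryDir.y * t, -focalLength};  // focal point
    return normalize({F.x - aperturePoint.x,
                      F.y - aperturePoint.y,
                      F.z - aperturePoint.z});
}
```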
- REFRACTION: We added a function that computes the transmitted ray when a refraction variable is specified. Using Snell's law, we compute the transmitted ray from the incident ray and the refractive indices of the two materials. With the new ray, we ray trace recursively and add the resulting color to the current color.
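The Snell's-law step can be sketched as follows (a minimal sketch, assuming `i` is the unit incident direction, `n` the unit normal pointing against `i`, and `eta = n1/n2` the ratio of refractive indices; the name `refractDir` is illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the transmitted direction, or (0, 0, 0) when total internal
// reflection occurs (no transmitted ray exists).
Vec3 refractDir(Vec3 i, Vec3 n, double eta) {
    double cosi = -dot(i, n);
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);
    if (k < 0.0) return {0.0, 0.0, 0.0};           // total internal reflection
    double c = eta * cosi - std::sqrt(k);
    return {eta * i.x + c * n.x, eta * i.y + c * n.y, eta * i.z + c * n.z};
}
```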
- GLOSSY: We added isGlossy to the BRDF to flag whether a primitive is glossy. For each reflected ray, we generate #glossySamplingRate randomly perturbed reflected rays and average the reflected color at the intersection.
+ We construct a plane perpendicular to the reflected ray, pick a random point on a square in that plane (the square's size is the degree of blur), and use it to set the direction of each perturbed reflected ray.
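The perturbation can be sketched as below (a minimal sketch; the names and fixed seed are illustrative, and `r` is assumed to be the unit reflected direction):

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/l, v.y/l, v.z/l};
}

// Build an orthonormal basis (u, v) perpendicular to r, offset r by a
// random point on a blur x blur square in that plane, and renormalize.
Vec3 perturbReflection(Vec3 r, double blur, unsigned seed = 7) {
    Vec3 a = (std::fabs(r.x) > 0.9) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(a, r));               // first axis of the square
    Vec3 v = cross(r, u);                          // second axis
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uni(-0.5, 0.5);
    double du = blur * uni(rng), dv = blur * uni(rng);
    return normalize({r.x + du*u.x + dv*v.x,
                      r.y + du*u.y + dv*v.y,
                      r.z + du*u.z + dv*v.z});
}
```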
- TEXTURE on SPHERE: We added a Texture class that stores the texture image. For each point on the sphere, we convert it to UV coordinates and use them to look up the color at that position in the texture.
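The UV conversion can be sketched as follows (a minimal sketch using spherical angles for a point on the unit sphere in object space; the name `sphereUV` and the pole axis choice are illustrative):

```cpp
#include <cmath>
#include <algorithm>
#include <utility>

struct Vec3 { double x, y, z; };
const double PI = 3.14159265358979323846;

// Map a point on the unit sphere to (u, v) in [0, 1]; the texture lookup
// then indexes the image at roughly (u * width, v * height).
std::pair<double, double> sphereUV(Vec3 p) {
    double y = std::max(-1.0, std::min(1.0, p.y)); // guard acos's domain
    double theta = std::acos(y);                   // polar angle from +y
    double phi = std::atan2(p.z, p.x);             // azimuth in (-pi, pi]
    return { (phi + PI) / (2.0 * PI),              // u wraps around the equator
             theta / PI };                         // v runs pole to pole
}
```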
- PARALLELISM: We use OpenMP to utilize all cores and speed up rendering.
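The OpenMP usage amounts to parallelizing the outer loop over scanlines, which is safe because each row writes a disjoint range of pixels (a minimal sketch; `tracePixel` is a hypothetical stand-in for the real per-pixel trace, and the pragma is simply ignored if OpenMP is disabled):

```cpp
#include <vector>

static double tracePixel(int /*x*/, int /*y*/) { return 1.0; }  // stand-in

std::vector<double> renderImage(int width, int height) {
    std::vector<double> image(width * height);
    #pragma omp parallel for
    for (int y = 0; y < height; ++y)               // rows are independent
        for (int x = 0; x < width; ++x)
            image[y * width + x] = tracePixel(x, y);
    return image;
}
```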
Milestone: We implemented all classes needed for the ray tracer and a loader for Scene.test files. We can trace and display a primitive with the camera at the origin and the screen one unit in front of the camera. There is no shading yet. Here are a few geometric primitives that we can display: