Homework 5 left me with a few bugs that cost a significant number of points and took quite a long time to debug. I combed through my code thoroughly, cleaned it up a little, and in the process found all 3-4 small bugs in my code (some of which were silly typos).
I implemented a bounding volume hierarchy (an AABB tree) by creating a bounding-box "Shape" object that stores my actual shapes; each shape type has its own method for constructing its bounding box. With this structure, the ray tracer renders scene5.test in 12 seconds (after adding all of this homework's extra effects, which I did not have time to optimize) and scene7.test in 27 seconds.
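At the core of an AABB tree is a cheap ray-versus-box test that is run before testing the shapes a box contains. A minimal sketch of the standard "slab" test is below; the names (`Vec3`, `Ray`, `Aabb`, `hitAabb`) are illustrative, not the actual homework code.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative types for the sketch (not the homework's actual classes).
struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, dir; };   // dir need not be normalized here
struct Aabb { Vec3 lo, hi; };        // min and max corners of the box

// Returns true if the ray hits the box anywhere at t >= 0.
bool hitAabb(const Aabb& box, const Ray& r) {
    double tmin = 0.0, tmax = 1e30;
    const double o[3]  = { r.origin.x, r.origin.y, r.origin.z };
    const double d[3]  = { r.dir.x,    r.dir.y,    r.dir.z };
    const double lo[3] = { box.lo.x,   box.lo.y,   box.lo.z };
    const double hi[3] = { box.hi.x,   box.hi.y,   box.hi.z };
    for (int a = 0; a < 3; ++a) {
        double inv = 1.0 / d[a];            // +/-infinity when d[a] == 0 is fine
        double t0 = (lo[a] - o[a]) * inv;
        double t1 = (hi[a] - o[a]) * inv;
        if (inv < 0.0) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;      // the three slabs do not overlap
    }
    return true;
}
```

If this test fails, none of the shapes inside the box need to be tested, which is where the large speedup on scene5 and scene7 comes from.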
Shooting 9 rays per pixel instead of just one smoothed out the edges of all shapes in the images I rendered. This supersampling was done in the main rendering loop.
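The 9-rays-per-pixel idea can be sketched as averaging a 3x3 grid of sub-pixel samples; this is an assumed layout, since the writeup does not say how the 9 rays were placed. `shade` stands in for tracing a ray through the given image-plane coordinates.

```cpp
#include <cassert>
#include <cmath>

struct Color { double r, g, b; };

// Average a 3x3 grid of sub-pixel samples for pixel (px, py).
// shade(u, v) is a stand-in for tracing a ray through image coords (u, v).
template <typename ShadeFn>
Color renderPixel(int px, int py, ShadeFn shade) {
    Color sum = {0, 0, 0};
    for (int sy = 0; sy < 3; ++sy) {
        for (int sx = 0; sx < 3; ++sx) {
            // Place the 9 samples on a regular grid inside the pixel.
            double u = px + (sx + 0.5) / 3.0;
            double v = py + (sy + 0.5) / 3.0;
            Color c = shade(u, v);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
    }
    return {sum.r / 9.0, sum.g / 9.0, sum.b / 9.0};
}
```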
I then went on to implement soft shadows by first creating an area light. Unlike a point or directional light, an area light has actual shape and extent; in the case shown, it is a square at the top of the scene. The light itself is of course white, and the shadows it casts are soft because each surface point may see only a fraction of the entire area light. To generate the many shadow rays needed to test this, I simply used rand() to create an offset across the light, then averaged the colors returned by tracing all of the random rays. I can also choose how many rays to shoot: the images show soft shadows with only 1 ray and with 50 rays (861 s), the former being very grainy. The last image is from when I did not yet know how to shoot the rays correctly.
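The rand()-based sampling described above can be sketched as follows; `lightHits` is a hypothetical stand-in for "trace a shadow ray toward the sampled point on the light and return 1.0 if it is unoccluded", and the square light is assumed to have unit width and height.

```cpp
#include <cassert>
#include <cstdlib>

// Uniform random value in [0, 1], built on rand() as in the writeup.
double randUnit() { return rand() / (double)RAND_MAX; }

// Average visibility of the area light over numRays random samples.
// lightHits(ox, oy) returns 1.0 if the shadow ray toward the point at
// offset (ox, oy) on the light reaches it, 0.0 if it is blocked.
template <typename VisibleFn>
double softShadowFactor(int numRays, VisibleFn lightHits) {
    double sum = 0.0;
    for (int i = 0; i < numRays; ++i) {
        double ox = randUnit();   // random offset across the square light
        double oy = randUnit();
        sum += lightHits(ox, oy);
    }
    return sum / numRays;         // fraction of the light that is visible
}
```

Points in the penumbra see only some of the samples, so the average lands between 0 and 1 and the shadow edge fades smoothly; with only 1 ray the average is always 0 or 1, which is exactly the graininess visible in the 1-ray image.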
A similar method of generating random rays was used for glossy reflection. Depending on a glossy index passed into the parser, I limited the random ray directions to a small region around the mirror direction. This blurs the reflection on the object by averaging the colors the traced rays return. I tried to do something similar for refractive objects, but was unable to test it because it simply took too long to render. In the second image, you can see that the big egg in the back is glossy.
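One way the glossy-index perturbation might look is below: jitter the mirror direction inside a small box whose half-width is the glossy index, then renormalize. The exact jitter shape is an assumption; the writeup only says the rays were limited to a small region.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct V3 { double x, y, z; };

V3 normalize(V3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Jitter the mirror direction inside a box of half-width `glossy`;
// glossy == 0 gives a perfect mirror, larger values blur the reflection.
V3 glossyDir(V3 reflectDir, double glossy) {
    double jx = (rand() / (double)RAND_MAX - 0.5) * 2.0 * glossy;
    double jy = (rand() / (double)RAND_MAX - 0.5) * 2.0 * glossy;
    double jz = (rand() / (double)RAND_MAX - 0.5) * 2.0 * glossy;
    return normalize({reflectDir.x + jx, reflectDir.y + jy, reflectDir.z + jz});
}
```

Averaging the colors traced along many such directions produces the blur seen on the glossy egg.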
Refraction was a bit trickier, because I couldn't easily tell whether it was working, so this feature took quite a while to debug. The images shown use refractive indices of 1.01 and 2.46 (diamond). Based on the refractive indices of the air (1.0) and of the object (which I chose arbitrarily), I was able to bend the eye ray as it entered and left the refractive shape. Depending on the refractive index, this can either make the object look transparent or make it act as a lens, and putting two refractive objects together simply compounds the distortion. The last image shown is a refractive sphere inside another refractive sphere (though I'm not sure if this is the correct result).
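The standard way to bend a ray at an interface is Snell's law; a sketch consistent with the description (entering and leaving the shape, indices n1 and n2 on either side) is below. Names are illustrative, and the `refractZ` helper exists only for quick checks.

```cpp
#include <cassert>
#include <cmath>

struct Vec { double x, y, z; };

double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Snell's law refraction. d and n must be unit vectors, with n pointing
// against d (toward the incoming side); n1 is the index on the incoming
// side, n2 on the outgoing side. Returns false on total internal reflection.
bool refract(Vec d, Vec n, double n1, double n2, Vec* out) {
    double eta   = n1 / n2;
    double cosI  = -dot(d, n);
    double sinT2 = eta * eta * (1.0 - cosI * cosI);
    if (sinT2 > 1.0) return false;      // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    out->x = eta * d.x + (eta * cosI - cosT) * n.x;
    out->y = eta * d.y + (eta * cosI - cosT) * n.y;
    out->z = eta * d.z + (eta * cosI - cosT) * n.z;
    return true;
}

// Convenience wrapper for quick checks: z of the refracted direction,
// or 2.0 (an impossible component of a unit vector) on TIR.
double refractZ(Vec d, Vec n, double n1, double n2) {
    Vec o{0, 0, 0};
    return refract(d, n, n1, n2, &o) ? o.z : 2.0;
}
```

At a dense-to-light boundary (e.g. diamond at 2.46 back into air) steep rays undergo total internal reflection, which is one reason nested refractive spheres produce such hard-to-predict results.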
Depth of Field
For depth of field, I added an extra parameter, lenssize, to my parser. Below is an image with a lenssize of 0.5; as you can see, the camera is focused on the cyan ball. For the image I used 10 samples per point on the lens: a ray was shot for each of the 10 lens samples per pixel, and the returned colors were averaged.
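The lens sampling can be sketched as jittering the eye position across a lens of width lenssize while aiming every ray at the same focal point, then averaging. The lens is assumed to lie in the xy plane, and `trace` is a hypothetical stand-in for tracing a ray from the jittered origin through the focal point.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct P3 { double x, y, z; };

// Jitter the eye across a square lens of width `lenssize` (xy plane assumed).
P3 sampleLensOrigin(P3 eye, double lenssize) {
    double dx = (rand() / (double)RAND_MAX - 0.5) * lenssize;
    double dy = (rand() / (double)RAND_MAX - 0.5) * lenssize;
    return {eye.x + dx, eye.y + dy, eye.z};
}

// Average `samples` rays, each from a jittered lens origin toward the
// shared focal point; trace(origin, focal) returns the traced color value.
template <typename TraceFn>
double depthOfFieldPixel(P3 eye, P3 focal, double lenssize,
                         int samples, TraceFn trace) {
    double sum = 0.0;
    for (int i = 0; i < samples; ++i) {
        sum += trace(sampleLensOrigin(eye, lenssize), focal);
    }
    return sum / samples;
}
```

Objects at the focal distance project to the same point for every lens sample and stay sharp (the cyan ball); everything else is averaged over slightly different positions and blurs.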
For ambient occlusion, lighting is disregarded and only each point's proximity to other geometry is taken into account. From each point my eye rays intersected, I shot rays randomly within a hemisphere (oriented around the normal at the intersection). If a ray did not intersect another object, I made the point proportionally whiter. I limited the distance my rays could travel in order to keep the shading more realistic. The images show 100 samples per point and 512 samples per point. The first image was from before I realized how to shoot rays randomly in a hemispherical shape; you can see that some of the shading is already correct even there.
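The step that tripped me up, drawing a random direction in the hemisphere around the normal, can be done by rejection-sampling a direction on the whole sphere and flipping it if it points below the surface. This is one standard approach, not necessarily the one my code uses; the names are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct N3 { double x, y, z; };

// Uniform random unit direction in the hemisphere about `normal`.
N3 randomHemisphereDir(N3 normal) {
    double x, y, z, len2;
    // Rejection-sample a point inside the unit sphere (and away from 0).
    do {
        x = 2.0 * rand() / (double)RAND_MAX - 1.0;
        y = 2.0 * rand() / (double)RAND_MAX - 1.0;
        z = 2.0 * rand() / (double)RAND_MAX - 1.0;
        len2 = x * x + y * y + z * z;
    } while (len2 < 1e-12 || len2 > 1.0);
    double len = std::sqrt(len2);
    x /= len; y /= len; z /= len;
    // Flip into the hemisphere on the normal's side of the surface.
    if (x * normal.x + y * normal.y + z * normal.z < 0.0) {
        x = -x; y = -y; z = -z;
    }
    return {x, y, z};
}
```

Shooting N such rays per intersection point and whitening the point by the fraction that escape (within the distance limit) gives the occlusion estimate; 100 versus 512 samples trades noise for render time.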
Added Parser Support
sphere x y z r indexOfRefraction glossiness
tri indexOfRefraction p1 p2 p3...
occlude
lenssize dbl
area x y z width height up r g b