Part 5: HDR Imaging & Advanced Lighting
For the fifth part of the ray tracer, the following features are implemented:
1. HDR Imaging and Tonemapping
Until now, rendered scenes could only be saved in PNG format. The RGB values produced by shading calculations are floating-point and are not limited to any range; to save them as PNG, they were clamped to the 0-255 range and converted to integers, which removes a considerable amount of detail from the image. With HDR imaging, RGB values are no longer clamped: they are kept as floating-point values and the scenes are saved in EXR format. The RGB values obtained from ray tracing can be used directly for HDR images, but they have to be tonemapped before being saved as PNG. For tonemapping, Reinhard's photographic tonemapping (the global operator) is implemented.
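The global operator can be sketched as follows. This is a minimal illustration, not the exact implementation: the `Pixel` struct, the key value of 0.18, and the final clamp are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative pixel type for the sketch below.
struct Pixel { float r, g, b; };

// Relative luminance of a linear-RGB pixel (Rec. 709 weights).
float luminance(const Pixel& p) {
    return 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
}

// Reinhard's global photographic operator: scale each luminance by
// key / logAverage, then compress with L / (1 + L).
std::vector<Pixel> tonemapReinhard(const std::vector<Pixel>& hdr, float key = 0.18f) {
    const float eps = 1e-6f;

    // Log-average (geometric mean) luminance of the whole image.
    double logSum = 0.0;
    for (const Pixel& p : hdr)
        logSum += std::log(eps + luminance(p));
    float logAvg = std::exp(float(logSum / hdr.size()));

    std::vector<Pixel> ldr(hdr.size());
    for (size_t i = 0; i < hdr.size(); ++i) {
        float L  = luminance(hdr[i]);
        float Ls = key / logAvg * L;     // scaled luminance
        float Ld = Ls / (1.0f + Ls);     // displayable luminance in [0, 1)
        float s  = (L > 0.0f) ? Ld / L : 0.0f;
        ldr[i] = { std::min(1.0f, hdr[i].r * s),
                   std::min(1.0f, hdr[i].g * s),
                   std::min(1.0f, hdr[i].b * s) };
    }
    return ldr;
}
```

Since the operator maps every luminance into [0, 1), the result can be scaled to 0-255 and written as PNG without losing the relative brightness ordering of the HDR data.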
With HDR imaging support, the ray tracer is also able to use HDR texture images in EXR format. tinyexr is used for reading HDR images, while OpenCV is used for writing them.
2. Advanced Lighting
Until now, the ray tracer only supported point and area lights (area lights were implemented back in Part 3). In this version, three new light types are added on top of them. To support multiple types of lights, I restructured my light implementation around a base class called Light, from which the PointLight, AreaLight, DirectionalLight, SpotLight and SphericalDirectionalLight classes are derived. The Light class has three virtual methods: one for calculating wi, one for calculating the distance between the light and the hit point, and one for calculating the received irradiance.
2.1 Directional Light
Directional lights are defined by their Radiance and Direction. They provide illumination from a single direction and can be used to simulate sunlight in the scenes. For the shadow check, light coming from a directional light is assumed to originate at infinity.
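The interface described above, with the directional light as the simplest derived class, can be sketched as follows. The `Vec3` type and the exact method names are illustrative assumptions, not the author's exact code.

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Minimal vector type for the sketch.
struct Vec3 {
    float x, y, z;
    Vec3 operator-() const { return {-x, -y, -z}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 normalized() const { float l = length(); return {x / l, y / l, z / l}; }
};

// Base class with the three virtual methods described above.
class Light {
public:
    virtual ~Light() = default;
    virtual Vec3  computeWi(const Vec3& hitPoint) const = 0;       // direction to light
    virtual float computeDistance(const Vec3& hitPoint) const = 0; // for shadow rays
    virtual Vec3  computeIrradiance(const Vec3& hitPoint) const = 0;
};

// Directional light: constant direction and radiance, infinitely far away.
class DirectionalLight : public Light {
public:
    DirectionalLight(const Vec3& direction, const Vec3& radiance)
        : direction(direction), radiance(radiance) {}
    Vec3 computeWi(const Vec3&) const override {
        return (-direction).normalized();  // wi points toward the light
    }
    float computeDistance(const Vec3&) const override {
        return std::numeric_limits<float>::infinity(); // light comes from infinity
    }
    Vec3 computeIrradiance(const Vec3&) const override {
        return radiance;                   // no distance falloff
    }
private:
    Vec3 direction, radiance;
};
```

With this interface the shading loop can treat every light uniformly through a `Light*`, which is the point of the polymorphic refactoring.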
2.2 Spot Light
While point lights and area lights provide illumination in every direction, spot lights illuminate only within a defined cone. They are defined by their Direction, CoverageAngle and FalloffAngle. Inside the falloff angle, the light illuminates at full power. Between the falloff angle and the coverage angle, it still provides illumination, but with its radiance scaled down by a falloff factor calculated from these angles. Outside the coverage angle, the light does not reach at all.
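The falloff factor can be sketched as a small function. The half-angle convention (treating the scene values as full cone angles) and the quartic exponent are common conventions and assumptions here, not necessarily the author's exact formula.

```cpp
#include <cassert>
#include <cmath>

// Falloff factor sketch for a spot light. cosTheta is the cosine of the angle
// between the spot direction and the direction from the light to the hit
// point; coverageDeg and falloffDeg are full cone angles in degrees.
float spotFalloff(float cosTheta, float coverageDeg, float falloffDeg) {
    const float pi = 3.14159265358979f;
    float cosHalfCoverage = std::cos(coverageDeg * 0.5f * pi / 180.0f);
    float cosHalfFalloff  = std::cos(falloffDeg  * 0.5f * pi / 180.0f);
    if (cosTheta >= cosHalfFalloff)  return 1.0f; // inside falloff: full power
    if (cosTheta <= cosHalfCoverage) return 0.0f; // outside coverage: no light
    // Smooth transition between the two cones (quartic falloff is an assumption).
    float s = (cosTheta - cosHalfCoverage) / (cosHalfFalloff - cosHalfCoverage);
    return s * s * s * s;
}
```

The received irradiance is then the point-light irradiance multiplied by this factor.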
2.3 Spherical Directional (Environment) Light
Environment lights differ from the other light types: they are defined only by an HDR image called the environment map. For the shadow check, a direction vector is sampled in the upper hemisphere with uniform random rejection sampling, assuming the scene is surrounded by a sphere. Based on this direction, the (u,v) texture coordinates are calculated and the color value is fetched from the environment map at (u,v).
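The sampling and the lookup can be sketched as follows, assuming a latitude-longitude environment map. The `Vec3` type and the exact (u,v) mapping convention are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Rejection sampling: draw points in [-1,1]^3, reject those outside the unit
// sphere or below the surface, then normalize the accepted direction. This
// yields a uniformly distributed direction in the hemisphere around n.
Vec3 sampleHemisphere(const Vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    while (true) {
        Vec3 d{u(rng), u(rng), u(rng)};
        float len2 = dot(d, d);
        if (len2 > 1.0f || len2 < 1e-8f) continue; // outside the unit sphere
        if (dot(d, n) <= 0.0f) continue;           // below the surface
        float inv = 1.0f / std::sqrt(len2);
        return {d.x * inv, d.y * inv, d.z * inv};
    }
}

// Latitude-longitude mapping from a unit direction to (u,v) in [0,1]^2
// (one common convention; the axis choice is an assumption).
void directionToUV(const Vec3& d, float& uCoord, float& vCoord) {
    const float pi = 3.14159265358979f;
    uCoord = (1.0f + std::atan2(d.x, -d.z) / pi) * 0.5f;
    vCoord = std::acos(d.y) / pi;
}
```

The color fetched from the environment map at the computed (u,v) is then used as the radiance arriving from the sampled direction.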
Problems & Fixes
One of the first problems I encountered was a race condition related to multi-threading in area lights. When I switched to the polymorphic light implementation, the wi vector and the sampled point on the area light were calculated when the related method was called and then stored as class members. However, due to my multi-threading implementation, threads were reading the wi vector and the sampled point calculated by other threads. This resulted in an interesting image, which can be seen below. As soon as I realized the problem, I marked those members as thread_local to ensure that each thread works on its own copy of those variables.
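The fix can be sketched as follows; the class and member names are illustrative, not the author's exact code.

```cpp
#include <cassert>
#include <thread>

struct Vec3 { float x, y, z; };

// Sketch of the race-condition fix: the cached sample on the area light is
// marked thread_local, so each rendering thread gets its own copy instead of
// sharing one class member with the other threads.
struct AreaLightCache {
    // Before the fix these were plain (shared) members.
    static thread_local Vec3 sampledPoint;
    static thread_local Vec3 wi;
};
thread_local Vec3 AreaLightCache::sampledPoint{0.0f, 0.0f, 0.0f};
thread_local Vec3 AreaLightCache::wi{0.0f, 0.0f, 0.0f};

// Each worker writes and then reads only its own copy; no locking is needed.
bool workerSeesOwnCopy(float value) {
    AreaLightCache::sampledPoint = {value, 0.0f, 0.0f};
    return AreaLightCache::sampledPoint.x == value;
}
```

With static storage this is straightforward; for non-static members, moving the cached values into per-thread state (or returning them by value) achieves the same isolation.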
When I started to implement spherical directional lights, the produced images at first turned out to be completely black no matter what I did. I quickly realized that the cosTheta values calculated for diffuse shading were all negative. Tracing this, I found out that I was returning -l as the wi vector instead of directly returning the computed l vector. Once this was corrected, the scene below was rendered.
At the beginning, for some reason, keeping the image texture of the environment light as a background texture made sense to me, but of course this idea was never going to work, since the texture image should be mapped spherically, not used as a flat image. After I changed this implementation, the image below was produced. While obtaining the background color from the environment map, I was passing a wrong argument to the method instead of the direction vector of the current ray. After correcting this little mistake, the image turned out as expected. It should also be noted that these images are rendered with a single sample, just to inspect the background results quickly; that is why the man figure looks very noisy in them.
In the previous part of the ray tracer, I did not have enough time to implement some of the concepts required by the VeachAjar scene. Luckily, in this part I managed to implement them as well. However, while rendering this scene, I noticed that the dielectric one of the teapot objects was being rendered as a black object. I remembered that last time I had started to translate the origins of the shadow rays by epsilon along the surface normal instead of along the shadow ray direction. While making this change, I had accidentally applied it to the reflected and refracted rays as well. Once I handled this, the dielectric objects also rendered correctly.
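The two offsets can be sketched as follows; the `Vec3` type, helper names and the epsilon value are illustrative.

```cpp
#include <cassert>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

const float eps = 1e-4f;

// Shadow rays: nudge the origin along the surface normal to avoid
// self-intersection with the surface that was just hit.
Vec3 shadowRayOrigin(const Vec3& hit, const Vec3& normal) {
    return hit + normal * eps;
}

// Reflected/refracted rays: nudge along the ray's own direction. Offsetting a
// refracted ray along the normal would push its origin back to the wrong side
// of the surface, which is what made the dielectric objects render black.
Vec3 secondaryRayOrigin(const Vec3& hit, const Vec3& dir) {
    return hit + dir * eps;
}
```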
Lastly, the most time-consuming problem I encountered in this part was, surprisingly, about saving EXR images :). The rendered scenes looked fine when saved as PNG, but when I saved them as EXR, the images looked just like white noise. I was using tinyexr to deal with EXR images; I was able to read images without a problem, but no matter what I did, I could not fix the problem with saving them. At this point, I decided to try another library and added OpenCV to the project. With OpenCV, I was finally able to save these HDR images in EXR format easily. I am still not sure whether the problem was in my implementation or in tinyexr being used with Visual Studio 2017. I used the same routine provided in their GitHub repository and checked it many times, but I could not find a single thing wrong.
Resulting Images
All the scenes rendered in this part of the ray tracer can be found below. The timing data were obtained while running with 8 threads.
cube_point.png: 322 msecs 575 usecs
cube_point_hdr.png: 359 msecs 881 usecs
sphere_point_hdr_texture.png: 107 msecs 818 usecs
cornellbox_area.png: 27 secs 921 msecs 323 usecs
cube_directional.png: 307 msecs 942 usecs
dragon_spot_light_msaa.png: 51 secs 716 msecs 113 usecs
head_env_light.png: 5 mins 9 secs 960 msecs 153 usecs
VeachAjar.png: 2 mins 0 secs 235 msecs 400 usecs