Posts

Denoising

  Code so Far: Here is the code for this section. Denoising: Last time I implemented a path tracer that used importance sampling for optimization. This week, to wrap up this project, I added a denoiser. When it comes to denoising there are a few approaches, described here: filtering techniques, machine learning techniques, and sampling techniques. Filtering techniques are cheap but blur the image. Machine learning techniques use autoencoders to reconstruct clean images from noisy ones. Sampling techniques use spatial and temporal data to denoise the image. Because I am not denoising the image every frame, I wouldn't be able to take advantage of spatio-temporal solutions like NVIDIA's Real-Time Denoiser (NRD). NRD would otherwise have been a better choice than machine learning due to its performance. For my program I opted for a machine learning denoiser. There were multiple to choose from; the most popular were NVIDIA's OptiX and Intel's Open Image Denoise (OIDN).

Switch to PBR Cont.

  Code so Far: Here is the code for this section. Switch to Physically Based Rendering (PBR) Cont.: Last time I began implementing the Monte Carlo estimator on the ray tracer but did not complete it. This is what the results looked like last week. After realizing my seed was blatantly wrong for my random function, I was quickly able to achieve results closer to what I was expecting: The image above uses 50,000 samples per pixel with 2 bounces. This runs at 1 fps. With 100,000 samples per pixel I am able to get an image with almost no noise.  100,000 Samples Above is the render with 100,000 samples. Even with this many samples I am still getting noise. Additionally, if you observe the noise in both the 50,000-sample and 100,000-sample renders, it is not uniform. This indicates that either my function that maps a 2D sample onto a sphere is incorrect, or the generation of the random 2D sample itself is incorrect. After debugging this I was able to observe it was the random seed again an
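For reference, the standard way to map a 2D uniform sample onto a sphere so that directions come out uniformly distributed looks like the sketch below (names are mine, not necessarily the project's; a biased mapping here produces exactly the kind of non-uniform noise described above):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map two uniform samples u1, u2 in [0,1) to a uniformly distributed
// direction on the unit sphere: z is uniform in [-1, 1] and phi is
// uniform in [0, 2*pi), which together give equal probability per
// unit of solid angle.
Vec3 uniformSampleSphere(float u1, float u2) {
    float z = 1.0f - 2.0f * u1;
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * 3.14159265358979f * u2;
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

Every returned direction should have unit length; if the samples feeding u1 and u2 are correlated (for example, from a bad per-pixel seed), the output directions cluster and the noise stops being uniform.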

Switch to PBR

  Code so Far: Here is the code for this section. Switch to Physically Based Rendering (PBR): The scene is set up with basic lighting, and it is running pretty fast (thousands of fps), but we aren't using the power of raytracing for anything other than generating shadows. To really show off my ray tracer I want to support global illumination. What I wanted to focus on this week was switching over to a more realistic model of how light works. For this, I needed to implement a model that estimates the rendering equation. For reference, the rendering equation is L_o(p, ω_o) = L_e(p, ω_o) + ∫_Ω f(p, ω_o, ω_i) L_i(p, ω_i) |cos θ_i| dω_i, and the part under the integral is what needs to be estimated. Monte Carlo: My first attempt at estimating it will be to implement a Monte Carlo estimator, F_n. What the Monte Carlo algorithm will do is: for every surface position hit by a ray, fire a new ray in a random direction. This process of firing rays continues recursively until a maximum number of hits is reached. I will perform this process multiple times
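The Monte Carlo estimator being referred to is F_n = (1/n) Σ f(X_i)/p(X_i). As a sanity check outside the renderer, here is a minimal sketch (my own illustrative names) that uses it to estimate ∫₀¹ x² dx = 1/3 with uniformly distributed samples, where p(x) = 1:

```cpp
#include <random>

// Monte Carlo estimator F_n = (1/n) * sum of f(X_i) / p(X_i).
// With X_i uniform on [0,1), the pdf p(x) = 1, so the estimator
// reduces to averaging f at n random points.
double estimateIntegral(double (*f)(double), int n, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        sum += f(dist(rng));  // p(x) == 1, so no division needed
    }
    return sum / n;
}
```

In the path tracer the same idea applies, except f is the integrand of the rendering equation evaluated along a randomly fired ray, and p is the pdf of the direction-sampling scheme.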

Simple Lighting

Code so Far: Here is the code for this section. Rendering a Scene: Now that I can render a scene and have the raytracing pipeline set up, I want to make the raytraced scene look better. In rasterization there is diffuse lighting, specular lighting, etc., but how do you do proper shading for a ray tracer? To answer this question I picked up the book Physically Based Rendering, Fourth Edition, by Matt Pharr et al. This book goes in depth into the theory and implementation of photorealistic renderers built on raytracing. A solid understanding of how light actually works (at least at a macro level) is necessary in order to replicate its behavior. When light strikes an object, it can be reflected, refracted, or scattered in any direction due to subsurface scattering. The probability of each direction can be described by a probability distribution called the bidirectional scattering distribution function (BSDF).  Surface scattering example (Matt Pharr) In the e
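The simplest BSDF is the Lambertian (ideal diffuse) BRDF, which scatters light equally in all directions: f = albedo / π. A quick sketch (illustrative names, not the project's code) showing why the 1/π factor is there, via the energy-conservation check:

```cpp
#include <cmath>

constexpr float kPi = 3.14159265358979f;

// Ideal diffuse (Lambertian) BRDF: constant in direction, so the
// surface looks equally bright from every viewing angle.
float lambertianBrdf(float albedo) {
    return albedo / kPi;
}

// Energy-conservation check: integrating f * cos(theta) over the
// hemisphere of outgoing directions must give back exactly the albedo.
// The 1/pi in the BRDF cancels the pi that the hemisphere integral
// of cos(theta) produces.
float integrateHemisphere(float albedo, int steps) {
    float sum = 0.0f;
    float dTheta = (kPi / 2.0f) / steps;
    for (int i = 0; i < steps; ++i) {
        float theta = (static_cast<float>(i) + 0.5f) * dTheta;
        // Solid-angle element sin(theta) dtheta dphi, with the 2*pi
        // from the phi integral folded in.
        sum += lambertianBrdf(albedo) * std::cos(theta)
             * std::sin(theta) * dTheta * 2.0f * kPi;
    }
    return sum;
}
```

Real BSDFs (glossy, refractive, subsurface) vary with the incoming and outgoing directions, but they obey the same conservation constraint.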

Rendering a Scene

Code so Far: Here is the code for this section: Rendering a Scene: Up to now I have just been generating a cube, which is good and all, but what I really want is to render a scene which can show off the ray tracer. I also want to render a scene that will allow me to test the accuracy of my renderer. To this end, I will be rendering the Cornell box. The Cornell box is a commonly used 3D test model. It comes in different configurations, but the models have corresponding photographs associated with them, which allows us to compare the accuracy of our render against real life. Sample Cornell Box: The image above is one such example of a Cornell box. Cornell boxes usually contain a red wall on the left, a green wall on the right, and white walls on the back, floor, and ceiling. Objects are placed within the box and an area light illuminates the box from the top. Cornell Box in Blender: For my project I will be using the model found here. The model is shown above. The corresponding object hierarch

Enabling Raytracing - Part 2

Code so Far: Here is the code for this section: Enabling Raytracing - Part 2: This week I added the shader binding table for raytracing. The shader binding table is where data is bound to root parameters. This is in contrast to rasterization, where parameters are bound through command list calls such as SetGraphicsRootConstantBufferView. Once that was done I updated the render loop to support raytracing. This included transitioning the output buffer to unordered access, setting the pipeline state, and calling the DispatchRays method using a D3D12_DISPATCH_RAYS_DESC, which contains info about how many rays to generate and the shader binding table. For my demo, one ray is generated for each pixel. After calling DispatchRays, our output image is written to. I then copy the image to the render target. The shaders were also updated to actually generate rays and output the color of the cube. The RayGen shader was updated to call TraceRay. The ClosestHit shader was modified to ca
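One fiddly part of building a shader binding table is the alignment math: each shader record is a shader identifier plus its root arguments, rounded up to the record alignment, and each table is rounded up to the table alignment. Here is a small sketch of that calculation with the d3d12.h constants hard-coded so it is self-contained (the values below match D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES, D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT, and D3D12_RAYTRACING_SHADER_TABLE_BYTE_ALIGNMENT; the helper names are mine):

```cpp
#include <cstdint>

// Alignment rules from d3d12.h, hard-coded so this sketch compiles
// without the DirectX headers:
constexpr uint32_t kShaderIdentifierSize = 32;  // D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES
constexpr uint32_t kShaderRecordAlignment = 32; // D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT
constexpr uint32_t kShaderTableAlignment = 64;  // D3D12_RAYTRACING_SHADER_TABLE_BYTE_ALIGNMENT

// Round `value` up to the next multiple of `alignment` (power of two).
constexpr uint32_t alignUp(uint32_t value, uint32_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}

// Size of one shader record: identifier + root arguments, padded
// to the record alignment.
uint32_t shaderRecordSize(uint32_t rootArgumentBytes) {
    return alignUp(kShaderIdentifierSize + rootArgumentBytes,
                   kShaderRecordAlignment);
}

// Size of a whole table (e.g. all hit-group records), padded to the
// table alignment, which the D3D12_DISPATCH_RAYS_DESC ranges require.
uint32_t shaderTableSize(uint32_t recordSize, uint32_t recordCount) {
    return alignUp(recordSize * recordCount, kShaderTableAlignment);
}
```

Getting these sizes wrong tends to produce device removals or silently wrong root arguments, so it is worth computing them in one place.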

Enabling Raytracing - Part 1

Code so Far: Here is the code after this section. Enabling Raytracing - Part 1: Now that we have created a basic render using rasterization, the next goal is to create the same render using ray tracing, along with a mechanism to switch between the two rendering modes. For this I followed the NVIDIA raytracing tutorial, which is split into two parts. In order to switch between the two modes I needed to bind keyboard events from the Window to the Game class. Just like how WM_PAINT is currently subscribed to, I added a subscription callback mechanism for the WM_KEYDOWN and WM_KEYUP window events, which the Game class now subscribes to.  On the space bar, the window is configured to switch between raytracing and rasterization. For now, since raytracing is not implemented, pressing the space bar toggles between rasterization and rendering nothing. This was updated within the render loop of the game. After making these updates, this is the current state of the program: Afte
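A subscription mechanism like the one described can be sketched with std::function callbacks; all class and method names below are hypothetical stand-ins for the project's actual Window and Game types:

```cpp
#include <functional>
#include <vector>

// Minimal sketch of a key-event subscription mechanism: the window keeps
// a list of callbacks and invokes them when a WM_KEYDOWN-style message
// arrives from the message loop.
class Window {
public:
    using KeyCallback = std::function<void(int keyCode)>;

    void subscribeKeyDown(KeyCallback cb) {
        keyDownCallbacks_.push_back(std::move(cb));
    }

    // Would be called from the Win32 message loop on WM_KEYDOWN.
    void onKeyDown(int keyCode) {
        for (auto& cb : keyDownCallbacks_) cb(keyCode);
    }

private:
    std::vector<KeyCallback> keyDownCallbacks_;
};

class Game {
public:
    explicit Game(Window& window) {
        // VK_SPACE is 0x20 in the Win32 API.
        window.subscribeKeyDown([this](int key) {
            if (key == 0x20) useRaytracing_ = !useRaytracing_;
        });
    }
    bool useRaytracing() const { return useRaytracing_; }

private:
    bool useRaytracing_ = false;
};
```

The render loop then checks useRaytracing() each frame to pick the pipeline, which keeps the Window class free of any game-specific logic.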