Enabling Raytracing - Part 2

Code so Far

Here is the code for this section:

Enabling Raytracing - Part 2

This week I added the shader binding table for raytracing. The shader binding table is where data is bound to the root parameters of the raytracing shaders. This differs from rasterization, where parameters are bound through command list calls such as SetGraphicsRootConstantBufferView.
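As a rough sketch of what filling the table can look like (the export names, the m_sbtBuffer upload resource, and localRootArgsSize are my own illustrative names, not necessarily the demo's):

```cpp
// Sketch: copy one shader identifier per record into the shader binding table.
// m_rtStateObject, m_sbtBuffer, and localRootArgsSize are assumed to exist already.
Microsoft::WRL::ComPtr<ID3D12StateObjectProperties> props;
m_rtStateObject.As(&props);  // query the raytracing pipeline for its shader identifiers

void* rayGenId   = props->GetShaderIdentifier(L"RayGen");
void* missId     = props->GetShaderIdentifier(L"Miss");
void* hitGroupId = props->GetShaderIdentifier(L"HitGroup");

// Each record = shader identifier + its local root arguments, padded to the record alignment.
const UINT idSize     = D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES;
const UINT alignment  = D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT;
const UINT recordSize = (idSize + localRootArgsSize + alignment - 1) & ~(alignment - 1);

uint8_t* pData = nullptr;
m_sbtBuffer->Map(0, nullptr, reinterpret_cast<void**>(&pData));
memcpy(pData,                  rayGenId,   idSize);
memcpy(pData + recordSize,     missId,     idSize);
memcpy(pData + recordSize * 2, hitGroupId, idSize);
// The local root arguments (GPU virtual addresses / descriptor handles) follow each identifier.
m_sbtBuffer->Unmap(0, nullptr);
```

The start addresses and strides of these records are what the D3D12_DISPATCH_RAYS_DESC described below points at.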

Once that was done, I updated the render loop to support raytracing. This included transitioning the output buffer to unordered access, setting the pipeline state, and calling the DispatchRays method with a D3D12_DISPATCH_RAYS_DESC, which describes how many rays to generate and where the shader binding table lives. For my demo, one ray is generated for each pixel. After DispatchRays completes, the output image has been written, and I copy it to the render target.
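A minimal sketch of that portion of the render loop, assuming an ID3D12GraphicsCommandList4, an m_outputResource UAV texture, and the record layout from the shader binding table above (all names illustrative):

```cpp
// Sketch: raytracing path of the render loop.
auto toUav = CD3DX12_RESOURCE_BARRIER::Transition(
    m_outputResource.Get(),
    D3D12_RESOURCE_STATE_COPY_SOURCE,
    D3D12_RESOURCE_STATE_UNORDERED_ACCESS);
commandList->ResourceBarrier(1, &toUav);

commandList->SetPipelineState1(m_rtStateObject.Get());  // bind the raytracing pipeline state object

D3D12_DISPATCH_RAYS_DESC desc = {};
D3D12_GPU_VIRTUAL_ADDRESS sbtStart = m_sbtBuffer->GetGPUVirtualAddress();
desc.RayGenerationShaderRecord.StartAddress = sbtStart;
desc.RayGenerationShaderRecord.SizeInBytes  = recordSize;
desc.MissShaderTable.StartAddress   = sbtStart + recordSize;
desc.MissShaderTable.SizeInBytes    = recordSize;
desc.MissShaderTable.StrideInBytes  = recordSize;
desc.HitGroupTable.StartAddress     = sbtStart + 2 * recordSize;
desc.HitGroupTable.SizeInBytes      = recordSize;
desc.HitGroupTable.StrideInBytes    = recordSize;
desc.Width  = renderWidth;   // one ray per pixel
desc.Height = renderHeight;
desc.Depth  = 1;
commandList->DispatchRays(&desc);

// Copy the raytraced image into the back buffer (which must itself be in COPY_DEST, omitted here).
auto toCopySrc = CD3DX12_RESOURCE_BARRIER::Transition(
    m_outputResource.Get(),
    D3D12_RESOURCE_STATE_UNORDERED_ACCESS,
    D3D12_RESOURCE_STATE_COPY_SOURCE);
commandList->ResourceBarrier(1, &toCopySrc);
commandList->CopyResource(renderTarget.Get(), m_outputResource.Get());
```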

The shaders were also updated to actually generate rays and output the color of the cube. The RayGen shader was updated to call TraceRay. The ClosestHit shader was modified to calculate the color of the hit from the vertex colors of the triangle it collided with and the barycentric coordinates passed to it. Additionally, so the Miss shader stays correct if the clear color changes on the CPU side, I removed the hard-coded clear color and replaced it with a Constant Buffer View (CBV) that points to a clear color buffer resource.
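A hedged HLSL sketch of what the ClosestHit and Miss shaders can look like; the payload/attribute structs, register assignments, and the assumption that the index and vertex color buffers are visible as SRVs are mine, not necessarily the demo's:

```hlsl
// Sketch: structs, register slots, and SRV layout are assumptions.
struct HitInfo    { float4 color; };
struct Attributes { float2 bary; };            // barycentric coordinates from the intersection

StructuredBuffer<uint>   Indices      : register(t1);  // assumes the index buffer is bound as an SRV
StructuredBuffer<float3> VertexColors : register(t2);  // assumes per-vertex colors are bound as an SRV

cbuffer ClearColor : register(b0)               // CBV to the clear color buffer resource
{
    float4 clearColor;
};

[shader("closesthit")]
void ClosestHit(inout HitInfo payload, Attributes attrib)
{
    // Rebuild the full barycentric weights and interpolate the three vertex colors.
    float3 bary = float3(1.0f - attrib.bary.x - attrib.bary.y, attrib.bary.x, attrib.bary.y);
    uint i0 = Indices[PrimitiveIndex() * 3 + 0];
    uint i1 = Indices[PrimitiveIndex() * 3 + 1];
    uint i2 = Indices[PrimitiveIndex() * 3 + 2];
    float3 color = VertexColors[i0] * bary.x + VertexColors[i1] * bary.y + VertexColors[i2] * bary.z;
    payload.color = float4(color, 1.0f);
}

[shader("miss")]
void Miss(inout HitInfo payload)
{
    payload.color = clearColor;  // stays in sync with whatever the CPU clears the render target to
}
```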

After all the changes above, these are the current root signatures for all the raytracing shaders:


[Figure: Raytracing root signatures]

For consistency, I also updated the rasterization root signature's MVP to a CBV. This is the current root signature for rasterization:



[Figure: Rasterization root signature]
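For reference, a minimal sketch of declaring the MVP as a CBV root parameter, using the d3dx12.h helpers; the register assignment and variable names are illustrative:

```cpp
// Sketch: root signature with a single CBV root parameter for the MVP matrix.
CD3DX12_ROOT_PARAMETER1 rootParams[1];
rootParams[0].InitAsConstantBufferView(0);  // MVP constant buffer at register b0

CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSigDesc;
rootSigDesc.Init_1_1(_countof(rootParams), rootParams, 0, nullptr,
                     D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

Microsoft::WRL::ComPtr<ID3DBlob> blob, error;
D3DX12SerializeVersionedRootSignature(&rootSigDesc, D3D_ROOT_SIGNATURE_VERSION_1_1, &blob, &error);
device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                            IID_PPV_ARGS(&m_rasterRootSignature));

// Per frame the matrix is now bound by address instead of being pushed as root constants:
commandList->SetGraphicsRootConstantBufferView(0, m_mvpBuffer->GetGPUVirtualAddress());
```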

There were still differences between the rasterization and raytracing outputs for two reasons: the rasterization path was using a perspective projection camera, while the raytracing path was generating an orthographic view, and the bottom level acceleration structure did not support index buffers. Both of these were fixed.
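For the index buffer change, a short sketch of what the geometry description for the bottom level acceleration structure can look like once it references the index buffer; buffer names, formats, and counts are assumptions for an indexed cube mesh:

```cpp
// Sketch: triangle geometry description for the BLAS with an index buffer attached.
D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
geometry.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;

geometry.Triangles.VertexBuffer.StartAddress  = m_vertexBuffer->GetGPUVirtualAddress();
geometry.Triangles.VertexBuffer.StrideInBytes = sizeof(Vertex);
geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
geometry.Triangles.VertexCount  = vertexCount;

// The new part: reference the index buffer instead of leaving it null.
geometry.Triangles.IndexBuffer = m_indexBuffer->GetGPUVirtualAddress();
geometry.Triangles.IndexFormat = DXGI_FORMAT_R16_UINT;
geometry.Triangles.IndexCount  = indexCount;
```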

To convert the raytracing path to a perspective projection, the RayGen shader needed the inverse camera (view) and inverse projection matrices. I updated the root signature and bindings to pass these matrices to the RayGen shader. The shader then computes the ray by setting its origin and direction in world space. The origin is found by multiplying the camera origin (0, 0, 0, 1) by the inverse camera matrix. The direction is computed by taking the direction of the ray in projection space, (ndc.x, ndc.y, 1, 0), and multiplying it by the inverse projection matrix and then the inverse camera matrix.
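A hedged HLSL sketch of that ray setup, following the math described above; the cbuffer layout, matrix multiplication order, and names are assumptions:

```hlsl
// Sketch of the perspective ray generation math.
cbuffer Camera : register(b0)
{
    float4x4 viewInverse;        // inverse of the camera (view) matrix
    float4x4 projectionInverse;  // inverse of the projection matrix
};

[shader("raygeneration")]
void RayGen()
{
    // Pixel center mapped to normalized device coordinates in [-1, 1].
    float2 ndc = ((DispatchRaysIndex().xy + 0.5f) / DispatchRaysDimensions().xy) * 2.0f - 1.0f;
    ndc.y = -ndc.y;  // flip so +y points up

    RayDesc ray;
    // Camera origin (0, 0, 0, 1) transformed into world space by the inverse view matrix.
    ray.Origin = mul(viewInverse, float4(0, 0, 0, 1)).xyz;
    // Direction (ndc.x, ndc.y, 1, 0) un-projected, then rotated into world space.
    float4 target = mul(projectionInverse, float4(ndc.x, ndc.y, 1, 0));
    ray.Direction  = normalize(mul(viewInverse, float4(target.xyz, 0)).xyz);
    ray.TMin = 0.001f;
    ray.TMax = 10000.0f;
    // ...TraceRay(...) with this RayDesc follows here.
}
```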

Now we have an identical render for rasterization and raytracing!

However, the cube isn't moving, so the image isn't visually appealing. Also, even if we got it moving, the acceleration structure containing the cube's transform would not update with the current code. Thus I added support for updating the top level acceleration structure in the render function when we are rendering with raytracing. I also updated the code so that the cube rotates on the x and y axes.
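A minimal sketch of refitting the TLAS each frame, assuming it was originally built with ALLOW_UPDATE and that m_tlas, m_tlasScratch, and m_instanceDescsBuffer (holding the updated instance transform) already exist; names are illustrative:

```cpp
// Sketch: refit the TLAS with the cube's new transform each frame.
D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC buildDesc = {};
buildDesc.Inputs.Type          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_TOP_LEVEL;
buildDesc.Inputs.Flags         = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
                                 D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
buildDesc.Inputs.DescsLayout   = D3D12_ELEMENTS_LAYOUT_ARRAY;
buildDesc.Inputs.NumDescs      = 1;  // one instance: the cube
buildDesc.Inputs.InstanceDescs = m_instanceDescsBuffer->GetGPUVirtualAddress();

// Refit in place: the source and destination are the existing TLAS.
buildDesc.SourceAccelerationStructureData  = m_tlas->GetGPUVirtualAddress();
buildDesc.DestAccelerationStructureData    = m_tlas->GetGPUVirtualAddress();
buildDesc.ScratchAccelerationStructureData = m_tlasScratch->GetGPUVirtualAddress();

commandList->BuildRaytracingAccelerationStructure(&buildDesc, 0, nullptr);

// Ensure the update finishes before DispatchRays reads the TLAS.
auto uavBarrier = CD3DX12_RESOURCE_BARRIER::UAV(m_tlas.Get());
commandList->ResourceBarrier(1, &uavBarrier);
```

A refit (PERFORM_UPDATE) is cheaper than a full rebuild, which suits a single rotating cube. This is the final result that is generated: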


The output for rasterization looks identical to raytracing. If it weren't for the debugging print statements in the console, you wouldn't be able to tell which rendering mode you are in. Eventually I want this to change. Raytracing is not currently computing the color based on textures, normal maps, and light sources. But before I start implementing more realistic lighting with raytracing, I want to render a whole scene, so that when I get to more realistic lighting the results are noticeable. Rendering a scene will also give me insight into whether the lighting looks realistic and whether my algorithms are properly creating the effects I want.

Up Next

In the next post, I will start rendering a scene.

