Rendering a Scene
Up to now I have just been generating a cube, which is good and all, but what I really want is to render a scene that can show off the ray tracer. I also want to render a scene that will let me test the accuracy of my renders. To this end, I will be rendering the Cornell box. The Cornell box is a commonly used 3D test model. It comes in different configurations, but the models have corresponding photographs associated with them, which allows us to compare the accuracy of our renders to real life.
[Image: Sample Cornell Box]
The image above is one such example of a Cornell box. Cornell boxes usually contain a red wall on the left, a green wall on the right, and white walls on the back, floor, and ceiling. Objects are placed within the box, and an area light illuminates it from the top.
[Image: Cornell Box in Blender]
For my project I will be using the model found here. The model is shown above, and its corresponding object hierarchy in Blender looks like this:
[Image: Blender Object Hierarchy of a Cornell Box]
My objective is to be able to import any model, including models that have a hierarchy, textures, or lights. I want to load all of this information into my application and have it render without any manual input.
To begin, I first need a tool that can import model files. I don't want to be tightly coupled to this tool, so I will also need to create an abstraction layer on top of it. I need to create data structures for containing mesh, texture, and light information, and I then need to use that information to make DirectX 12 calls to render the meshes. For organization's sake, I also want to decouple the rendering of the meshes from the mesh data itself. Here is an abstract picture of what this looks like:
[Image: Abstract Asset Importing Architecture]
The tool I decided to use for asset importing is Assimp. Assimp is a widely known and used API for loading various 3D model file formats. It supports loading model hierarchies as well as mesh and material instancing. Below is a simplified illustration of Assimp's class structure.
[Image: Assimp's class structure, from https://learnopengl.com/Model-Loading/Assimp]
A few additional things to note that I couldn't find a diagram for: lights are listed in the Scene structure, and certain nodes may contain no meshes because they actually represent lights. To determine whether a node is a light, you have to match the node's name against the lights in the scene. Nodes also have transformation matrices relative to their parent. Lastly, materials have a lot of properties that can be queried, such as diffuse color or roughness. Because of the non-standard nature of materials, there is no comprehensive list of all property names.
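To make this concrete, here is a minimal sketch of how this information can be pulled out of Assimp. It is not the exact code from my project; names like ProcessNode and LoadScene are placeholders, but the Assimp calls themselves (ReadFile, AI_MATKEY_COLOR_DIFFUSE, and the node/light fields) are the real API:

```cpp
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <string>
#include <unordered_set>

void ProcessNode(const aiScene* scene, const aiNode* node,
                 const std::unordered_set<std::string>& lightNames)
{
    // A node with no meshes may actually be a light; the only way to
    // tell is to match its name against the lights stored on the scene.
    bool isLight = lightNames.count(node->mName.C_Str()) > 0;

    // Each node stores its transform relative to its parent.
    aiMatrix4x4 localTransform = node->mTransformation;

    for (unsigned int i = 0; i < node->mNumMeshes; ++i) {
        const aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        const aiMaterial* material = scene->mMaterials[mesh->mMaterialIndex];

        // Material properties are queried by key, e.g. the diffuse color.
        aiColor3D diffuse(1.0f, 1.0f, 1.0f);
        material->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse);
    }

    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        ProcessNode(scene, node->mChildren[i], lightNames);
}

void LoadScene(const char* path)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(path, aiProcess_Triangulate);
    if (!scene) return;

    // Lights live on the Scene structure, not on the nodes themselves.
    std::unordered_set<std::string> lightNames;
    for (unsigned int i = 0; i < scene->mNumLights; ++i)
        lightNames.insert(scene->mLights[i]->mName.C_Str());

    ProcessNode(scene, scene->mRootNode, lightNames);
}
```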
Given the nature of Assimp's structure and my requirements laid out above, this is the architecture I came up with:
[Image: Asset Importing Architecture]
The asset importer will create a Game Object hierarchy based on the data it receives from Assimp. For each node in Assimp, a Game Object will be created. Some Assimp nodes, such as lights, may not contain any geometry. If a node does contain geometry, a Mesh Renderer will be created and attached to the Game Object associated with that node. The Mesh Renderer holds references to the meshes that make up that node. Its responsibility is to contain all the logic for rendering that geometry using DirectX, and it encapsulates all the buffers used for rendering.
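As a rough sketch, the resulting structures might look something like this. The exact types are specific to my engine, so treat the names (GameObject, MeshRenderer, Mesh) as illustrative:

```cpp
#include <vector>
#include <memory>
#include <DirectXMath.h>
#include <d3d12.h>
#include <wrl/client.h>

// One imported mesh: the GPU buffers built from Assimp's data.
struct Mesh {
    Microsoft::WRL::ComPtr<ID3D12Resource> vertexBuffer;
    Microsoft::WRL::ComPtr<ID3D12Resource> indexBuffer;
    D3D12_VERTEX_BUFFER_VIEW vertexBufferView = {};
    D3D12_INDEX_BUFFER_VIEW indexBufferView = {};
    UINT indexCount = 0;
};

// Owns the DirectX rendering logic for the meshes of one node.
class MeshRenderer {
public:
    void Render(ID3D12GraphicsCommandList* cmdList);
private:
    std::vector<std::shared_ptr<Mesh>> m_meshes;
};

// One Game Object per Assimp node; geometry-less nodes get no renderer.
struct GameObject {
    DirectX::XMMATRIX localTransform = DirectX::XMMatrixIdentity();
    std::unique_ptr<MeshRenderer> meshRenderer; // null for lights, etc.
    std::vector<std::unique_ptr<GameObject>> children;
};
```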
Once the hierarchy is set up, how do we render it for rasterization and ray tracing?
For rasterization, the game's render loop iterates over all Mesh Renderer components. Each component grabs its vertex buffer views and index buffer views, and for each one makes a draw call.
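A minimal sketch of what that draw code can look like, using the hypothetical types above and assuming the pipeline state and root signature are already bound (the D3D12 calls themselves are the real API):

```cpp
void MeshRenderer::Render(ID3D12GraphicsCommandList* cmdList)
{
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    for (const auto& mesh : m_meshes) {
        // Bind this mesh's buffers and issue one draw call per mesh.
        cmdList->IASetVertexBuffers(0, 1, &mesh->vertexBufferView);
        cmdList->IASetIndexBuffer(&mesh->indexBufferView);
        cmdList->DrawIndexedInstanced(mesh->indexCount, 1, 0, 0, 0);
    }
}
```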
For raytracing, there is more to it in order to render multiple objects. The hit group we created in the shader binding table in the previous post had two constant buffer views: one for the vertices of the cube and one for its indices. Now that we are switching to rendering multiple objects, how can we pass in the vertices and indices of the other geometry in the scene? The answer is that we need to create multiple hit groups / shader records in the shader binding table. In fact, we need to create a hit group for each mesh.
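Each shader record in the binding table is the hit group's shader identifier followed by its local root arguments. Here is a sketch of writing one record, assuming the two buffer views are root descriptors (which are written into the record as GPU virtual addresses); WriteRecord and its parameters are placeholders, while GetShaderIdentifier and the size constant come from the D3D12 API:

```cpp
#include <d3d12.h>
#include <cstdint>
#include <cstring>

// Appends one shader record: identifier + local root arguments.
// 'dest' points into the mapped shader binding table buffer.
uint8_t* WriteRecord(uint8_t* dest,
                     ID3D12StateObjectProperties* props,
                     const wchar_t* hitGroupName,
                     D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer,
                     D3D12_GPU_VIRTUAL_ADDRESS indexBuffer)
{
    // Every record starts with the 32-byte shader identifier.
    std::memcpy(dest, props->GetShaderIdentifier(hitGroupName),
                D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES);
    dest += D3D12_SHADER_IDENTIFIER_SIZE_IN_BYTES;

    // Followed by this mesh's local root arguments: its vertex and index
    // buffer addresses, so the closest hit shader sees its own geometry.
    std::memcpy(dest, &vertexBuffer, sizeof(vertexBuffer));
    dest += sizeof(vertexBuffer);
    std::memcpy(dest, &indexBuffer, sizeof(indexBuffer));
    dest += sizeof(indexBuffer);
    return dest;
}
```

One such record is written per mesh, and the record stride has to be padded up to D3D12_RAYTRACING_SHADER_RECORD_BYTE_ALIGNMENT.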
[Image: Ray Tracing Pipeline, from https://www.willusher.io/graphics/2019/11/20/the-sbt-three-ways]
Let's take a step back and look again at how the raytracing pipeline works. In the pipeline, after a hit is detected by the acceleration structure traversal, the closest hit shader gets called. The parameters passed into the hit shader (in this case, the vertex and index buffers) will be the parameters from a hit group specified in the shader binding table. The question then is: which hit group in the shader binding table does it use?
Let's further illustrate the importance of this question with an example. Suppose there are two meshes in the scene, Object 1 and Object 2, and the acceleration structure traversal hits Object 2. Let's also say there are two hit groups in the shader binding table: the first contains the vertices and indices for Object 1, and the second contains the vertices and indices of Object 2. How does the GPU know that it should select hit group 2 for Object 2?
The way you do this is by assigning a hit group index to each instance in the top-level acceleration structure. In addition, you can have multiple ray trace passes; when you do, you organize the hit groups of each pass next to each other. An example of this would be one pass for rendering the basic color and another pass for rendering shadows.
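In D3D12 this index is the InstanceContributionToHitGroupIndex field of each instance description in the top-level acceleration structure. A sketch of filling it in, assuming one bottom-level acceleration structure per mesh (meshIndex and rayTypeCount are illustrative):

```cpp
#include <d3d12.h>

// Builds the TLAS instance description for one mesh. rayTypeCount is
// the number of hit groups per mesh (1 in my current case).
D3D12_RAYTRACING_INSTANCE_DESC MakeInstanceDesc(UINT meshIndex,
                                                UINT rayTypeCount,
                                                ID3D12Resource* blas)
{
    D3D12_RAYTRACING_INSTANCE_DESC desc = {};
    desc.InstanceID = meshIndex;
    desc.InstanceMask = 0xFF; // visible to all rays
    // Index of this mesh's first shader record in the binding table.
    // With each pass's hit groups laid out side by side, mesh i's
    // records start at i * rayTypeCount.
    desc.InstanceContributionToHitGroupIndex = meshIndex * rayTypeCount;
    desc.AccelerationStructure = blas->GetGPUVirtualAddress();
    // Identity transform; in practice this comes from the node's matrix.
    desc.Transform[0][0] = desc.Transform[1][1] = desc.Transform[2][2] = 1.0f;
    return desc;
}
```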
[Image: Two-Pass Ray Tracing Hit Groups Assigned to Meshes, from https://www.willusher.io/graphics/2019/11/20/the-sbt-three-ways]
The image above shows such an example. For each mesh, there are two hit groups side by side: one for a primary ray trace and another for an occlusion ray trace. To use this structure, the call to TraceRay in the RayGen shader has to specify which type of ray is being generated (RayContributionToHitGroupIndex) and how many types there are (MultiplierForGeometryContributionToHitGroupIndex). In my case I am currently only using one type, so I pass 0 and 1 respectively.
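As a sketch in HLSL, where SceneBVH, ray, and payload are placeholders for the scene's acceleration structure, the generated ray, and the payload struct:

```hlsl
TraceRay(
    SceneBVH,      // the scene's top-level acceleration structure
    RAY_FLAG_NONE,
    0xFF,          // instance inclusion mask
    0,             // RayContributionToHitGroupIndex: ray type 0
    1,             // MultiplierForGeometryContributionToHitGroupIndex: one ray type
    0,             // miss shader index
    ray,           // the RayDesc built in the RayGen shader
    payload);
```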
After adding a hit group per mesh, assigning the acceleration structure instances to hit groups, and updating the TraceRay call, I could now render a scene using ray tracing.
There was one caveat. When I was importing the meshes, they did not have vertex colors; instead they used a material with a diffuse color. Before I supported materials, I just randomized the color at each vertex if there was no vertex color, which let me visualize the geometry clearly. This is the result I achieved:
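The fallback itself is simple; here is a sketch of the idea, where Vertex and RandomFloat are placeholders (HasVertexColors and mColors are the real Assimp API):

```cpp
#include <assimp/mesh.h>
#include <cstdlib>

struct Vertex { float r, g, b, a; /* position, normal, etc. omitted */ };

static float RandomFloat() { return static_cast<float>(rand()) / RAND_MAX; }

void FillVertexColors(const aiMesh* mesh, Vertex* vertices)
{
    for (unsigned int i = 0; i < mesh->mNumVertices; ++i) {
        if (mesh->HasVertexColors(0)) {
            const aiColor4D& c = mesh->mColors[0][i];
            vertices[i] = { c.r, c.g, c.b, c.a };
        } else {
            // No vertex colors in the file: randomize so the geometry
            // is still distinguishable before materials are supported.
            vertices[i] = { RandomFloat(), RandomFloat(), RandomFloat(), 1.0f };
        }
    }
}
```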
To demonstrate importing more complex geometry, I was able to render the infamous Suzanne model from Blender:
Here is a monkey in a Cornell box, rotating.
And after adding support for importing material information and diffuse colors I was able to produce this:
Right now, of course, the Cornell box looks really bad and it is hard to determine the shape of the geometry, but that is because it has no lighting.
Up Next
In the next post, I will continue to improve the Cornell box rendering by adding lights and shadows to the scene.