The main goal is to implement a GPU-driven renderer for Godot 4.x: a renderer that runs entirely on the GPU (no CPU dispatches during the opaque pass).
Additionally, this renderer relies exclusively on raytracing (together with a base raster pass aided by raytracing).
It is important to note that we don't want to implement a GPU-driven renderer similar to those of AAA engines such as Unreal. We want to implement it in a way that retains full and complete flexibility in the rendering pipeline, and that remains simple and easy to maintain.
Roughly, a GPU render pass would work more or less like this:
- Frustum/Occlusion cull depth: This would be done using raytracing into a small screen buffer, casting a small number of rays to generate a depth buffer.
- Occlusion cull lists: All objects will be culled against the small depth buffer. Objects that pass are placed in a special list per material.
- Opaque render: Objects are rendered in multiple passes to a G-Buffer (deferred) by executing every shader in the scene together with its specific indirect draw list. This follows the logic of the Rendering Compositor, providing the same flexibility for different kinds of effects, stencil, custom buffers, etc.
- Light cull: Lights are culled against the depth buffer to determine which lights are rendered per pixel and which need shadows.
- Shadow tracing: Shadows are traced using raytracing to the respective pixels on screen.
- GI Pass: Reflections and GI are also processed using raytracing. GI for off-screen objects is done with material textures (the material rendered to a low-res texture).
- Decal Pass: Decals are rendered into the G-Buffer.
- Volumetric Fog: Volumetric fog is processed like in the current renderer, except that instead of tapping shadow maps, raytracing is used.
- Light Pass: Finally, a lighting pass is applied, reading the previously traced shadows.
- Subsurface Scatter Pass: A pass to post-process subsurface scatter must be done after the light pass.
- Alpha Pass: The transparency pass is done at the end, using regular Z-sorted, CPU-driven draw calls.
Q: Why do we use smaller resolution raytracing for occlusion culling and not visibility lists?
A: Visibility lists take away flexibility from the opaque passes. They require rendering objects in a specific order, while opaque passes do not.
Q: Do we not have small occluder problem using raytraced occlusion?
A: Yes, but in practice this does not really matter: the vast majority of scenes (well over 99%) work fine.
Q: Why use raytraced shadows only? Would it not be better to also support shadow mapping?
A: We need to evaluate this depending on performance, but worst case a hybrid technique can be explored.
The first pass discards objects based on visibility. Frustum culling discards objects not visible to the camera. A depth buffer created using raytracing provides base occlusion; objects are tested against it and discarded as well.
Keep in mind that a relatively large depth buffer can be used thanks to raytracing, because the depth buffer can be reprojected from the previous frame and only the places with missing depth need rays re-cast.
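As a rough illustration only (not a committed design), the reprojection step could be a small compute shader that scatters last frame's depth into the current frame and leaves the remaining pixels marked as holes to be ray cast; the binding names, the reproject matrix and the choice of a scatter via imageAtomicMin are assumptions of this sketch:
#version 450
layout(local_size_x = 8, local_size_y = 8) in;

layout(set = 0, binding = 0) uniform sampler2D prev_depth; // Last frame's small culling depth.
layout(set = 0, binding = 1, r32ui) uniform uimage2D reprojected_depth; // Cleared to 0xFFFFFFFF; still-cleared pixels are the holes to ray cast.

layout(push_constant) uniform Params {
	mat4 reproject; // Previous clip space -> current clip space.
	ivec2 size;
} params;

void main() {
	ivec2 pos = ivec2(gl_GlobalInvocationID.xy);
	if (any(greaterThanEqual(pos, params.size))) {
		return;
	}
	float depth = texelFetch(prev_depth, pos, 0).r;
	if (depth >= 1.0) {
		return; // Sky / never written, nothing to scatter.
	}
	vec2 ndc = ((vec2(pos) + 0.5) / vec2(params.size)) * 2.0 - 1.0;
	vec4 reprojected = params.reproject * vec4(ndc, depth, 1.0);
	reprojected.xyz /= reprojected.w;
	if (any(greaterThan(abs(reprojected.xy), vec2(1.0))) || reprojected.z <= 0.0 || reprojected.z >= 1.0) {
		return; // Fell outside the new frame.
	}
	ivec2 new_pos = ivec2((reprojected.xy * 0.5 + 0.5) * vec2(params.size));
	// Scatter with an atomic min; float bits of depths in [0,1] compare like the floats themselves.
	imageAtomicMin(reprojected_depth, new_pos, floatBitsToUint(reprojected.z));
}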
The list of objects that pass must be sorted by shader type, in an indirect draw list fashion. Materials for all shaders of a given type are stored in an array, and textures are indices into a large texture array instead of actual textures.
This is achievable relatively easily using a compute shader that first counts the objects of each shader type, then assigns each type its offset into a large array, and finally creates the indirect draw list for each shader.
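As a minimal sketch of the last step (after the counting and offset-assignment steps), each surviving instance could append itself to its shader's indirect draw list roughly like this; the buffer layouts, struct fields and names here are assumptions:
#version 450
layout(local_size_x = 64) in;

struct InstanceData {
	uint shader_index; // Which shader/material bucket this instance belongs to.
	uint index_count;
	uint first_index;
	int vertex_offset;
};

struct DrawIndexedIndirect { // Matches VkDrawIndexedIndirectCommand.
	uint index_count;
	uint instance_count;
	uint first_index;
	int vertex_offset;
	uint first_instance;
};

layout(set = 0, binding = 0, std430) restrict readonly buffer VisibleInstances {
	InstanceData instances[];
};
layout(set = 0, binding = 1, std430) restrict buffer BucketCursors {
	uint bucket_cursor[]; // Prefix-summed offsets from the counting step, used as running cursors.
};
layout(set = 0, binding = 2, std430) restrict writeonly buffer DrawList {
	DrawIndexedIndirect draws[];
};
layout(push_constant) uniform Params {
	uint instance_count;
} params;

void main() {
	uint idx = gl_GlobalInvocationID.x;
	if (idx >= params.instance_count) {
		return;
	}
	InstanceData inst = instances[idx];
	// Reserve a slot inside this shader's range of the big draw list.
	uint slot = atomicAdd(bucket_cursor[inst.shader_index], 1u);
	draws[slot].index_count = inst.index_count;
	draws[slot].instance_count = 1u;
	draws[slot].first_index = inst.first_index;
	draws[slot].vertex_offset = inst.vertex_offset;
	draws[slot].first_instance = idx; // Lets the vertex shader fetch per-instance data.
}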
My thinking here is that, since we will eventually have mesh streaming and streamed meshes will by default be split into meshlets anyway (for streaming purposes), those could be rendered with a special shader that culls them in more detail (maybe a mesh shader), while regular (non-streamed) objects go through the regular vertex shader path.
Opaque rendering happens by executing every shader with indirect rendering. Because shaders are assigned to subpasses in the compositor proposal, it is easy to have a system where the compositor still works despite the GPU-driven nature.
Additionally, depending on what a material renders (visibility mask, emission, custom lighting, etc.), we can take advantage of this and render in multiple render passes to different G-Buffer configurations.
The bindless implementation should be relatively simple to do. Textures can go in a simple:
uniform texture2D textures[MAX_TEXTURES];
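One detail to keep in mind (assuming Vulkan GLSL): when the texture index is not uniform across a draw, the access needs GL_EXT_nonuniform_qualifier. A small sketch, with default_sampler and the set/binding numbers as placeholders:
#extension GL_EXT_nonuniform_qualifier : enable

layout(set = 1, binding = 0) uniform texture2D textures[MAX_TEXTURES];
layout(set = 1, binding = 1) uniform sampler default_sampler;

vec4 sample_material_texture(uint tex_index, vec2 uv) {
	// nonuniformEXT is required when tex_index can differ between invocations within one draw.
	return texture(sampler2D(textures[nonuniformEXT(tex_index)], default_sampler), uv);
}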
For vertex arrays, vertex pulling can be implemented using utextureBuffer for vertex buffers:
uniform utextureBuffer textures[MAX_TEXTURES];
And the vertex format decoded on demand (a small sketch follows after the list below). Vertex pulling of a custom format would probably not be super efficient, but the following needs to be taken into consideration:
- Most meshes will be compressed (meaning they use only one format, hence vertex pulling will be very efficient).
- In a larger game, most static meshes would most likely be streamed anyway and their format fixed, so the vertex pulling code for most vertex buffers should be very efficient too.
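To illustrate, a minimal vertex pulling sketch; the packed layout used here (position as three floats, normal as snorm16 pairs, UV as unorm16) is purely hypothetical and not Godot's actual compressed format, and usamplerBuffer plus placeholder names are used to keep the snippet self-contained:
layout(set = 2, binding = 0) uniform usamplerBuffer vertex_buffers[MAX_TEXTURES];

struct Vertex {
	vec3 position;
	vec3 normal;
	vec2 uv;
};

Vertex pull_vertex(uint buffer_index, uint vertex_index) {
	// 6 uints (24 bytes) per vertex in this hypothetical layout.
	int base = int(vertex_index) * 6;
	Vertex v;
	v.position = vec3(
			uintBitsToFloat(texelFetch(vertex_buffers[buffer_index], base + 0).x),
			uintBitsToFloat(texelFetch(vertex_buffers[buffer_index], base + 1).x),
			uintBitsToFloat(texelFetch(vertex_buffers[buffer_index], base + 2).x));
	vec2 n_xy = unpackSnorm2x16(texelFetch(vertex_buffers[buffer_index], base + 3).x);
	float n_z = unpackSnorm2x16(texelFetch(vertex_buffers[buffer_index], base + 4).x).x;
	v.normal = normalize(vec3(n_xy, n_z));
	v.uv = unpackUnorm2x16(texelFetch(vertex_buffers[buffer_index], base + 5).x);
	return v;
}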
With the depth buffer completed, it is possible to do light culling and assignment of visible lights. This can be done using the current clustering code. Alternatively, a unified structure like that used for raytracing could be used (possibly a hash grid?).
Shadows can be traced in this pass. Not all positional lights with shadows need to be processed every frame, as temporal supersampling can help improve performance here.
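For illustration, a single shadow ray per pixel with ray queries could look roughly like this (GL_EXT_ray_query; the TLAS binding, the normal offset and the helper name are assumptions):
#extension GL_EXT_ray_query : enable

layout(set = 0, binding = 0) uniform accelerationStructureEXT scene_tlas;

float trace_shadow(vec3 world_pos, vec3 normal, vec3 light_dir, float light_dist) {
	rayQueryEXT rq;
	// Offset along the normal to avoid self-intersection; treat everything as opaque and
	// terminate on the first hit, since we only care whether the light is blocked.
	rayQueryInitializeEXT(rq, scene_tlas,
			gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT, 0xFF,
			world_pos + normal * 0.01, 0.0, light_dir, light_dist);
	while (rayQueryProceedEXT(rq)) {
	}
	// Any committed intersection means the light is blocked.
	return rayQueryGetIntersectionTypeEXT(rq, true) == gl_RayQueryCommittedIntersectionNoneEXT ? 1.0 : 0.0;
}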
We need to settle on a GI technique, or offer the user different GI techniques based on the performance/quality ratio, ranging from GI-1.0 to full path tracing.
For materials, we should probably just render most materials to a small texture (128x128) and use this information for GI bounces.
As we are using a G-Buffer, a decal pass is probably a lot more optimal to do by just rasterizing the decals into it.
Volumetric fog should work identically to what we have now, except instead of using shadow mapping, we can just raytrace from the points to test occlusion.
The light pass should be almost the same as in our (future) deferred renderer.
Subsurface scattering should work similarly to how it works now.
Because we don't have shadow mapping, the alpha pass needs to work a bit differently. The idea here is to use light pre-pass style rendering on the alpha side.
Basically, a 64-bit G-Buffer is used for alpha that looks like this:
uint obj_index : 18;
uint metallic : 7;
uint roughness : 7;
rg16 obj_normal; // encoded as octahedral
Added to this, a "incrment_texture" image texture in uint format that is half resolution format.
Alpha is done in two passes. The first pass is objects that are lit, sorted from back to front. Unshaded objects are skipped.
The following code is run in the shader:
// This piece of code ensures that G-Buffer writes are rotated across a block of 2x2 pixels.
uvec2 pixel = uvec2(gl_FragCoord.xy);
uvec2 group_coord = pixel >> uvec2(1); // The group is the block of 2x2 pixels.
uvec2 store_coord;
vec2 combined_roughness_metallic;
vec3 combined_normal;
bool store = false;
while (true) {
	// Of all active invocations in the subgroup, broadcast the first one's pixel.
	uvec2 first = subgroupBroadcastFirst(pixel);
	// Get the group (block of 2x2 pixels) of the first.
	uvec2 first_group = first >> uvec2(1);
	if (first == pixel) {
		// If this is the first invocation, increment the atomic counter and get the value.
		// The index is a value from 0 to 3, representing the pixel in the 2x2 block.
		uint index = imageAtomicAdd(increment_texture, ivec2(first_group), 1u) & 0x3;
		store_coord = (first_group << uvec2(1)) + uvec2(index & 1, index >> 1);
	}
	// Broadcast the store coordinate.
	store_coord = subgroupBroadcastFirst(store_coord);
	if (first_group == group_coord) {
		// If this pixel is part of the group being stored, then only store the relevant one
		// and drop the rest. This ensures that every write rotates the pixel in the 2x2 block.
		// Combined roughness/metallic and normal of all active pixels in the group.
		vec3 crm = subgroupAdd(vec3(roughness, metallic, 1.0));
		combined_roughness_metallic = crm.rg / crm.b;
		combined_normal = normalize(subgroupAdd(normal));
		// Determine if the pixel that needs to be written is actually present (it may not be part of the primitive).
		bool write_exists = bool(subgroupAdd(uint(pixel == store_coord)));
		if (write_exists) {
			store = store_coord == pixel;
		} else {
			store = first == pixel;
		}
		break;
	}
}
// Store G-Buffer.
// It is important to _not_ use discard in this shader, to ensure early Z works and gets rid of unwanted writes.
if (store) {
	uint store_obj_rough_metallic = object_id;
	store_obj_rough_metallic |= clamp(uint(combined_roughness_metallic.g * 127.0), 0u, 127u) << 18; // metallic
	store_obj_rough_metallic |= clamp(uint(combined_roughness_metallic.r * 127.0), 0u, 127u) << 25; // roughness
	imageStore(obj_id_metal_roughness_tex, ivec2(store_coord), uvec4(store_obj_rough_metallic));
	imageStore(normal_tex, ivec2(store_coord), vec4(octahedron_encode(combined_normal), 0.0, 0.0));
}
After this, a compute pass is run to compute the lighting of all transparent objects (obj_index == 0 means nothing to do). Light is written to an rgba16f buffer. To accelerate the lookups in the next pass, the compute shader also writes, for every pixel, a u32 containing the following neighbouring info:
A table of 3-bit values, one per surrounding 2x2 pixel block; the bit ranges within the u32 are:

| x - 2 | x | x + 2 |
|---|---|---|
| 00 - 02 | 03 - 05 | 06 - 08 |
| 09 - 11 | 12 - 14 | 15 - 17 |
| 18 - 20 | 21 - 23 | 24 - 26 |

Each 3-bit value represents:

0x7: No neighbour.

Otherwise, which pixel of that 2x2 block holds the matching lighting:

| x | x + 1 |
|---|---|
| 0 | 1 |
| 2 | 3 |
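To make the table above concrete, the compute pass could pack the u32 roughly like this; find_match_in_block() is an assumed helper that returns which of a 2x2 block's four pixels holds lighting for the given object (or 0x7 if none):
// For each of the 9 surrounding 2x2 blocks, store which of the block's four pixels holds
// lighting for this object (0..3), or 0x7 if none does; 3 bits per block, 27 bits total.
uint pack_neighbours(ivec2 block_coord, uint obj_id) {
	uint bits = 0u;
	for (int i = 0; i < 9; i++) {
		ivec2 block = block_coord + ivec2(i % 3 - 1, i / 3 - 1);
		uint slot = find_match_in_block(block, obj_id); // Assumed helper: 0..3, or 0x7 for "no neighbour".
		bits |= (slot & 0x7u) << uint(i * 3);
	}
	return bits;
}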
Finally, a second alpha pass is run, again from back to front. For shaded objects, lighting information is searched for across the surrounding 36 pixels (3x3 blocks of 2x2 pixels) for entries that match the object, then interpolated and multiplied by the albedo.
The algorithm would look somewhat like this:
uvec2 pixel = uvec2(gl_FragCoord.xy);
uvec2 base_lookup = pixel & ~uvec2(1, 1);
uvec2 light_pos = uvec2(0xFFFF, 0xFFFF);
for (uint i = 0; i < 4; i++) {
	uvec2 lookup_pos = base_lookup + uvec2(i & 1, (i >> 1) & 1);
	uint obj_id = texelFetch(obj_id_metal_roughness_tex, ivec2(lookup_pos), 0).x;
	if ((obj_id & OBJ_ID_MASK) == current_obj_id) {
		light_pos = lookup_pos;
		break;
	}
}
if (light_pos == uvec2(0xFFFF, 0xFFFF)) {
	discard; // Could not find any info to look up, discard the pixel.
}
uint neighbour_positions = texelFetch(neighbours, ivec2(light_pos), 0).x;
vec4 light_accum = vec4(0.0);
ivec2 neighbour_base = ivec2(base_lookup) - ivec2(2, 2); // Top-left pixel of the top-left 2x2 block.
for (int i = 0; i < 9; i++) {
	uint neighbour = (neighbour_positions >> (i * 3)) & 0x7;
	if (neighbour == 0x7) {
		continue; // No matching pixel in this block.
	}
	ivec2 neighbour_ofs = neighbour_base;
	neighbour_ofs.x += (i % 3) * 2 + int(neighbour & 1);
	neighbour_ofs.y += (i / 3) * 2 + int(neighbour >> 1);
	float gauss = gauss_map(length(vec2(neighbour_ofs) - gl_FragCoord.xy)); // Use some gauss curve based on distance to the pixel.
	light_accum += vec4(texelFetch(alpha_light, neighbour_ofs, 0).rgb * gauss, gauss);
}
vec3 light = light_accum.rgb / max(light_accum.a, 0.0001);
light *= albedo;
// Store light with alpha blending.
...
The terminal NIH disease
There seems to be an ongoing theme of "if it's in AAA it's not suitable", which is ironic given that Godot itself is currently using an AAA culling technique from 2007. You are using AAA techniques, just from about 15 years ago.
While I understand that NIH is an endemic disease in the "homebrew engine" crowd, usually people have a much milder case where they simply don't want to use code that wasn't written from scratch for their project, but they'll at least do their homework, compare notes, exchange ideas with others, and learn from other people's trial and error.
You, sir, seem to have Stage IV of this disease: dismissing any idea/design that isn't yours and being unwilling to consider other solutions even when they are presented on a silver platter.
Why this won't work
Ok let me take some time out of my busy schedule to explain why your culling idea won't work.
Lack of Generality (oh the irony)
You yourself state that this is a general purpose engine, so how is this technique general purpose at all? For now, all I see is that the only occluders you'll support must be static triangle meshes. And by static, I mean truly static: no movement from frame to frame, not even as a rigid body.
I thought general purpose could mean, you know, a first person game with animated characters which can occlude vast portions of the screen?
You reiterate time and time again that this is not an AAA engine, so don't you think it would be nice NOT to require artists/users to make specialized, simplified "occluder geometries" and have to remember to set them?
I mean, you expect raytracing to be fast enough to replace a z-prepass, hoping this will be the case "because there are only a few pixels to trace for".
Given that you want this to run on mobile and the web, your raytracing software fallback layer will have to be faster than a z-prepass (because that's the only reason not to just do a z-prepass and skip occlusion culling), which will probably necessitate a separate, simpler BLAS per occluder than the one you'll use for the shadow raytracing. A nice, fun way to increase your memory footprint for no reason.
This is before I even point out that your users will surely appreciate having to "bake" occluder BLASes for their occluder meshes, which they'll also appreciate having to make and maintain.
The final nail in the coffin comes from the fact that, unless you want to give up on streaming static chunks or you like building the TLAS yourself (which you might for a fallback layer with Embree), Vulkan's Acceleration Structure is a black box.
If you want to use as much as a single different BLAS in an otherwise identical TLAS, you'll need to build a new TLAS from scratch (you can't just copy the shadow raytracing TLAS and hotswap the pointers to make it point at different simpler BLASes even if the input BLAS count and AABBs match).
This workload does not scale with resolution and needs to be done every frame, even if you make your culling depth buffer 1x1.
2-4x more code to maintain, complexity and fragility
Again the AAA argument: you have neither the resources nor the expertise to maintain complex and duplicated codepaths.
Your design forces the renderer (I hope you're aware of this, but with every reply I lose faith) to partition the drawing into two distinct stages: the static occluder geometry whose depth can be reprojected, and everything else.
You then need to split your renderpass in two, so that you can "save a copy" of the depth buffer before you draw the other, non-static things into it. Tiled mobile GPUs are sure gonna love that.
The fun part (as I promised to expand upon) is that as soon as something starts moving (e.g. a door) you'll need to exclude it from the static set and not draw its occluder, because you cannot reproject its depth.
There are only 2 ways to do occlusion culling
Basically it depends on whether you want to do it with rasterization or compute (see, for example, the vkguide GPU-driven rendering articles).
The HW occlusion pixel counter queries are not an option, because only one can be active per drawcall and they are super slow even with conditional rendering (which was invented to save you from GPU->CPU readbacks).
Its suckiness is the reason why low-resolution depth buffer + occlusion testing on the CPU was popular at DICE and Crytek.
Mmm the latency!
So anyway, at some point before you even start testing objects for visibility after frustum culling, you'd need to reproject that previous-frame partial depth buffer and raytrace the holes, but you can't do that before polling for input.
Then you need to do the occlusion tests, and you don't have a shadow pass or anything else to keep the GPU busy in the meantime.
Have fun maintaining and optimizing the code
The divergence in the reprojection and raytracing shader is gonna be some next level stuff; I'd personally love to see the Nsight trace of how much time your SMs spend idling if you ever get far enough to implement it.
You'll probably dig yourself into a hole so deep you'll consider doing "poor man's Shader Invocation Reordering" at that point and blog about it as some cool invention.
Nobody (EDIT: fully) tried Depth Reprojection for a good reason
You're probably not the first person to come up with "last frame depth reprojection" as an idea, now think about why nobody went through with it.
EDIT: Yes, Assassin's Creed Unity used it, but they used reprojection differently to how you want to use it. First and foremost, they still had a rough z-prepass with the actual next-frame camera MVP.
Raytracing to "fill gaps" doesn't make the idea special.
Reprojection introduces artefacts - false culling positives
There is simply nothing to reproject: depths are point sampled and you cannot interpolate between them (even with a NEAREST filter). The depth values are defined and valid ONLY for pixel centers from the last frame.
A depth buffer used for culling needs to be conservative (or some people say eager), therefore the depth values for such a depth buffer can only be FARTHER than "ground truth".
No matter if you run a gather (SSR-like) or a scatter (imageAtomicMax/Min, at which point you've really lost your marbles). Don't believe me? Try reprojecting the depth buffer formed by a static chain-link fence (alpha tested or not does not matter) and call me back.
Essentially every pixel turns into a gap that needs to be raytraced.
This makes no sense from a performance standpoint
The only sane way to reproject is via a gather, which is basically the same process as Screen Space Reflections or Parallax Occlusion Mapping.
Let me remind you that a z-prepass usually takes <1ms and if it takes more than that alternative methods are considered for culling.
You've now taken one of the most insanely expensive post-processes (maybe except for SSAO) and made it your prerequisite to culling (slow clap).
To put the icing on the cake, a reprojected depth (programmatically written) disables HiZ, so any per-pixel visibility tests (if you use those) done by rasterizing the occludee's conservative bounding volume get magically many times slower.
Finally, there's that whole polling for input, frustum culling, depth reprojection, occlusion culling dependency chain before the first renderpass, which increases your latency.
Now imagine if a solution existed that gave you 99% correct visibility, at full resolution, in far less time than a z-prepass or this weird SSR.
The Established "AAA" solution is more robust, general and simpler
I gave you a solution that's "essentially free": it gives you all the visibility data in the course of performing work you'd already be performing anyway, which is the most robust thing that will ever exist for rasterization.
In case it wasn't clear, both the "last frame visible" and "disocclusion" sets come from intersecting with the "post-frustum cull" set for the new frame, not the whole scene.
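For reference, the approach described here is the two-pass scheme (draw last frame's visible set, build a depth pyramid, then test everything else against it), as covered for example in the vkguide GPU-driven rendering articles. A rough sketch of the per-object test, assuming a conservatively reduced pyramid where each texel stores the farthest depth it covers (standard 0 = near, 1 = far) and a screen-space AABB computed elsewhere:
layout(set = 0, binding = 0) uniform sampler2D depth_pyramid; // Conservatively (max) reduced HiZ.

bool is_visible(vec4 aabb /* min.xy, max.xy in UV */, float object_nearest_depth, vec2 pyramid_size) {
	vec2 extent = (aabb.zw - aabb.xy) * pyramid_size;
	// Pick the mip where the AABB spans roughly one texel, then sample its four corners.
	float mip = ceil(log2(max(max(extent.x, extent.y), 1.0)));
	float farthest_occluder = max(
			max(textureLod(depth_pyramid, aabb.xy, mip).r,
					textureLod(depth_pyramid, aabb.zw, mip).r),
			max(textureLod(depth_pyramid, vec2(aabb.x, aabb.w), mip).r,
					textureLod(depth_pyramid, vec2(aabb.z, aabb.y), mip).r));
	// The object can only be hidden if its nearest point is behind the farthest occluder in its footprint.
	return object_nearest_depth <= farthest_occluder;
}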
You're arguing against yourself
You really don't have a leg to stand on for the decision to use G-Buffer deferred over a Visbuffer; it does everything a G-Buffer does and more, for everything from low-poly or non-PBR 2D isometric casual games with no LoD to 3D PBR open-world games.
If you want to bring up the "barycentrics of deformable/tessellated geometry" argument, go ahead... you simply draw them last and output barycentrics + their derivatives to an auxiliary buffer, just as you would do for motion vectors for TAA/motion blur.
Except that you now no longer need the motion-vector aux buffer you'd have with G-Buffer deferred, because you have the barycentric coordinates and the triangle ID, so for deformables you can run the deform/tessellation logic (or, you know, store the transformed vertices) for the previous frame to get your motion vector.
The only reason not to use a VisBuffer over a G-Buffer was the lack of ubiquity of bindless; now that you're going all-in on it, there's really no argument left here.
A. This isn't engine dev, this is renderer dev
B. The entire gist is one massive proof that you should probably study some AA (3rd A intentionally missing) engine dev post mortems, because the infeasibility/inferiority of this whole design will become apparent about halfway through, after you've spent all the resources to make it.
Bonus Round: Order Independent Transparency
P.S. For OIT, you really can't beat MLAB4; it works kind of fine without pixel shader interlock on AMD (MLAB2 does not).
P.P.S. Yes, you can prime the MLAB with an opaque layer so you're not processing transparent pixels behind opaques.