User Story
Resolution: Unresolved
Priority: P1: Critical
Fix Version: 6.11
As a user, I want the ability to customize the rendering pipeline. Qt Quick 3D provides many ways to customize the rendering pipeline, but these are mostly suggestions made to a black box. Instead there should be an alternative way to control how and what is rendered, and to what. And while such an alternative already exists in Qt Quick 3D, those APIs are C++-only and require internal engine knowledge to use. Instead it should be possible to define custom rendering pipelines in Qt Quick via QML or QDS, which would be in line with the goals of Qt Quick 3D.
Much like it is possible to define multiple passes in post-processing effects, it should also be possible to define additional or alternative passes during the "main" rendering of a scene, that is, the parts of rendering that involve the actual scene renderables. It is possible to do this today in C++ with the RendererExtensions API, but it should also be possible to do so more generically in QML. Conceptually this is similar to the Frame Graph in Qt 3D, but a bit more high level. On the backend, however, we do indeed build a frame graph to determine the render strategy.
Roughly speaking, the API would define an alternative way to render a scene. For example, I may want to collect all opaque items in a scene, sort them by distance from furthest to nearest, and then render each of them with an alternative material (say, rendering each with a solid random color without shading). This would render to a texture, which I could then use in a post-processing effect on the "main" pass of the View3D. In this post-processing effect, I can mix the two textures such that each opaque item is "highlighted" with an alternative color. I could alternatively use this texture in a custom picking algorithm, which could be useful for providing accurate picking for models that are only animated at runtime (rigged animations).
We should provide ready-made building blocks for defining a custom render pass in QML/QDS, such that building a tree of these primitives leads to a frame graph that results in passes.
This should be considered an alternative render pipeline, not a customization of the built-in pipeline. It's an alternative, not a replacement. The ease of use in Qt Quick 3D comes from our control of the default forward+ renderer, and from us being able to determine what the "default" passes look like based on how the API is used. Defining your own pipeline means that it will be used instead of the default and render to its own render target, or be used in conjunction with the default pipeline (with the resulting textures/buffers being accessible from custom materials or post-processing effects via the use of Texture).
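As a very rough illustration of the highlight example above, here is a hypothetical QML sketch. None of this is an existing API: UserPass, RenderTargetTexture, SceneFilter, ReplaceMaterial, and outputTexture are invented names for what such building blocks could look like.

```qml
// Hypothetical sketch only; UserPass, SceneFilter, RenderTargetTexture,
// ReplaceMaterial, and outputTexture are invented names, not existing API.
View3D {
    id: view3d
    environment: SceneEnvironment {
        effects: [ highlightEffect ]
    }

    // An extra pass over the scene's opaque renderables, sorted back to
    // front and rendered with a replacement unshaded material into its
    // own texture.
    UserPass {
        id: highlightPass
        output: RenderTargetTexture { format: RenderTargetTexture.RGBA8 }
        renderables: SceneFilter { opaque: true }
        sortMode: UserPass.BackToFront
        materialOverride: ReplaceMaterial {
            material: solidRandomColorMaterial // e.g. an unshaded CustomMaterial
        }
    }

    // A post-processing effect on the main pass that mixes in the
    // highlight texture produced above.
    Effect {
        id: highlightEffect
        property TextureInput highlightTex: TextureInput {
            texture: highlightPass.outputTexture // hypothetical
        }
        passes: [
            Pass {
                shaders: [
                    Shader { stage: Shader.Fragment; shader: "mix_highlight.frag" }
                ]
            }
        ]
    }
}
```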
User Passes (more detailed notes)
The goal is that users should be able to define their own passes that operate on scene data the same way that the built-in passes do.
Some important concepts
- Render Targets (where)
  - Where does the pass render to?
    - An additional texture/buffer
    - The main render target (window/texture)
    - Before or after the main color passes
  - Multi-render targets (MRT)?
    - The pass writes to more than one texture/buffer
- Renderables (what)
  - Standalone mode
    - Doesn't use information from the scene.
  - Scene Content
    - Models
      - Each subset-material combo is usually a renderable
    - Item2Ds (inline QML items)
  - Layers/Tags
    - Renderables can be filtered using layers/masks
- Material Overrides (how)
  - No Override (use the same material as in the main pass)
  - Replace Materials
    - Replace each or all materials with new material(s)
    - This gives the most power but comes with the big disadvantage of potentially breaking behavior
      - For example, replacing a CustomMaterial whose vertex shader has side effects with one that lacks that behavior will change the geometry that is rendered.
  - Augment Materials
    - Internally all materials are DefaultMaterials, that is, shaders generated by the Default Material Shader Generator, so one material will typically produce many shaders:
      - Depth Prepass
      - Ambient Occlusion
      - ShadowMap (Ortho)
      - ShadowMap (Perspective)
      - Lighting/Color
    - Augmented Materials allow preprocessor defines to be set to determine which generated code is present, as well as a block of code that will be used to determine the output of gl_FragColor (see the sketch after this list).
    - Additional uniform values exposed
      - How would you set a unique uniform value for each model or renderable?
  - Pipeline state changes
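A minimal sketch of what an augmented material override could look like in QML. Everything here (AugmentMaterial, defines, fragmentOutput, the define string) is hypothetical, invented purely to illustrate the concept:

```qml
// Hypothetical sketch; AugmentMaterial and its properties are invented
// names for this proposal, not an existing Qt Quick 3D API.
AugmentMaterial {
    // Preprocessor defines controlling which parts of the generated
    // shader code are present (illustrative define names):
    defines: [ "USERPASS_DISABLE_LIGHTING" ]

    // A block of code that replaces the final fragment output, e.g.
    //   gl_FragColor = vec4(uMaterialIndex, 0.0, 0.0, 1.0);
    fragmentOutput: "override_output.frag"

    // Additional uniform values exposed to the shader. Open question:
    // how to give each model or renderable a unique value here?
    property real uMaterialIndex: 0
}
```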
Use cases
The following are example use cases I expect a user to be able to implement using this API.
Normal-Roughness Pass
Generate a Normal-Roughness buffer for use in post-processing effects like SSGI and SSR
Screen Space Global Illumination techniques typically require more information about the content of the scene than just the color and depth buffers. With forward renderers this information is not readily available, since it is typically calculated and discarded within the lighting pass. So a user may want to generate a texture containing calculated information about world-space normals as well as roughness.
- RGBA16F format texture (same size as the color texture)
- RGB = world-space normal value
- A = material roughness
So in this case the render target is a 4-component floating-point texture (RGBA16F or RGBA32F). Theoretically the user could also use a different format and encode the normal data and roughness value into the available space, but for this case assume the proposed format.
The renderables should likely be all opaque renderables with lighting. It's possible you would also want transparent renderables, but let's assume we want to filter for only lit and opaque renderables for this exercise. Item2Ds count as unlit, so they are not present.
For Material Overrides, we will want to augment, not replace, the existing materials. We would want the existing material to generate a shader that calculates its interpolated normal and roughness values, but then, instead of using them to calculate the lighting in the fragment shader, the output of the fragment shader should be altered to write the world-space normal value to RGB and the roughness value to A (a sketch follows the pipeline-state notes below).
In addition, we would want to make adjustments to the pipeline state to make sure that we:
- Require a depth pre-pass
- Map the resulting depth pre-pass texture as our depth read source
- Enable depth read
- Disable depth write
This enables correct/efficient rendering in this pass while making sure that we don't clobber the existing depth buffer data.
- Some things to consider
  - If we were to use this existing depth pre-pass buffer, then we would probably want to use the same filtering rules
  - Otherwise you might want to instead attach a transient depth buffer and enable both read and write just for this pass (if the renderables differ too much from the original)
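Putting the pieces together, a hypothetical QML sketch of this pass (UserPass, RenderTargetTexture, SceneFilter, and AugmentMaterial are invented names, as above):

```qml
// Hypothetical sketch; all type and property names are invented.
UserPass {
    id: normalRoughnessPass

    // Render target: a 4-component float texture the size of the color buffer.
    output: RenderTargetTexture {
        format: RenderTargetTexture.RGBA16F
        sizeMode: RenderTargetTexture.MatchColorBuffer
    }

    // Renderables: lit, opaque scene content only (Item2Ds are unlit).
    renderables: SceneFilter {
        lit: true
        opaque: true
    }

    // Augment, don't replace: keep each material's own vertex/normal
    // calculations but redirect the fragment output, e.g.
    //   gl_FragColor = vec4(normalize(worldNormal), roughness);
    materialOverride: AugmentMaterial {
        defines: [ "USERPASS_DISABLE_LIGHTING" ]
        fragmentOutput: "normal_roughness_output.frag"
    }

    // Pipeline state: reuse the depth pre-pass, read-only.
    requiresDepthPrePass: true
    depthTest: true
    depthWrite: false
}
```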
Color Picking Pass
Generate a pickable model index buffer for use in picking
In this case the idea is that a subset of Models in the scene are tagged as pickable. With the built-in picking system, we calculate ray intersection against the triangles of a model's geometry. This can be done synchronously and efficiently on the CPU, but it has the downside that it can only test against the geometry as it exists CPU-side. So any geometry that is transformed by a vertex shader will not have accurate picking (for example, models with rigged animations). If you were making an FPS and trying to determine whether a rigged player character was hit by a bullet, the built-in system cannot be accurate. So instead, for a frame where we want accurate picking, we should schedule a pass where we render the subset of pickable Models to a texture in which each model gets a unique, but mapped, color value. Then we can check the color value at the view-space position of the ray and use the lookup table for that color value to determine which model was hit. This will give accurate picking results at the loss of synchronicity and information (we can know which model was hit, but not where/how, etc.).
So in this case the render target is ideally a single-channel unsigned-integer format texture. If that doesn't exist, the integer-based texture with the lowest channel count will do. The size will likely be the same as the main-pass color buffer, but a smaller size (with the same aspect ratio) will work at the loss of some precision.
The renderables will be all pickable models. This can either use the existing "pickable" property on models, or use a custom layer/tag. For this exercise I chose to use a custom layer tag, since this is a special case of picking and only a subset of pickable items would need to use it anyway.
For material overrides, we again want to augment existing materials rather than replace them. This use case shows the necessity of augmented materials, because it's the modification of geometry in vertex shaders that breaks the built-in picking's accuracy in the first place. If we were to replace the material, it would likely make the picking inaccurate again. Instead we want an augmented material where the lighting code is disabled and replaced with writing a single int value to the fragment output. In this case we would also need to add an additional uniform value to this augmented material, and for each Model (not renderable) provide a unique value to set on this uniform. This value would be an index we can use to see which model was rendered at a particular fragment.
For the time being, I don't care how this texture gets used in practice, just how it's created. (that can be revisited I guess)
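A hypothetical sketch of such a picking pass, using the same invented names as above (the perModelUniform mechanism is exactly the open per-model uniform question noted earlier):

```qml
// Hypothetical sketch; all type and property names are invented.
UserPass {
    id: pickingPass

    // Ideally a single-channel unsigned-integer texture.
    output: RenderTargetTexture {
        format: RenderTargetTexture.R32UI
        sizeMode: RenderTargetTexture.MatchColorBuffer
    }

    // Only models tagged with a dedicated picking layer.
    renderables: SceneFilter { layers: pickingLayer }

    // Augment: keep each model's own vertex shader (so rigged animation
    // still deforms the geometry), but write a per-model index instead
    // of running the lighting code, e.g.
    //   fragOutput = uvec4(uModelIndex);
    materialOverride: AugmentMaterial {
        defines: [ "USERPASS_DISABLE_LIGHTING" ]
        fragmentOutput: "pick_output.frag"
        // One unique value per Model (not per renderable); how this gets
        // bound per model is the open uniform question noted earlier.
        perModelUniform: "uModelIndex"
    }
}
```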
Screen Texture, user edition
Create the screen texture yourself.
This is a contrived example, but the idea of a screen texture is to work around the fact that you can't (or at least shouldn't) write to the same texture you're reading from, by creating another texture with the content you were trying to read. The place where this comes up in the renderer now is when rendering a material with refraction, since you need to manually blend with the content behind the item being rendered. The built-in screen texture renders the background + all opaque items.
The render target will be a texture with the same format and size as the color buffer. The size can be some multiple of the original to save resources, but that will affect the quality for anyone using it. The built-in one currently creates a render target without multisampling enabled even if the original color buffer is using multisampling, so that's at least something to consider (mostly because of the extra steps necessary when you want to use that texture later).
The renderables will be all opaque renderables. (Does the built-in screen texture include Item2Ds?)
One extra issue with this otherwise "easy" use case is that a screen texture needs to have the same "background" or "clear" mode as the View3D's SceneEnvironment. This is challenging because how this is done depends on how the SceneEnvironment is configured. If it's a clear-style mode, then a color is passed to beginPass to fill the texture with that color. If the mode is a skybox, though, it will use a special pass in the engine which draws the sky either from the lightProbe or a cubeMap. So this use case would require both a way to explicitly state what the clearColor is, as well as a way to potentially invoke internal passes with another render target, for example the SkyboxPass or SkyboxCubeMapPass.
For Material Overrides, this one is easy, since you don't want to override, you just want to use the original material.
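A hypothetical sketch, again with invented names; note the open questions around clear modes and invoking the engine's internal skybox passes:

```qml
// Hypothetical sketch; all type and property names are invented.
UserPass {
    id: userScreenTexturePass

    output: RenderTargetTexture {
        sizeMode: RenderTargetTexture.MatchColorBuffer // or a scaled size
        format: RenderTargetTexture.MatchColorBuffer   // same format as the color buffer
        multisample: false                             // mirrors the built-in behavior
    }

    // Background + all opaque renderables, rendered with their original
    // materials (no material override).
    renderables: SceneFilter { opaque: true }

    // For clear-color modes an explicit clearColor is enough; skybox
    // modes would additionally need a way to run the engine's internal
    // SkyboxPass/SkyboxCubeMapPass against this render target.
    clearColor: view3d.environment.clearColor
}
```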
Layer Masks
Create a mask/stencil from a subset of renderables
In this case, you want to fill a single-channel value when some subset of renderables is present. This can be used by effects or other materials to perform special behaviors on overlap. For example, say you don't want to perform a particular effect on parts of the scene; those parts get marked as part of a layer, and then the texture can be used to mask out the effect.
The render target would be a single-channel texture format. It could be boolean if you only want it on or off, but at least a byte would enable a gradient mask.
The renderables would be everything in a particular layer of interest.
The material override would be the material without lighting, outputting the alpha value as the color.
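As a sketch (invented names as before):

```qml
// Hypothetical sketch; all type and property names are invented.
UserPass {
    id: maskPass

    // Single channel; one byte is enough for a gradient mask.
    output: RenderTargetTexture { format: RenderTargetTexture.R8 }

    // Everything in the layer of interest.
    renderables: SceneFilter { layers: maskLayer }

    // Unlit override: write the material's alpha as the mask value, e.g.
    //   fragOutput = vec4(alpha);
    materialOverride: AugmentMaterial {
        defines: [ "USERPASS_DISABLE_LIGHTING" ]
        fragmentOutput: "mask_output.frag"
    }
}
```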
Unlit Items Layer (+Item2Ds)
A layer that gets rendered after the lit scene has been post-processed. The unlit items are rendered using the depth buffer from the original lit pass.
The engine has the ability to render unlit items, which can be used as UI elements or gizmos within the 3D scene. These elements, being unlit, should not participate in any indirect lighting or potentially even post-processing effects, and in the case of Item2Ds, they should not be tone-mapped even if tone mapping is requested (since they are already sRGB when rendered). Currently the main pass always renders all renderables, lit or unlit, which can cause issues when doing some indirect lighting approximations in screen space in a post-processing effect. Instead, we can place all these unlit renderables into their own layer which doesn't get rendered in the main pass, but is instead deferred until after the post-processing effects have run. So it looks like this:
pre-passes -> main-passes -> Effect1 -> Effect2 -> unlit-items-pass
The render target is the output of the last effect. This is normally the View3D's actual render target, so either a texture or a window. This one may be challenging because there is a whole chain of logic behind how that target comes to be, but basically we want to render last.
Renderables are items that are marked as unlit, also including all Item2Ds (inline QML).
There is no material override; just use the original material, with maybe the exception that tone mapping should be performed for non-Item2D renderables (since they won't be tone-mapped again after this point).
There is a pipeline change though, in the sense that we need to make sure that we use the depth buffer from the original scene render. This likely means forcing a depth pre-pass and using that depth buffer as our depth read source. This is important because if the unlit renderable is obscured by something already in the scene, it should still be obscured. This does have implications for transparent content though!
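A hypothetical sketch (invented names; the scheduling and depth-texture hookup are exactly the challenging parts mentioned above):

```qml
// Hypothetical sketch; all type and property names are invented.
UserPass {
    id: unlitItemsPass

    // Render after the post-processing chain, into the View3D's final
    // render target (texture or window).
    output: UserPass.FinalTarget
    scheduling: UserPass.AfterPostProcessing

    // Unlit renderables plus all Item2Ds.
    renderables: SceneFilter {
        lit: false
        includeItem2Ds: true
    }

    // No material override, though tone mapping would need to be applied
    // here for non-Item2D renderables since nothing tone-maps them later.

    // Depth-test against the original scene's depth so unlit items are
    // still occluded correctly (forces a depth pre-pass in the main render).
    requiresDepthPrePass: true
    depthTest: true
    depthWrite: false
}
```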
Other Things
MRT
One of the things that has been glossed over in this approach is that "passes" do not always result in just a single texture render target. I mentioned MRT at the beginning, which refers to setting up a pass that writes to multiple render targets. It's not entirely clear how we would set this up from a QML perspective at this point. The way Passes work in Effects, there is always one explicit output texture at the end of an Effect, but it would sometimes be useful to target multiple output textures in a single intermediate pass.
Compute passes
Currently with the Pass API in Effects, we only support vertex and fragment shaders. It would be quite useful to alternatively support a compute shader via a compute pass. In the Effects API this is already a good fit, since we can just as easily operate on images/buffers via compute passes. For the proposed user pass API described above, though, it doesn't make sense to use a compute shader for a pass that uses renderables, because all of the infrastructure for those assumes you are using the traditional graphics pipeline. But certain intermediate steps of a pass could use compute shaders, so it's something that should be considered (so-called standalone passes).
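A sketch of what this could look like in the existing Effect Pass API, assuming a hypothetical Shader.Compute stage were added (no such stage exists today):

```qml
// Hypothetical: Shader.Compute does not exist today. Effect, Pass, and
// Shader are real Effect types; the compute stage is the proposed addition.
Effect {
    passes: [
        Pass {
            shaders: [
                Shader {
                    stage: Shader.Compute        // hypothetical stage
                    shader: "generate_data.comp" // illustrative compute shader
                }
            ]
        }
    ]
}
```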
Non-texture passes
Another thing to consider is that sometimes a pass could be generating data. For example, we currently don't have any way to operate on an instance buffer from the GPU side, and an interesting user pass would be modifying the instance buffer on the GPU using a compute pass.
Even further out of scope for now, but something worth considering, is passes that help us prepare what will be rendered in the main pass(es).
For Gerrit Dashboard: QTBUG-129739

| # | Subject | Branch | Project | Status | CR | V |
|---|---------|--------|---------|--------|----|---|
| 664510,18 | WIP: User Render Pass API Experiments | dev | qt/qtquick3d | NEW | -2 | 0 |