Qt / QTBUG-120519

Support behind-item blur in QtQuick


Details

    Description

      I want to implement a new QML type (call it FrameBufferBlitter for now) that can dynamically provide, in real time, the content currently present in the QSGRenderTarget — i.e., the content behind FrameBufferBlitter — as a source for MultiEffect. I have attempted to write it, but I have run into several issues and am unsure how QRhi would need to change to support this. Here are the problems I've encountered:

      1. The QtQuick scene graph is designed around staged rendering, so there are only limited points at which the QSGTexture exposed to MultiEffect through QSGTextureProvider can be updated. QSGBatchRenderer updates resources such as QSGTexture in prepareRenderPass, which runs before drawing, so MultiEffect ends up sampling the stale QSGTexture from the previous frame.

      2. QSGTextureProvider notifies MultiEffect of texture updates through the textureChanged signal, but that is not what we need here. As with QSGLayer, changes in the source texture should instead be detected while the QSGNode of MultiEffect is being drawn. We cannot use QSGLayer directly, because its updates also happen in the prepareRenderPass stage.

      3. I implemented FrameBufferBlitter with QSGRenderNode. This works on the RhiGles2 backend because the renderer has already submitted its commands to OpenGL by the time beginExternal runs, so we can call glBlitFramebuffer in QSGRenderNode::render to copy the specified area. On other backends such as Vulkan, however, commands are only recorded into a VkCommandBuffer, and at beginExternal nothing has been rendered into the QRhiRenderTarget yet, so the content cannot be read in QSGRenderNode::render.

      4. To grab content in QSGRenderNode, we need to know precisely where the node will be rendered in the target. Some of the required information is currently missing: for example, the viewport settings also affect the final target position, and QSGRenderNode does not expose them, so the node cannot accurately determine where it will be drawn.
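      To make points 3 and 4 concrete, here is a self-contained sketch of the coordinate math involved. All names (mapToTarget, toGlBlitRect) are mine, not Qt API, and the mapping deliberately ignores rotation and projection; the point is only that the viewport origin must enter the calculation, and that glBlitFramebuffer additionally needs a Y-flip because OpenGL framebuffer coordinates have a bottom-left origin:

      ```cpp
      #include <cstdio>

      // Minimal pixel rect with a top-left origin, like Qt item coordinates.
      struct Rect { float x, y, w, h; };

      // Map an item-local rect to framebuffer pixels, assuming a plain 2D
      // transform (device = scale * local + offset) plus a viewport origin.
      // A full solution would push the rect's corners through the complete
      // model-view-projection matrix instead.
      Rect mapToTarget(const Rect &local, float scaleX, float scaleY,
                       float offsetX, float offsetY,
                       float viewportX, float viewportY)
      {
          return { viewportX + offsetX + local.x * scaleX,
                   viewportY + offsetY + local.y * scaleY,
                   local.w * scaleX, local.h * scaleY };
      }

      // Convert a top-left-origin pixel rect into the bottom-left-origin
      // corner pairs that glBlitFramebuffer expects, for a target that is
      // targetHeight pixels tall.
      struct GlRect { int x0, y0, x1, y1; };
      GlRect toGlBlitRect(const Rect &r, int targetHeight)
      {
          return { int(r.x), int(targetHeight - (r.y + r.h)),
                   int(r.x + r.w), int(targetHeight - r.y) };
      }

      int main()
      {
          // A 300x200 item at (100, 150), rendered 1:1 into a viewport at
          // (0, 0) of a 500-pixel-tall target.
          Rect device = mapToTarget({100, 150, 300, 200}, 1, 1, 0, 0, 0, 0);
          GlRect gl = toGlBlitRect(device, 500);
          std::printf("%d %d %d %d\n", gl.x0, gl.y0, gl.x1, gl.y1); // 100 150 400 350
          return 0;
      }
      ```

      Without the viewport origin and the Y-flip, the blit copies the wrong region, which is exactly why QSGRenderNode would need to expose this information.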

      To address these issues, the following additions might be necessary. These are only my initial thoughts; if there is a better way, please let me know:

      1. Add QSGContext::createBlitter, similar to QSGLayer's factory, returning a QSGTexture object. The different scene graph renderers would need to support this special QSGTexture, and it should allow texture data to be uploaded to shaders during the render phase (preferably late in that phase, to leave time for copying the QSGRenderTarget content).

      2. Expose more information in QSGRenderNode, for example access to the current QSGRenderer.

      3. Add QRhiFrameBufferBlitter and QRhiResourceUpdateBatch::blitFrameBuffer(QRhiCommandBuffer cb). The blitter could be created in QSGRenderNode::prepare and kept around; then, in QSGRenderNode::render, calling QRhiFrameBufferBlitter::blit(QRhiRenderTarget source, QRectF sourceRect, QRhiTexture target, QRectF targetRect, QMatrix4x4 transform) would copy the specified region of the QRhiRenderTarget. The call would record a command into the QRhiCommandBuffer, with the blit implemented in each QRhi backend.
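      As a rough illustration of proposal 3, here is a compilable sketch of the call pattern. QRhiFrameBufferBlitter does not exist in Qt; the QRhi type names below are empty stand-ins so the sketch is self-contained, and the mock backend only counts the commands that a real backend would record natively (e.g. vkCmdBlitImage on Vulkan, glBlitFramebuffer on GL):

      ```cpp
      #include <cassert>

      // Empty stand-ins for the real QRhi types, only so this compiles.
      struct QRhiRenderTarget {};
      struct QRhiTexture {};
      struct QRhiCommandBuffer { int recordedBlits = 0; };
      struct QRectF { float x, y, w, h; };
      struct QMatrix4x4 {};

      // Hypothetical resource, per the proposal: created once (e.g. in
      // QSGRenderNode::prepare), then used during render() to enqueue a copy
      // of a region of the current render target into a texture.
      class QRhiFrameBufferBlitter
      {
      public:
          virtual ~QRhiFrameBufferBlitter() = default;
          virtual void blit(QRhiCommandBuffer *cb,
                            QRhiRenderTarget *source, const QRectF &sourceRect,
                            QRhiTexture *target, const QRectF &targetRect,
                            const QMatrix4x4 &transform) = 0;
      };

      // Mock backend: records that a blit command was enqueued, which is
      // enough to show the intended usage from a render node.
      class MockBlitter : public QRhiFrameBufferBlitter
      {
      public:
          void blit(QRhiCommandBuffer *cb, QRhiRenderTarget *, const QRectF &,
                    QRhiTexture *, const QRectF &, const QMatrix4x4 &) override
          {
              ++cb->recordedBlits; // real backend: emit the native blit here
          }
      };

      int main()
      {
          QRhiCommandBuffer cb;
          QRhiRenderTarget rt;
          QRhiTexture tex;
          MockBlitter blitter;
          // What FrameBufferBlitter's QSGRenderNode::render() might do:
          blitter.blit(&cb, &rt, {200, 200, 300, 200}, &tex, {0, 0, 300, 200}, {});
          assert(cb.recordedBlits == 1);
          return 0;
      }
      ```

      The key design point is that blit() only records a command, so it fits backends like Vulkan where nothing has actually been rendered at the time the node's render() runs.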

      Here is the QML usage I expect (I have done some verification of this, but it only works on the OpenGL backend; see the GitHub PR):

      
      Window {
          width: 500
          height: 500
          
          Text {
              anchors.centerIn: parent
              text: "'Ctrl+Q' quit"
              font.pointSize: 40
              color: "white"
              
              SequentialAnimation on rotation {
                  id: ani
                  running: true
                  PauseAnimation { duration: 1500 }
                  NumberAnimation { from: 0; to: 360; duration: 5000; easing.type: Easing.InOutCubic }
                  loops: Animation.Infinite
              }
          }
          
          FrameBufferBlitter {
              id: test
              width: 300
              height: 200
        // x and y are ignored here: anchors.centerIn takes precedence
        // x: 200
        // y: 200
              anchors.centerIn: parent
              // scale: 1.5
              // rotation: -45
              
              MultiEffect {
                  anchors.fill: test
                  source: test.content
                  autoPaddingEnabled: false
                  blurEnabled: true
                  blur: 1.0
                  blurMax: 64
                  saturation: 0.2
              }
          }
      }
      

      I cannot render the content used by MultiEffect off-screen. We cannot guarantee that users will always be able to pre-render that content off-screen: as in window compositors such as KWin, a Gaussian blur effect may be applied anywhere, and that flexibility needs to be preserved.

      Attachments



          People

            lagocs Laszlo Agocs
            zccrs JiDe Zhang

