In Symbian, the final stage of the video decoding pipeline (namely the MDF post-processor) writes its output to a graphics surface. This surface may or may not be CPU-mappable; when using hardware codecs the surface is typically not accessible to the CPU.
There are three possible ways in which the decoded video content may be consumed:
1. The graphics surface can be bound as the background surface of a window. Each time the post-processor writes a new frame to the surface, the compositor is notified and triggers a composition pass to make the new frame visible on the display.
2. A handle to the frame can be passed to the client application, which then makes the frame visible by rendering the video content into a window-bound surface. To do this with reasonable performance, the rendering must be hardware-accelerated, which in practice means that the video frame must be exposed to the client via an EGL handle of some type.
3. A handle to the frame can be passed to the client application, which, rather than rendering the content, performs some analysis of it - for example as part of an Augmented Reality application. If this analysis runs on the CPU, the video content clearly needs to be CPU-mappable; if it runs on the GPU, exposing the video frame via an EGL handle suffices.
QtMultimediaKit provides APIs which can potentially support all of the above use cases. Specifically, a given backend may choose to implement any of the following APIs:
- QVideoWidgetControl: the backend creates a QWidget into which video is rendered. This can support use case 1 above.
- QVideoWindowControl: the backend is provided with a native window handle and a target rectangle specifying the area into which it should render. This can support use case 1 above.
- QVideoRendererControl: the backend calls back to the client each time a new frame is available, providing each frame via the QVideoFrame abstraction. QVideoFrame can carry various types of video handle - from a raw pointer to CPU-accessible memory, through EGL handles, to platform-specific handles - so this control can support use cases 2 and 3 above.
The Symbian QtMultimediaKit backend currently supports only QVideoWidgetControl, and this is the API used by the Symbian QGraphicsVideoItem implementation to render video to a QGraphicsView. This path is also used by the QML Video and Camera elements.
The "widget" rendering path has several drawbacks, principally:
- The compositor supports only a limited set of transformations - namely rotation in 90-degree steps and linear scaling. This means that some of the transformations offered by the QGraphicsVideoItem API (free rotation, shear) cannot be supported on Symbian.
- Because video is rendered into a separate native window, two unsynchronized updates occur whenever the position or size of the item changes: (a) the transparent "hole" is moved within the Qt viewport window, and (b) the video window is moved or resized. Since these updates cannot be synchronized, visual artefacts appear during rapid item updates.
- Advanced video use cases such as applying post-processing or transition effects written in GLSL, or rendering video into 3D scenes, cannot be supported.
In order to fix these problems, the scope of this task is as follows:
- Provide an implementation of QVideoRendererControl capable of returning video frames to the client as EGLImageKHR, OpenGL ES texture, or VGImage handles.
- Modify the existing QGraphicsVideoItem implementation to use QVideoRendererControl by default, on devices on which it is supported.
Note that using the EGL rendering path, rather than the existing "widget" (straight-to-compositor) path, may incur additional computational/memory/power overhead, since the GLES/VG engine must be used in addition to the compositor. It would therefore be desirable for QGraphicsVideoItem to be able to switch back to the "widget" path when the following conditions are met:
- The video item is full screen.
- The QPaintDevice into which the video item is rendered is a QWidget (as opposed to, for instance, a QGLFrameBufferObject, which would be the case if a QML ShaderEffectItem was being applied to the video).
Note: see also MOBILITY-3084