Details
- Type: Bug
- Resolution: Unresolved
- Priority: P2: Important
- Fix Version/s: None
- Affects Version/s: 5.15.7
- Component/s: None
Description
Minimal example 1:
I reduced the problematic program to a single QML ShaderEffect component whose fragment shader source is loaded from a file containing a "shadow" effect (a minimal sketch follows this list):
- when the application runs as a systemd service (Type=simple, User=root), it is displayed WITHOUT the effect
- when the application is launched from a terminal, it is displayed WITH the effect
In both cases the environment is identical.
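A minimal sketch of what example 1 might look like, assuming Qt 5.15, where ShaderEffect.fragmentShader holds the GLSL source as a string; the source item, asset name, and shader body are placeholders, not the reporter's actual code:
{code}
import QtQuick 2.15

Item {
    width: 320; height: 240

    // hypothetical texture source for the effect
    Image { id: src; source: "logo.png"; visible: false }

    ShaderEffect {
        anchors.fill: parent
        property variant source: src
        // In Qt 5, fragmentShader is the GLSL source string itself;
        // the reporter loads it from a file at runtime. A trivial
        // stand-in for the "shadow" shader:
        fragmentShader: "
            varying highp vec2 qt_TexCoord0;
            uniform sampler2D source;
            uniform lowp float qt_Opacity;
            void main() {
                lowp vec4 c = texture2D(source, qt_TexCoord0);
                gl_FragColor = vec4(vec3(0.0), c.a) * qt_Opacity;
            }"
    }
}
{code}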
Minimal example 2:
I added a Button component with an onClicked handler that swaps the fragment shader file at run time. The default effect, set through the ShaderEffect property at startup, is "blur"; after launch I click the button, which installs a new shader file containing "shadow" (see the sketch after this list):
- when the application runs as a systemd service (Type=simple, User=root), the item is displayed WITHOUT the default "blur" effect
- after clicking the button, the item is displayed WITH the "shadow" effect
And vice versa: with the two shaders swapped, the behaviour inverts accordingly.
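A sketch of example 2 under the same assumptions; the file paths are placeholders, and loading a local file through XMLHttpRequest may require QML_XHR_ALLOW_FILE_READ=1 depending on the Qt configuration (reading the file in C++ and setting the property would work equally well):
{code}
import QtQuick 2.15
import QtQuick.Controls 2.15

Item {
    width: 320; height: 240

    Image { id: src; source: "logo.png"; visible: false }  // placeholder

    ShaderEffect {
        id: effect
        anchors.fill: parent
        property variant source: src

        // Read GLSL source from a file and assign it; re-assigning
        // fragmentShader makes the scene graph recompile the program.
        function loadShader(url) {
            var xhr = new XMLHttpRequest()
            xhr.onreadystatechange = function() {
                if (xhr.readyState === XMLHttpRequest.DONE)
                    effect.fragmentShader = xhr.responseText
            }
            xhr.open("GET", url)
            xhr.send()
        }

        // default effect: "blur" (path is a placeholder)
        Component.onCompleted: loadShader("file:///data/shaders/blur.frag")
    }

    Button {
        text: "shadow"
        onClicked: effect.loadShader("file:///data/shaders/shadow.frag")
    }
}
{code}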
System information:
- Architecture: IMX8MP
- Kernel: 5.15
- Qt: 5.15.7 (reproduces with any version of the effects module)
- GPU/driver: "Vivante GC7000UL", "Vivante Corporation", "OpenGL ES 3.1 V6.4.3.p4.398061"
- QPA platform plugin: eglfs
- environment:
QT_QPA_PLATFORM=eglfs
FB_MULTI_BUFFER=2
QT_QPA_EGLFS_FORCEVSYNC=1
QT_QPA_EGLFS_HIDECURSOR=1
QML2_IMPORT_PATH=/data/user/qt/qmlplugins
QT_IM_MODULE=qtvirtualkeyboard
QTWEBENGINE_DISABLE_SANDBOX=1
XDG_RUNTIME_DIR=/run/user/0
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/dbus_session_socket
QT_QPA_EGLFS_FORCE888=1
QT_QPA_EGLFS_KMS_ATOMIC=1
QT_QPA_EGLFS_KMS_CONFIG=/etc/kms.conf
QT_GSTREAMER_PLAYBIN_CONVERT=imxvideoconvert_g2d
QSG_RENDERER_DEBUG="render upload build change"
QT_LOGGING_RULES="qt.opengl.diskcache=true;qt.scenegraph.renderloop=true"
QT_DISABLE_SHADER_CACHE=1
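For completeness, a sketch of the kind of unit the report describes (Type=simple, User=root); the unit description, binary path, and environment file are assumptions, not taken from the report:
{code}
[Unit]
Description=QML shader application (placeholder name)

[Service]
Type=simple
User=root
# same environment as the interactive shell, e.g. collected in an env file
EnvironmentFile=/etc/qt-app.env
ExecStart=/usr/bin/myapp

[Install]
WantedBy=multi-user.target
{code}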
Additional information:
- the shader source code is identical at every stage of rendering in Qt
- BUT the compiled shader program BINARY differs between the systemd case and the terminal case (the binary produced under systemd is also smaller than the one produced from the terminal)
- I cannot disassemble the program binaries for comparison, because the vendor does not provide the required utilities
History:
I have already looked through the Qt sources and everything seems to be in order; as far as I can see, all accesses to the OpenGL context follow the documentation. At first I assumed there was an error somewhere in Qt, but by now it seems to me that this is a bug in the backend (driver) implementation. I do not have access to the source code of the vendor's implementation.
I have already debugged and instrumented many Qt modules (QtGraphicalEffects, QtDeclarative, QtGui, ...) to pin down the problem. So far the entire call chain, from setting the property in QML down to the direct call into the OpenGL shader-compilation API, looks correct to me. I also checked the shader source content at all stages of Qt rendering, including after compilation: it is "the same" (correct), up to the standard shader program headers that Qt prepends in qopenglshaderprogram.cpp.
I also disabled the shader disk cache with QCoreApplication::setAttribute(Qt::AA_DisableShaderDiskCache); this makes no difference. In addition, during the very first experiments I cleared both the program and shader caches on every restart of the application.
I captured the system logs in the modes described above and attach the application startup log from systemd; it is no different from the log of the same application started from the console (the working case, with the effect). Note that this log no longer contains information about binary sizes, since I disabled the cache handling, and the binary size is logged in one of the methods that depends on it.
I also introduced a 500 ms delay between the compilation of each successive shader; this did not change the behaviour (still no effect).
I also checked launching the executable itself from systemd indirectly. I created a script that starts the already-built application binary, and launched that script from the systemd service. This was meant to exercise the application startup policies under systemd, and it also made it possible to experiment with changing the service type to 'forking', which changes both the startup policies and the startup sequence itself. The experiment did not yield any significant result; the problem seems to be inherited from the parent to the child process, no matter how long the fork chain is. A hedged sketch of this wrapper variant follows.
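A sketch of the wrapper experiment, with hypothetical paths: the service starts a script, and the script backgrounds the real binary (which is what Type=forking expects):
{code}
# /etc/systemd/system/myapp.service (fragment; paths are placeholders)
[Service]
Type=forking
User=root
ExecStart=/usr/local/bin/start-myapp.sh

# /usr/local/bin/start-myapp.sh
#!/bin/sh
/usr/bin/myapp &
{code}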