Details
Description
As noted in a comment in renderer.cpp, the QOpenGLContext created in the initialize() method has no format specification at all; it is left entirely to the default format. Ideally it would take its format from the Qt3DWindow (or from the actual rendering surface), but that information does not appear to be available in this context, at least not directly on the renderer object itself.
In any case, in the meantime it would be highly preferable to reuse the format logic from Qt3DWindow so that the best available GL version is requested. As it stands, the context falls back to OpenGL 2.1 (at least on OS X) instead of an OpenGL 3 core profile. Here is a patch that does this:
diff --git a/src/render/backend/renderer.cpp b/src/render/backend/renderer.cpp
index 825dfe9..d1ecdf2 100644
--- a/src/render/backend/renderer.cpp
+++ b/src/render/backend/renderer.cpp
@@ -350,6 +350,20 @@ void Renderer::initialize()
     ctx = new QOpenGLContext;
     ctx->setShareContext(qt_gl_global_share_context());

+    QSurfaceFormat format;
+#ifdef QT_OPENGL_ES_2
+    format.setRenderableType(QSurfaceFormat::OpenGLES);
+#else
+    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
+        format.setVersion(4, 3);
+        format.setProfile(QSurfaceFormat::CoreProfile);
+    }
+#endif
+    format.setDepthBufferSize(24);
+    format.setSamples(4);
+    format.setStencilBufferSize(8);
+    ctx->setFormat(format);
+
     // TO DO: Shouldn't we use the highest context available and trust
     // QOpenGLContext to fall back on the best lowest supported ?
     const QByteArray debugLoggingMode = qgetenv("QT3DRENDER_DEBUG_LOGGING");
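Until something like the patch above lands, one possible application-side workaround is a sketch along these lines: since Qt 5.4, QSurfaceFormat::setDefaultFormat() makes a format the default for every QOpenGLContext and window created afterwards, so a context built without an explicit format (as in Renderer::initialize()) should pick it up. This is an assumption about the workaround, not part of the reported fix, and the main() shown here is hypothetical:

```cpp
// Hypothetical workaround: request a sensible default surface format
// application-wide before any window or context is created.
#include <QGuiApplication>
#include <QSurfaceFormat>

int main(int argc, char **argv)
{
    QSurfaceFormat format;
#ifdef QT_OPENGL_ES_2
    format.setRenderableType(QSurfaceFormat::OpenGLES);
#else
    // Mirror the version/profile choice from the patch; on desktop GL
    // this requests a 4.3 core profile context.
    format.setVersion(4, 3);
    format.setProfile(QSurfaceFormat::CoreProfile);
#endif
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setSamples(4);

    // Must be called before constructing QGuiApplication / any windows
    // so that all later-created contexts inherit this format.
    QSurfaceFormat::setDefaultFormat(format);

    QGuiApplication app(argc, argv);
    // ... create the Qt3D window / scene here ...
    return app.exec();
}
```

This does not fix the renderer itself, but it avoids the 2.1 fallback for applications that can control their main().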