QPixmapCache does just what it says: it's an LRU cache of QPixmaps and only QPixmaps. The docs have this note: "QPixmapCache is only usable from the application's main thread. Access from other threads will be ignored and return failure." Its cache limit defaults to 10 MiB but can be adjusted via setCacheLimit(). The users of this class include QIcon, QIconLoader, QPixmap, QBrush, QPainter, QCommonStyle, various concrete styles, QGraphicsItem and QItemDelegate (and I wonder if I missed any).
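To make those semantics concrete, here's a minimal standard-C++ model of what QPixmapCache does: string keys, LRU eviction, a byte-cost limit. This is an illustrative sketch, not Qt's actual implementation, and all names are made up:

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Toy model of QPixmapCache's semantics (not Qt's code): string-keyed,
// LRU-evicted, bounded by a total cost in bytes.
class PixmapCacheModel
{
public:
    explicit PixmapCacheModel(std::size_t limit) : m_limit(limit) {}

    // insert() may evict least-recently-used entries to respect the limit,
    // just as QPixmapCache::insert() does when the cache is full.
    void insert(const std::string &key, std::size_t cost)
    {
        remove(key);
        m_lru.push_front({key, cost});
        m_index[key] = m_lru.begin();
        m_used += cost;
        while (m_used > m_limit && !m_lru.empty()) {
            const Entry &victim = m_lru.back();
            m_used -= victim.cost;
            m_index.erase(victim.key);
            m_lru.pop_back();
        }
    }

    // find() refreshes the entry's recency, like QPixmapCache::find().
    bool find(const std::string &key)
    {
        auto it = m_index.find(key);
        if (it == m_index.end())
            return false;
        m_lru.splice(m_lru.begin(), m_lru, it->second); // move to front
        return true;
    }

private:
    struct Entry { std::string key; std::size_t cost; };

    void remove(const std::string &key)
    {
        auto it = m_index.find(key);
        if (it == m_index.end())
            return;
        m_used -= it->second->cost;
        m_lru.erase(it->second);
        m_index.erase(it);
    }

    std::size_t m_limit;
    std::size_t m_used = 0;
    std::list<Entry> m_lru;
    std::unordered_map<std::string, std::list<Entry>::iterator> m_index;
};
```

The real API has the same shape: insert() may silently evict older entries, and a later find() simply reports a miss for anything that was evicted.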
In qtdeclarative, qquickpixmapcache.cpp contains a pile of classes that do much more.
- There's a cache of QQuickPixmap instances (via the internal QQuickPixmapStore's QHash<QQuickPixmapKey, QQuickPixmapData *>)
- The cache limit is hard-coded to 2 MiB... but that doesn't seem to apply to all the images in the cache, only those that come from a QQuickTextureFactory. (That's not clear until you look at the implementation of QQuickPixmapData::cost().) I'm not quite sure what that means, but I suspect the code mainly cares about conserving GPU memory; it doesn't seem to manage system memory usage the way QPixmapCache does.
- A QQuickPixmap may store either a QPixmap or a QImage, and may also represent a texture
- We have code here to fetch images from web servers (but only images, not other resources).
- We can fetch and render images either in the main thread or in a worker thread (see QQuickPixmapReaderThreadObject and QQuickPixmapReader::run()).
- It uses QImageReader, which in turn uses any QImageIOHandler subclass (that's how Qt Quick gets support for all of Qt's image formats).
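On the cost question in particular, my reading of QQuickPixmapData::cost() is roughly this: only entries backed by a texture factory report a meaningful cost, so the 2 MiB limit effectively bounds texture memory rather than system memory. A hedged model of that accounting (hypothetical names, not Qt's code):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of the accounting I read out of
// QQuickPixmapData::cost(): an entry only "costs" anything against the
// 2 MiB limit when it is backed by a texture factory; a plain
// QImage/QPixmap entry contributes nothing, so the limit effectively
// bounds GPU memory, not system memory.
struct CacheEntryModel
{
    bool hasTextureFactory;    // would be a QQuickTextureFactory * in Qt
    std::size_t textureBytes;  // what the factory would report for its texture
};

std::size_t entryCost(const CacheEntryModel &e)
{
    return e.hasTextureFactory ? e.textureBytes : 0;
}
```

That would explain why non-texture images in the cache appear to be unaffected by the limit.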
Maybe we should consider having a common architecture for image loading and caching in both Qt Quick and Widgets. In a Qt Quick-like architecture, the cache class is the entry point for loading: it first checks whether it already has the rendered image, and only falls back to QImageReader on a miss. It doesn't have to be that way, though.
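The "cache as entry point" shape can be sketched like this (hypothetical names; in the real design the decoder callback would wrap QImageReader):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical sketch of a cache that is also the loading entry point:
// callers ask the cache for an image, and the cache consults its store
// before falling back to the decoder (QImageReader, in the real design).
// std::string stands in for the decoded image data.
struct ImageLoadingCache
{
    using Decoder = std::function<std::string(const std::string &url)>;

    explicit ImageLoadingCache(Decoder decode) : m_decode(std::move(decode)) {}

    // Returns the cached result if present; otherwise decodes and caches it.
    const std::string &get(const std::string &url)
    {
        auto it = m_store.find(url);
        if (it == m_store.end())
            it = m_store.emplace(url, m_decode(url)).first;
        else
            ++hits;
        return it->second;
    }

    int hits = 0; // exposed only so the sketch is easy to observe

private:
    Decoder m_decode;
    std::unordered_map<std::string, std::string> m_store; // url -> "image"
};
```

The alternative is the QPixmapCache way, where loading and caching are the caller's problem and the cache is just passive storage.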
It's possible to write a QImageIOHandler subclass that needs to do complex rendering: that's how we treat SVG and PDF as image formats, but it also happens when scaling, cropping or rotating ordinary raster formats. That rendering should happen in a worker thread to avoid blocking the UI. So the fact that QPixmapCache only works on the main thread becomes a limitation for widget applications if we move toward an architecture where rendering is done on another thread and the result is cached immediately.
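The threading shape I have in mind can be sketched with std::async standing in for Qt's worker thread; the names and the fake decode step are made up:

```cpp
#include <cassert>
#include <future>
#include <string>

// Sketch of the threading shape described above, with std::async as a
// stand-in for Qt's worker thread: the expensive decode/render runs off
// the main thread, and only the finished result comes back to be cached.
// (The worker could not touch QPixmapCache itself: it is main-thread-only,
// which is exactly the limitation discussed above.)
std::string expensiveDecode(const std::string &url)
{
    // placeholder for SVG/PDF rendering or raster scaling/cropping/rotating
    return "frame(" + url + ")";
}

std::string loadOffMainThread(const std::string &url)
{
    std::future<std::string> f =
        std::async(std::launch::async, expensiveDecode, url);
    // ... the main thread would keep pumping its event loop here ...
    return f.get(); // result arrives; now it can be cached on this thread
}
```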
When it comes to loading resources from the network, there are conflicting goals if we want to refactor and simplify:
- loading any kind of resource could go through common code (QML, JS libraries, HTML, Markdown, plain text, images and various other file types could all potentially come from web servers)
- network loading always needs to be threaded (which was probably the initial motivation for the asynchronous property on Qt Quick's Image)
- both widget and quick applications need to load diverse kinds of resources from the network
- any long-lived resource that is loaded from the network should probably be cached to avoid doing that again later
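A minimal sketch of what such common code might look like: one type-agnostic loader that caches raw bytes plus a MIME type and leaves interpretation (image decode, QML parse, ...) to the caller. All names here are hypothetical:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical sketch of the "common code" idea: one loader that fetches
// any resource kind as bytes plus a MIME type, caches long-lived results,
// and leaves interpretation to the caller.
struct Resource
{
    std::string mimeType;
    std::string bytes;
};

class ResourceLoader
{
public:
    using Fetch = std::function<Resource(const std::string &url)>;

    explicit ResourceLoader(Fetch fetch) : m_fetch(std::move(fetch)) {}

    // In a real design the fetch would run on a network thread; the point
    // here is only the shared cache-then-fetch shape, independent of what
    // kind of resource is being loaded.
    Resource load(const std::string &url)
    {
        auto it = m_cache.find(url);
        if (it == m_cache.end())
            it = m_cache.emplace(url, m_fetch(url)).first;
        return it->second;
    }

private:
    Fetch m_fetch;
    std::unordered_map<std::string, Resource> m_cache;
};
```

Image loading would then be one consumer of this layer rather than owning its own private network code, as qquickpixmapcache.cpp effectively does today.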
So it seems we ought to think holistically about the architecture for threaded image loading and caching of the results, and about how far that can share code with loading (and caching) resources from network servers.
One way would be to start moving some of the QtQuick-specific features found in qquickpixmapcache.cpp into appropriate places in qtgui and qtcore.
We should fix the ongoing animated GIF debacle while we're at it. That's hard because animated movies come in all sizes: the strategy for rendering a tight loop of frames for a little animated icon needs to be different from rendering an animated GIF so large that the full set of decoded frames is too big for system memory, let alone GPU memory. Qt's network classes have the goal of zero-copy networking; likewise, if you decode all frames of a tiny animated GIF into a texture atlas, you shouldn't need to keep them in system memory after uploading. So there are at least three cases:
- a tiny animation: decode every frame into a texture atlas and drop the system-memory copies once they're uploaded
- an in-between case: you can't afford to cache all the frames as textures, but you can afford to cache them in system memory
- a large animation (say, a local file): decode one frame at a time, upload a texture, display it, then decode the next, without caching any frames
The architecture for loading and caching has to be flexible enough to accommodate all three cases, both when the original file is local and when it's on a web server.
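Those three cases could be captured by a small policy decision; the thresholds and names below are entirely made up, just to show the shape:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the three-way decision above (hypothetical names and budgets):
// given the decoded size of one frame and the frame count, pick where, if
// anywhere, the decoded frames should live.
enum class FrameCachePolicy {
    AllFramesInTextures,  // tiny animation: atlas it, drop the CPU copies
    FramesInSystemMemory, // mid-size: too big for GPU, fine for RAM
    DecodeOnDemand        // huge: stream one frame at a time, cache nothing
};

FrameCachePolicy choosePolicy(std::size_t frameBytes, std::size_t frameCount,
                              std::size_t gpuBudget, std::size_t cpuBudget)
{
    const std::size_t total = frameBytes * frameCount;
    if (total <= gpuBudget)
        return FrameCachePolicy::AllFramesInTextures;
    if (total <= cpuBudget)
        return FrameCachePolicy::FramesInSystemMemory;
    return FrameCachePolicy::DecodeOnDemand;
}
```

A real version would have to cope with formats where the frame count isn't known up front, which is part of what makes the GIF case hard.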