In some cases, the `@nx` high-DPI pixmap versions of a QIcon are not used even though they would produce a better visual result:
- For a QIcon with the pixmaps 64x64@1x and 64x64@2x available: when the icon is displayed at a device-independent size of 64x64 on a display with scaling set to 125%, the 64x64@1x version is upscaled to 80x80 native pixels. Downscaling the 64x64@2x version (128x128 native pixels) instead gives much better results.
- For a QIcon with the pixmaps 24x24@1x and 24x24@2x available: when the icon is displayed at a device-independent size of 22x22 on a display with scaling set to 100%, the 24x24@1x version is downscaled to 22x22 native pixels with a rather blurry result (due to the bilinear scaling being used). Downscaling the 24x24@2x version instead yields a crisp result.
It seems to me that the algorithm responsible for selecting QIcon pixmaps should optimize for the following:
- A high-quality end result (avoid upscaling, and avoid downscaling by only a small factor).
- Prefer pixmaps with the requested level of detail.
Maybe the following two-step selection would be appropriate:
1. Look for the first available device-independent size larger than the requested paint size (if none is found, use the largest available device-independent size). Then select the first `@nx` version for that size whose native size is larger than some arbitrary factor (to avoid downscaling by only a small ratio) times the requested native paint size (if none is found, use the largest `@nx` version for that size).
2. If step 1 failed (the returned pixmap would be upscaled), ignore the level-of-detail constraint: select the first native pixmap size larger than the requested native paint size times the arbitrary factor (if none is found, use the largest native size). Disambiguate between pixmaps of that native size by choosing the first one with an `@nx` factor larger than or equal to the DPR.