We are seeing customers build increasingly complex embedded systems in which three main use cases, and various combinations of them, are only partially supported by Qt:
- A single SoC driving several displays, e.g. in automotive an instrument cluster, center main screen, head-up display and passenger screens.
- Multiple SoCs (or virtual machines) addressing a single shared screen, sometimes on overlays or on different parts of the screen.
- Headless use cases where the screen is remote on another device
- Combinations of all the above, especially in the automotive industry, which is pushing the envelope on these hardware and software capabilities.
This drives requirements on tooling and frameworks:
- Design and developer tools need to support creating multi-screen solutions where context, focus and actual content may move fluently from one physical display to another. The displays may be based on different technologies and have different sizes, resolutions and aspect ratios.
- The gesture and input framework needs to be flexible enough to support environments where context and focus may change, so that user input (including audio) is directed to the correct entity / application in the software framework.
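As a rough illustration of the kind of focus-aware dispatching such an input framework needs, here is a minimal sketch. All names (`InputRouter`, `set_focus`, `dispatch`) are hypothetical and do not correspond to any Qt API; the point is only that routing depends on per-display focus state that can change at runtime.

```python
from dataclasses import dataclass, field

@dataclass
class InputRouter:
    """Hypothetical sketch: route input events to whichever application
    currently holds focus on a given display. Not a Qt API."""
    focus: dict = field(default_factory=dict)     # display id -> app id
    handlers: dict = field(default_factory=dict)  # app id -> callback

    def set_focus(self, display, app):
        # Focus may move between displays/applications at any time.
        self.focus[display] = app

    def dispatch(self, display, event):
        """Deliver an event to the focused app on this display.
        Returns True if a handler consumed the event."""
        app = self.focus.get(display)
        if app is None or app not in self.handlers:
            return False  # no focused consumer; event is dropped
        self.handlers[app](event)
        return True
```

In a real system the "event" would carry device, gesture and modality information, and focus changes would themselves be driven by gestures or by the application manager; the sketch only shows the routing indirection.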
Related technologies and building blocks include:
- Application manager and launcher
- Gesture and input frameworks
- Containers, hypervisors, virtual machines
- VNC, WebGL
- Qt Remote Objects
- BT, BTLE
- TCP/IP over cellular, IoT network, WLAN or cable (both USB and 10BASE-T)
- M2M protocols: KNX, MQTT, DDS
- WebSockets / REST APIs
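Whichever transport from the list above is used to connect SoCs (or a headless device to its remote screen), the common requirement is a framed message channel on top of a byte stream. The following stdlib-only sketch shows one such framing over plain TCP sockets; the 4-byte length-prefix scheme and JSON payload are our assumptions for illustration, not the wire format of Qt Remote Objects or any of the listed protocols.

```python
import json
import socket
import struct

def send_msg(sock, obj):
    """Length-prefix a JSON payload (4-byte big-endian header) so the
    peer can reassemble whole messages from the TCP byte stream."""
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    """Read exactly one framed message from the socket."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock, n):
    # recv() may return fewer bytes than asked; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf
```

Protocols such as MQTT, DDS or WebSockets provide this framing (plus topics, QoS, discovery, etc.) out of the box; the sketch only makes explicit what the lowest layer has to do.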