Priority: P2: Important
Affects Version/s: 5.15
Fix Version/s: None
Component/s: GUI: Complex Input methods
Input method (IM) events are sent for the purpose of asking the application for information in return. I have doubts that this round-trip is the best way to do it.
It's like taking a survey with only a predefined set of questions: if you ask whether X is true, the answer doesn't tell you the whole picture about the set of facts [X !Y Z] at a point in time, even if it does offer a clue. (A bit like a political survey trying to determine one's party affiliation by asking only two questions about what the person believes.)
The context is a PDF view on an iPad, or any other touchscreen: the user naturally expects to be able to select text and copy it to the clipboard. It's not a conventional text-input scenario, but it needs to work the same way. The only way we have in Qt to control those platform-level text selection handles is to play the Input Methods game.
Simply asking the PDF view "where is the cursor", as the entry point for the whole conversation, doesn't do enough to help the PDF view position the rightmost text selection handle. Until that moment it wasn't really expecting to have a cursor; and in order to pretend that there is one, it has to do hit-testing; which is based on a position; which happens to come through as a QVariant along with an inputMethodQuery. It's completely backwards from what I, as the developer of that feature, would find intuitive. I know there are selection handles available; I want to detect the long-press myself, and ask the platform to position those handles at the right places. Instead the platform intercepts the long-press and starts asking inane questions one at a time by sending events, never letting me see the big picture of what's going on. And instead of being able to make imperative calls, I have to call QGuiApplication::inputMethod()->update() to ask the platform to ask me certain questions again!
I can't control what's on the popover edit menu either, because there's no API for popping up that kind of menu in the first place.
In summary it's too magic, and magic always breaks eventually.
It's also similar to the situation with QScroller/QScrollEvent: each is only useful with the other, and it's not the straightforward, direct way of implementing smooth scrolling; it was likely done that way to avoid widget API changes and perhaps to work around limitations on some platforms.
I know platforms have limitations, and applications that don't want to deal with input method details themselves can be written more simply if input methods just work. It still seems like maybe we could come up with something better than using events for round-trip queries and responses, though. (X11 is like that too, but that's because it's a wire protocol.)
So solving this is a matter of coming up with some sort of compromise: how much control can we give the application developer vs. how much control will all the platforms let us have.
I want to implement text-selection handles on Linux. And of course there I could do anything, since I will have to do everything, because nothing is already done. But I'm afraid that working with the existing communication channel is going to be just as frustrating on that end as it is on the application end.