Currently the timeline renderer uses imperative painting to draw the visible range. This painting has to be redone every time the timeline is scrolled. As the renderer is quite fast, this is only a problem in the most extreme cases right now. However, the complexity of the displayed data won't decrease, and the current approach doesn't scale much further.
By using QSGNode and friends, the events from the model could be transformed into geometry that the scene graph renders in a hardware-accelerated way. On top of that, we can keep the nodes around instead of redrawing every time the timeline is moved. The actual display can then be done by a vertex shader that receives the X and Y offsets as well as the current scale as parameters.
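To illustrate the idea, here is a minimal, Qt-free sketch of the per-vertex transform such a shader would apply. The names (`toScreen`, `Vec2`) and the exact parameterization are assumptions for illustration, not existing API: the point is that event geometry is built once in trace coordinates, and scrolling or zooming only changes the uniforms, never the cached geometry.

```cpp
// Hypothetical sketch of the vertex transform a scroll/zoom shader
// could apply. In a real scene graph material this would live in GLSL;
// here it is plain C++ so the math is easy to inspect.

struct Vec2 { float x, y; };

// offsetX/offsetY correspond to the current scroll position and
// scaleX to the zoom factor, passed as shader parameters (uniforms).
Vec2 toScreen(Vec2 vertex, float offsetX, float offsetY, float scaleX)
{
    // The X (time) axis is panned and zoomed; the Y (row) axis only pans.
    return { (vertex.x - offsetX) * scaleX, vertex.y - offsetY };
}
```

Because the geometry itself never changes, a scroll event reduces to updating three uniforms, which is exactly why the nodes can be kept around between frames.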
There are probably limits to that approach. Loading the whole trace onto the GPU and running shader programs on it is probably slower than selecting only parts of the trace for imperative drawing in some situations, especially at high zoom factors, with large traces, or on weak GPUs. To handle that, the TimelineRenderer may need to treat the scene graph as a cache of the model, adding and removing nodes as needed, but not necessarily on every scroll event.
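The caching policy could look something like the following sketch. Everything here is an assumption: the `Event` struct, the `nodesToKeep` helper, and the half-window margin are placeholders standing in for whatever the TimelineRenderer actually uses, and no Qt types are involved. The idea is to keep nodes for a window somewhat larger than the visible range, so small scrolls reuse existing nodes and only larger jumps trigger node creation and removal.

```cpp
#include <map>
#include <set>

// Placeholder for a model event occupying [start, end) on the time axis.
struct Event { long long start, end; };

// Returns the indices of events that should currently have scene graph
// nodes, given the visible range plus half a window of slack on each
// side. Events outside the padded window can have their nodes dropped.
std::set<int> nodesToKeep(const std::map<int, Event> &events,
                          long long rangeStart, long long rangeEnd)
{
    const long long margin = (rangeEnd - rangeStart) / 2;
    std::set<int> keep;
    for (const auto &entry : events) {
        const Event &e = entry.second;
        // Keep any event overlapping the padded window.
        if (e.end >= rangeStart - margin && e.start <= rangeEnd + margin)
            keep.insert(entry.first);
    }
    return keep;
}
```

On a scroll, the renderer would diff the current node set against `nodesToKeep` and only touch the scene graph when the sets differ, which keeps most scroll events free of node churn.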