Monday 13 December 2010

SceneGraph in toolkits

SceneGraph architectures can be found in quite a few of the recently introduced toolkit architectures: Clutter with its latest rework to make its SceneGraph as retained as possible, Qt with its new SceneGraph architecture, and Mozilla's Layers architecture, which borrows the same idea. Earlier this was the way games rendered themselves, but now the model is starting to define how toolkits render our 2D content.
The first idea behind the scene graph is to take full benefit of GPU rendering. Instead of having each application do its own rendering, in a SceneGraph the compositing is done in retained mode, constructing a tree structure of the scene to be visualized. Still, the SceneGraph paradigm and problem setting for toolkits and widgets is quite different compared to games. When we talk about 2D UIs we have no complex geometry, fancy viewport transformations or dynamic lighting; we are mainly talking about getting our textures displayed on the screen and about blending them. This difference also puts requirements on the actual traversal algorithm and renderer, to ensure that a 2D toolkit can get everything out of the GPU. And the problem statement aims at optimizing performance rather than managing the complexity of the view.
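To make this concrete, here is a rough sketch of what such a retained tree might look like. The node names and the C++ are mine, not from any of the toolkits above:

```cpp
#include <memory>
#include <vector>

// A made-up retained-mode scene tree: the toolkit keeps this structure
// alive between frames instead of letting every widget issue its own
// draw calls from paint handlers.
struct Node {
    std::vector<std::unique_ptr<Node>> children;
    virtual ~Node() = default;
    virtual void render() {}        // leaves draw; inner nodes usually don't
    void traverse() {               // depth-first walk done by the renderer
        render();
        for (auto& child : children)
            child->traverse();
    }
};

// Typical 2D leaf: a quad with a texture on it, i.e. a widget's pixels.
struct TextureNode : Node {
    unsigned textureId = 0;         // e.g. a GL texture handle
    float x = 0, y = 0, w = 0, h = 0;
    void render() override { /* submit one textured quad at (x, y) */ }
};
```

Because the whole tree is known up front, the renderer, not the application, decides when and in what order anything is drawn.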

There is one particular thing the GPU cannot execute efficiently: changing state. What GPUs are designed to do is load in a shader program and do their stuff. Simple. With the imperative way of rendering, where the application constructs the view by hierarchically rendering its scene, you are bound to make a lot of state changes, since every single application is in charge of rendering itself, widget by widget and line by line, constantly loading content onto the GPU. So what we want to do is feed the GPU as much data as possible at once and let it do its job, by grouping our geometry, shader and texture data together.
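Roughly, the renderer could group like this. The DrawCommand structure and the bind/flush helpers here are hypothetical, just to show the sorting idea:

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

// Hypothetical draw command collected during the tree traversal.
struct DrawCommand {
    unsigned shaderId;
    unsigned textureId;
    // ...vertex data for one quad would live here
};

void renderFrame(std::vector<DrawCommand>& commands) {
    // Sort so that commands sharing a shader and texture end up
    // adjacent; this turns "one state change per widget" into
    // "one state change per batch".
    std::sort(commands.begin(), commands.end(),
              [](const DrawCommand& a, const DrawCommand& b) {
                  return std::tie(a.shaderId, a.textureId) <
                         std::tie(b.shaderId, b.textureId);
              });

    unsigned boundShader = ~0u, boundTexture = ~0u;   // nothing bound yet
    for (const auto& cmd : commands) {
        if (cmd.shaderId != boundShader) {
            // bindShader(cmd.shaderId);   // expensive, now done rarely
            boundShader = cmd.shaderId;
        }
        if (cmd.textureId != boundTexture) {
            // bindTexture(cmd.textureId); // likewise
            boundTexture = cmd.textureId;
        }
        // appendToBatch(cmd);             // cheap, done per quad
    }
    // flushBatch();                       // one big draw per state group
}
```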
The most important data (or node) from a toolkit's point of view is the appearance or geometry node, which defines our graphics primitives: the vertex data and texture information required for rendering a widget. This is data we want to keep not only per frame but for the lifetime of the application, to minimize state changes.
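A sketch of such a geometry node, assuming an OpenGL ES 2.0 style API: the vertex buffer is created once and re-uploaded only when the widget's appearance actually changes.

```cpp
#include <GLES2/gl2.h>
#include <vector>

// Hypothetical geometry node: owns its GPU-side vertex buffer for the
// lifetime of the widget instead of re-sending the data every frame.
struct GeometryNode {
    std::vector<float> vertices;   // x, y, u, v per corner of the quad
    GLuint vbo = 0;
    bool dirty = true;             // set when the widget's look changes

    void sync() {
        if (vbo == 0)
            glGenBuffers(1, &vbo);
        if (dirty) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER,
                         vertices.size() * sizeof(float),
                         vertices.data(), GL_STATIC_DRAW);
            dirty = false;         // nothing uploaded again until a real change
        }
    }
};
```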

The second important item is the drawing itself. Although GPUs are designed for drawing, unnecessary drawing is naturally something we want to avoid. An easy way to improve in this area is to draw less, or to draw only what is visible on the screen, and this is exactly what the SceneGraph architectures aim for. With its tree structure the SceneGraph model can use the z-buffer smartly to avoid overdraw. By this I do not mean the run-time clipping and depth testing that the GPU can do on its own, but rather limiting the objects actually sent to the GPU. This is done by selecting the rendering order smartly (front-to-back when possible) or, for example in Mozilla's Layers case, by placing the transparent areas into their own layer. The challenge is of course that modern UIs involve a lot of transparency, which forces us back to back-to-front rendering.
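One common way to combine the two orders, sketched here with a made-up per-item opaque flag: opaque content goes front-to-back with the depth test rejecting the overdraw, then transparent content is blended back-to-front on top.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical renderable collected from the scene tree.
struct Item {
    float z;        // depth assigned during traversal (smaller = closer)
    bool opaque;    // true if the item has no translucent pixels
    // ...geometry and texture references would live here
};

void drawWithDepthTest(const Item&) { /* depth write on, blending off */ }
void drawWithBlending(const Item&)  { /* depth write off, blending on  */ }

void renderFrame(std::vector<Item> items) {
    // Sort by depth, nearest first.
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) { return a.z < b.z; });

    // Pass 1: opaque items front-to-back, so the z-buffer rejects
    // every pixel that is already covered (less overdraw).
    for (const auto& item : items)
        if (item.opaque)
            drawWithDepthTest(item);

    // Pass 2: transparent items back-to-front, since blending needs
    // what is behind an item to already be on screen.
    for (auto it = items.rbegin(); it != items.rend(); ++it)
        if (!it->opaque)
            drawWithBlending(*it);
}
```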

There are of course challenges as well. SceneGraphs balance between memory consumption (caching too much) and performance. Another challenge is the varying nature of different GPUs and drivers, which sometimes behave a little differently and are good at different things.
For toolkit SceneGraphs the target itself is also a challenge: are they only aiming for optimized 2D rendering, or would they also like to serve games?