Today I want to delve a bit further into graphics programming and look into one specific effect we used in Roche Fusion: Pixelation.
We use the effect in the game as a visual cue for when the player takes damage and their health falls to dangerous levels.
Specifically, the post-processing effect we apply pixelates the edges of the screen significantly, while leaving the center, and to some degree the bottom corners, mostly untouched.
This allows the player to continue playing and to read their HUD, while giving a clear and unmistakable indication of danger.
Of course, the effect could also be used for other purposes, such as transitions between levels, or even a major part of the art style.
The idea of the effect is fairly simple: we want to enlarge the pixels on the screen. Of course, we cannot enlarge all pixels at the same time, so what we really want to do is draw fewer pixels than before, and draw those fewer pixels larger.
Note:
You can find a stand-alone example project, powered by my own C# OpenGL graphics library and including the algorithms discussed below, on GitHub here.
A simple approach
Possibly the easiest thing we can do to achieve a pixelation effect is to sample our rendered image – or any texture in general – with a step function, instead of a linear function.
We can do this in a simple fragment shader in three easy steps:
- scale uv-coordinates from the range [0, 1] to the range [0, N], where N is the number of pixels we would like per row/column,
- round the scaled uv-coordinates down,
- scale the uv-coordinates back to [0, 1].
The code for this is equally simple:
vec2 newUV = floor(oldUV * N) / N;
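Embedded in a complete fragment shader, this might look like the following sketch (the uniform and variable names here are placeholders, not taken from the actual project):

```glsl
#version 150

uniform sampler2D diffuseTexture; // the rendered image we want to pixelate
uniform float pixelCount;         // N: desired number of pixels per row/column

in vec2 fragmentUV;               // interpolated uv-coordinates in [0, 1]

out vec4 fragColor;

void main()
{
    // scale to [0, N], round down, and scale back to [0, 1]
    vec2 newUV = floor(fragmentUV * pixelCount) / pixelCount;

    fragColor = texture(diffuseTexture, newUV);
}
```

Note that flooring samples each enlarged pixel at one of its corners; adding half a cell, as in `(floor(uv * N) + 0.5) / N`, samples the cell centers instead, which can look slightly more stable as N changes.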
If we use this simple algorithm and sample our image with the new uv-coordinates, we might achieve something like the following image.
We can also easily set the size of the pixels using shader uniforms, changing the effect on the fly. The following video shows how that might look.
This is pretty cool, especially given how simple and easy it is.
This algorithm works great when we want the entire screen or image to have the same degree of pixelation.
However, if we try to use it for different degrees of pixelation on different parts of the image, we encounter visual artifacts which may be undesirable. They certainly would not have worked in Roche Fusion.
See below for a video of how the resulting image looks if we apply different pixel-scales across the image.
Note how due to the increasing pixelation the pixel-grid becomes curved and almost appears to rotate in three dimensions. Further, within the enlarged pixels there are bright streaks perpendicular to the gradient of pixel-scale.
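For illustration, making the pixel count itself a function of the fragment position, as in the hypothetical snippet below, is enough to reproduce these artifacts; since the grid now differs from fragment to fragment, neighbouring fragments no longer agree on where a pixel's edges lie:

```glsl
// hypothetical gradient: stronger pixelation (fewer, larger pixels)
// towards the right edge of the screen
float pixelCount = mix(200.0, 10.0, fragmentUV.x);

// the grid itself now varies across the image, which bends the
// apparent pixel-grid and causes the streaking described above
vec2 newUV = floor(fragmentUV * pixelCount) / pixelCount;
```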
While this effect might be appropriate for some applications, it is not really what we want to achieve when we talk about pixelation.
A different solution
The problem with the algorithm explained above is that to increase pixelation we scale the pixel-grid itself.
To avoid these artifacts, we need to use a consistent, fixed grid instead.
With the grid fixed, we can now only change what happens inside each grid cell, and this is exactly what we will do.
Mathematically, we take both our linear function, and our step function, and interpolate between them based on the desired strength of the pixelation.
Visually, we take the little rectangle inside each fixed grid cell and expand it, drawing less and less of it into the original rectangle, until only a single pixel can be seen.
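In shader terms, this interpolation between the linear function and the step function might be sketched as follows (`strength` in [0, 1] is an assumed per-fragment pixelation strength, not the exact parameter used in the game):

```glsl
// the fixed grid: pixelCount cells per row/column, independent of strength
vec2 cellUV = floor(fragmentUV * pixelCount) / pixelCount;

// interpolate between linear sampling (strength == 0)
// and pure step sampling (strength == 1)
vec2 newUV = mix(fragmentUV, cellUV, strength);
```

At intermediate strengths, each grid cell shows a small sub-rectangle of its original content, anchored at the cell's corner and magnified to fill the cell, which is exactly the expanding-rectangle behaviour described above.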
Here is a video of the exact same situation as before, but with this new algorithm:
The effect is significantly more subtle and most of the previously apparent movement in the image is now gone.
Note how partly pixelated areas now still have some texture within the larger pixels, but how the image quality becomes gradually worse – or for the purpose of this post: better – the closer we get to the heavily pixelated areas.
Using this algorithm, pixelating the entire screen also looks significantly different than before:
One can argue whether the blurry appearance of partly pixelated areas is a desirable side effect.
For our purposes in Roche Fusion, this algorithm is a clear winner, since it allows us very fine control over how strongly each part of the screen is distorted.
In principle, the two approaches could of course be combined by setting a screen-wide pixel-size, but using the second algorithm to fill in those pixels.
Conclusion
I hope the above serves as a clear explanation of two slightly different approaches to pixelating an image, or a game, using post-processing.
Of course, feel free to let me know in the comments below what you think and whether this has been useful to you, or to ask any questions you may have.
If you would like to play around with the algorithms yourself, feel free to download my small stand-alone example project here.
Lastly, let me know if there are any other effects or topics in general that you would like me to write about, and I will do my best to prioritise them.
For now,
Enjoy the pixels!