We will do so from a technical point of view, exploring some of the systems I developed for the game. Given my role as team lead and graphics programmer, this means we will mostly be looking at graphical effects.
A few days ago I was idly scrolling through my twitter history, reminiscing about the last few years of my activities as a game developer. For about two years a major part of that – and my life in general – has been Roche Fusion, the game project I started together with Tom Rijnbeek in September 2013, and which we finally finished – with a team of 7 – in January 2015.
For today’s post I thought I would step through the major points of Roche Fusion’s early development from a technical standpoint, and give some insight into our development process.
Crepuscular rays – also known as volumetric rays or god rays – have become a common effect in games. They are used especially in first- and third-person games – where the player can easily look at the sky – to give the sun, moon, or other bright light sources additional impact and to create atmosphere.
Depending on how the effect is used it can emphasize the hot humidity of a rain forest, the creepiness of a foggy swamp, or the desolation of a post-apocalyptic scene.
While there are multiple ways to achieve this and similar effects, the method we will look at in particular is the approximation of volumetric light scattering as a post-process using screen-space ray marching.
In this article we will quickly step through the idea behind this algorithm, and how it is commonly used. We will then show how we can easily expand from there to create a solution that works well with multiple light sources, including some source code and images.
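The core of the post-process can be sketched as follows – a minimal, unoptimised CPU version, assuming a single light and a pre-rendered brightness buffer; all names here are illustrative, not the actual shader code:

```python
def god_rays(brightness, light_pos, num_samples=16, decay=0.95):
    """brightness: 2D list of pre-rendered light intensity per pixel.

    For every pixel, march towards the light's screen position and
    accumulate brightness along the way, fading with each step."""
    h, w = len(brightness), len(brightness[0])
    lx, ly = light_pos
    result = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # step so that the last sample lands on the light itself
            dx, dy = (lx - x) / num_samples, (ly - y) / num_samples
            sx, sy, weight, total = float(x), float(y), 1.0, 0.0
            for _ in range(num_samples):
                sx += dx
                sy += dy
                ix, iy = int(round(sx)), int(round(sy))
                if 0 <= ix < w and 0 <= iy < h:
                    total += brightness[iy][ix] * weight
                weight *= decay  # samples further along contribute less
            result[y][x] = total / num_samples
    return result
```

In practice this runs as a fragment shader, typically with additional density and exposure parameters to tune the look of the rays.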
One feature that almost all games have in common is loading screens.
Many small games can get away with a single loading screen at the start of the game to load all or most assets.
Most larger games have one before every level or environment, or at least for major transitions. The only way to get around loading screens entirely is to have only a small core engine that is loaded at the beginning of the game, and to then load everything else (other engine parts and frameworks, assets, script files, …) on the fly.
To make the latter feasible without the game freezing up intermittently, concurrency is a necessity – content has to be loaded in the background while other threads update the game state, render it, and keep the interface responsive.
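Glossing over all the real difficulties, the basic shape of such background loading might look like this – a minimal queue-based sketch with hypothetical names:

```python
import queue
import threading

def load_asset(name):
    # stand-in for actual disk and decoding work
    return f"contents of {name}"

def loader(pending, finished):
    """Background thread: load requested assets and hand them back."""
    while True:
        name = pending.get()
        if name is None:  # sentinel: shut down the loader
            break
        finished.put((name, load_asset(name)))

pending, finished = queue.Queue(), queue.Queue()
thread = threading.Thread(target=loader, args=(pending, finished), daemon=True)
thread.start()

pending.put("ship.png")
pending.put(None)
thread.join()

# The game loop would poll `finished` each frame and integrate
# completed assets without ever blocking:
name, asset = finished.get_nowait()
```

Note that in a real engine, GPU resources usually still have to be created on the thread that owns the graphics context – one of the complications that make this topic so large.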
This is a complex topic however, and too big to tackle in this post.
Instead we will look at a smaller problem and how to solve it.
Last week I showed how we can leverage the power of the GPU to render huge numbers of particles with great performance.
I stated that by using the GPU to simulate our particles we can get much better performance than if we were using the CPU only. However – even if you took my word for it – I provided no evidence that this is in fact the case.
That is something I want to rectify today.
Last week I wrote about how we can use parametric equations in particle systems.
Doing so allowed us to eliminate mutability from our particles. In that post I already hinted that this property allows us to easily move our particle simulation to the GPU.
That is what we will do today!
Computer graphics has always been a major area of interest – and I would like to say expertise – for me.
Within graphics, particles and particle systems have played a big role since the days of the first video games.
These days it is not uncommon for games to have thousands, if not tens or hundreds of thousands, of particles on the screen at the same time. In fact, games without particles are a rare exception.
There are many topics that can be discussed when talking about particles, and I am sure I will cover many of them in the future.
Today I want to introduce the concept of parametric particles.
This is not a grand effect, nor is it overly difficult, but it is a technique that every graphics programmer should be aware of. Even when I am not using it directly, I have found it useful when thinking about particle simulation.
Further, it provides a stepping stone for simple particle simulation on the GPU, which is a topic I want to cover in the near future.
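To give an idea of the concept up front, here is a minimal sketch of a parametric particle under constant gravity; the function and names are illustrative, not the game’s actual code:

```python
GRAVITY = -9.81

def particle_position(x0, y0, vx, vy, t):
    """Closed-form position t seconds after spawning.

    Instead of integrating velocity frame by frame, we evaluate the
    parametric equations x(t) = x0 + vx*t and
    y(t) = y0 + vy*t + g*t^2/2 directly -- the particle itself never
    needs to be mutated after it is created."""
    return (x0 + vx * t, y0 + vy * t + 0.5 * GRAVITY * t * t)
```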
Today I want to delve a bit further into graphics programming and look into one specific effect we used in Roche Fusion: Pixelation.
We use the effect in the game as a visual cue for when the player takes damage and their health falls to dangerous levels.
Specifically, the post-processing effect we apply pixelates the edges of the screen significantly, while leaving the center – and to some degree the bottom corners – mostly untouched.
This allows the player to continue playing and to inspect their HUD, but it gives a clear and unmistakable indication of danger.
Of course, the effect could also be used for other purposes, such as transitions between levels, or even a major part of the art style.
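The basic idea can be sketched as follows – a CPU-side version operating on a 2D array, assuming the pixelation strength simply grows with distance from the screen center; in the game this runs as a fragment shader, and all names here are illustrative:

```python
def pixelate(image, max_block=8):
    """Snap each pixel to a block grid whose size grows towards the
    screen edges, leaving the center (mostly) untouched."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # distance factor: 0.0 at the center, 1.0 at the corners
            d = max(abs(y - cy) / cy, abs(x - cx) / cx)
            block = max(1, int(1 + d * (max_block - 1)))
            # every pixel in a block samples the block's top-left pixel
            out[y][x] = image[y // block * block][x // block * block]
    return out
```

A shader version would do the same snapping on texture coordinates, and could use any falloff curve – including one that spares the bottom corners where the HUD sits.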