A few days ago I was idly scrolling through my Twitter history, reminiscing about the last few years of my activities as a game developer. For about two years a major part of that – and of my life in general – was Roche Fusion, the game project I started together with Tom Rijnbeek in September 2013, and which we finally finished – with a team of seven – in January 2015.
For today's post I thought I would step through the major points of Roche Fusion's early development from a technical standpoint, and give some insight into our development process. We will do so by exploring some of the systems I developed for the game. Given my role as team lead and graphics programmer, this means that we will mostly be looking at graphical effects.
Crepuscular rays, also known as volumetric rays or god rays, have become a common effect in games. They are used especially in first- and third-person games – where the player can easily look at the sky – to give the sun, moon, or other bright light sources additional impact and to create atmosphere.
Depending on how the effect is used it can emphasize the hot humidity of a rain forest, the creepiness of a foggy swamp, or the desolation of a post-apocalyptic scene.
While there are multiple ways to achieve this and similar effects, the method we will look at in particular is the approximation of volumetric light scattering as a post-process using screen-space ray marching.
In this article we will quickly step through the idea behind this algorithm, and how it is commonly used. We will then show how we can easily expand from there to create a solution that works well with multiple light sources, including some source code and images.
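To make the idea concrete before we dive in, here is a minimal sketch of the algorithm for a single light source – written in plain Python on a small brightness grid rather than as a fragment shader, purely for illustration; all names and parameter values are hypothetical. Each pixel marches a fixed number of samples along the line toward the light's screen position, accumulating brightness that decays with distance:

```python
def god_rays(brightness, light, num_samples=8, density=1.0, decay=0.9, weight=0.5):
    """Screen-space ray march: for each pixel, step toward the light's
    screen position, accumulating (decayed) samples of the bright pass."""
    h, w = len(brightness), len(brightness[0])
    out = [[0.0] * w for _ in range(h)]
    lx, ly = light
    for y in range(h):
        for x in range(w):
            # Step vector from this pixel toward the light, split over the samples.
            dx = (lx - x) * density / num_samples
            dy = (ly - y) * density / num_samples
            sx, sy = float(x), float(y)
            illumination_decay = 1.0
            total = 0.0
            for _ in range(num_samples):
                sx += dx
                sy += dy
                ix, iy = int(round(sx)), int(round(sy))
                if 0 <= ix < w and 0 <= iy < h:
                    total += brightness[iy][ix] * illumination_decay * weight
                illumination_decay *= decay  # samples further along contribute less
            out[y][x] = total
    return out

# A single bright pixel acting as the light source:
bright = [[0.0] * 8 for _ in range(8)]
bright[4][4] = 1.0
rays = god_rays(bright, light=(4, 4))
```

On the GPU the outer two loops disappear – the fragment shader runs once per pixel – and the samples read from the scene's bright-pass texture instead of a Python list.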
This post is the first in a series on design patterns in game programming.
Design patterns play an important role in computer programming. Not every problem can be solved with a pattern, and not every pattern is useful in all circumstances. However, applied to the right kind of problem, they are powerful thinking tools that help us understand and design solutions quickly, without reinventing the wheel every time. Similarly, they can aid us in communicating our ideas efficiently to others.
Today, I would like to take a look at the builder pattern.
I will not go into the formal definition of the pattern itself – there are enough other sources for that. Instead we will look at just one example where I use the builder pattern in my C# OpenGL graphics library.
I will first show how I previously solved the problem in question, point out what I did not like about that solution, and then show how we can apply the builder pattern to arrive at a much nicer way of doing ultimately the same thing.
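As a taste of where we are headed, here is a minimal builder sketch in Python rather than C#, with entirely hypothetical names – the actual library's API differs. The builder accumulates configuration through chainable methods and defers validation and construction to a final `build()` call:

```python
class ShaderProgram:
    """Immutable result object; it only exists once fully configured."""
    def __init__(self, stages):
        self.stages = dict(stages)


class ShaderProgramBuilder:
    """Fluent builder: collect parts step by step, construct at the end."""
    def __init__(self):
        self._stages = {}

    def with_vertex_shader(self, source):
        self._stages["vertex"] = source
        return self  # returning self enables method chaining

    def with_fragment_shader(self, source):
        self._stages["fragment"] = source
        return self

    def build(self):
        # Validation happens in one place, once all parts are known.
        if "vertex" not in self._stages or "fragment" not in self._stages:
            raise ValueError("a program needs at least a vertex and a fragment shader")
        return ShaderProgram(self._stages)


program = (ShaderProgramBuilder()
           .with_vertex_shader("void main() { /* ... */ }")
           .with_fragment_shader("void main() { /* ... */ }")
           .build())
```

The appeal is that the result object never exists in a half-configured state, while callers still get readable, order-independent configuration.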
Last week I showed how we can leverage the power of the GPU to render huge numbers of particles with great performance.
I stated that by using the GPU to simulate our particles we can get much better performance than if we were using the CPU only. However – and while you may believe me – I provided no evidence that this is in fact the case.
That is something I want to rectify today.
Last week I wrote about how we can use parametric equations in particle systems.
Doing so allowed us to eliminate mutability from our particles. In that post I already hinted that this property makes it easy to move the simulation of our particles onto the GPU.
That is what we will do today!
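To recap the key property from last week, here is a small sketch – in Python for brevity, with hypothetical names – of a parametric particle. Instead of mutating position and velocity every frame, the particle stores only its spawn state, and its position at any time is a closed-form function of that state:

```python
from dataclasses import dataclass

GRAVITY = -9.81  # assumed constant acceleration along y

@dataclass(frozen=True)  # immutable: created once, never updated
class Particle:
    x0: float
    y0: float   # spawn position
    vx: float
    vy: float   # spawn velocity
    t_spawn: float

    def position(self, t):
        """Evaluate the parametric equation of motion at time t,
        instead of integrating the state step by step."""
        dt = t - self.t_spawn
        return (self.x0 + self.vx * dt,
                self.y0 + self.vy * dt + 0.5 * GRAVITY * dt * dt)

p = Particle(0.0, 0.0, 1.0, 5.0, t_spawn=0.0)
x, y = p.position(2.0)  # x = 1.0 * 2, y = 5.0 * 2 + 0.5 * (-9.81) * 2**2
```

Because `position` depends only on immutable spawn data and the current time, every particle can be evaluated independently and in parallel – exactly what a vertex shader does.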
OpenGL and C# are two of my favourite technologies.
In this post I would like to give a short introduction to developing games and other 3D-accelerated applications using the two together.
We will go over:
- how to create a window with an OpenGL context that we can render to;
- how to create vertex buffers, load shaders, and render vertices.
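One small but essential piece of the vertex-buffer step is getting our vertex data into the contiguous byte layout that OpenGL expects before uploading it with `glBufferData`. As a language-agnostic illustration – in Python with hypothetical names, where the blog's actual code is C# – here is how an interleaved position-plus-color vertex format might be packed:

```python
import struct

# One vertex: position (3 floats) followed by color (4 floats), interleaved.
VERTEX_FORMAT = "3f 4f"
VERTEX_SIZE = struct.calcsize(VERTEX_FORMAT)  # bytes per vertex; also the stride

def pack_vertices(vertices):
    """Pack (position, color) tuples into the contiguous byte layout
    a GL vertex buffer expects (e.g. as the data argument to glBufferData)."""
    data = bytearray()
    for (x, y, z), (r, g, b, a) in vertices:
        data += struct.pack(VERTEX_FORMAT, x, y, z, r, g, b, a)
    return bytes(data)

triangle = [
    ((-1.0, -1.0, 0.0), (1.0, 0.0, 0.0, 1.0)),
    (( 1.0, -1.0, 0.0), (0.0, 1.0, 0.0, 1.0)),
    (( 0.0,  1.0, 0.0), (0.0, 0.0, 1.0, 1.0)),
]
buffer = pack_vertices(triangle)  # 3 vertices * VERTEX_SIZE bytes
```

The stride (`VERTEX_SIZE`) and the byte offsets of the two attributes are exactly the values we later hand to `glVertexAttribPointer` when telling OpenGL how to interpret the buffer.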