Brainstorming civilian AI

One of the projects I am currently working on is a game called Centipede in which the player controls a giant centipede in a city with the goal to cause as much havoc as possible.

Next to damaging and destroying buildings, vehicles, and other static objects, there are of course a large number of humans to serve as snacks, or – if armed – nuisances.

To make the game fun and engaging, it is important that these simulated humans react at least somewhat realistically to the chaos developing around them.

Ideally, their behaviour would also be reasonably complex and not entirely predictable, to make interacting with them more interesting.

Doing my own thing

Of course there are many games with very convincing artificial intelligence, and equally many approaches and resources to look into.

Before I let myself be swayed too much by how others are doing it, I thought I would write down my own thoughts on how I would approach the problem.

Later, I may do some more or less extensive research, compare what I find with my own ideas, and potentially modify them, or maybe throw them out entirely.

Simulating the human mind

Since I want to achieve at least a facade of realism, I thought it may be best to try and model the human mind, albeit of course grossly simplified.

From that starting point there are a few key aspects that follow naturally:

  • Each AI agent will keep its own state of mind
  • Information between agents can be shared only in certain circumstances
  • Agents will observe the world around them and perceive certain events
  • An agent can change its state of mind based on perceived events
  • An agent's actions depend entirely on its state of mind

Also following from these points is that any apparent group behaviour of multiple agents – possibly working together, or at least following similar courses of action – will be either spontaneous, or planned and communicated by one or more specific agents.
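The key aspects above could be sketched as a minimal agent interface. This is only an illustration of the architecture, not an actual implementation; all names and the event format are invented here.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    position: tuple                                # (x, y) in world coordinates
    mind: dict = field(default_factory=dict)       # private state of mind

    def perceive(self, event):
        # An agent may update its own state of mind based on a perceived event.
        if event["kind"] == "explosion":
            self.mind["fear"] = self.mind.get("fear", 0) + 1
            self.mind["last_threat_position"] = event["position"]

    def act(self):
        # Actions depend entirely on the agent's current state of mind.
        if self.mind.get("fear", 0) > 0:
            return "flee"
        return "idle"
```

Note that nothing outside the agent reads or writes `mind` directly; information only enters through `perceive`, which keeps each agent's knowledge private.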

By allowing agents to perceive and respond to the world, as well as to communicate with other agents, this system will hopefully lead to a variety of emergent behaviour.

In fact, it is one of my goals to make this system complex enough that emergent behaviour is inevitable. This will be interesting for an observer, as the AI reacts in unexpected ways, and it can also create more enjoyment for the player, as the game presents them with unpredictable challenges.

Perceiving events

The perception of the agents is in many ways the easiest part of the system.

Of the various human senses, it is enough to approximate and limit ourselves to vision and hearing.

When an event happens in the game, it knows how easily visible and how loud it is. We can then check the event against all agents, and have those that are close enough to perceive it react accordingly.

For example, an explosion might be so loud that it can be heard from 100 meters away. In that case all agents within that radius are notified that they heard an explosion at that position.

Maybe the flash of the explosion is visible from 200 meters away. In that case, all agents within 200 meters will receive an appropriate notification as well.

In the case of vision we may also want to use simple ray casting or a heuristic to determine whether the agent has a clear line of sight. If it does not, it may be more realistic not to have the agent perceive the event directly.
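The radius checks described above could be sketched as follows. Agents are plain (x, y) positions here, where a real system would notify agent objects instead; radii are in metres, and `has_line_of_sight` stands in for the ray cast or heuristic. All names are invented for this sketch.

```python
import math

def perceivers(event, agent_positions, has_line_of_sight=lambda pos, target: True):
    """Return which agents hear and which see an event."""
    ex, ey = event["position"]
    heard, seen = [], []
    for pos in agent_positions:
        distance = math.hypot(pos[0] - ex, pos[1] - ey)
        if distance <= event["hearing_radius"]:
            heard.append(pos)
        # Vision additionally requires an unobstructed line of sight.
        if distance <= event["vision_radius"] and has_line_of_sight(pos, event["position"]):
            seen.append(pos)
    return heard, seen
```

For the explosion example, an agent 50 m away would both hear and see it, one at 150 m would only see the flash, and one at 250 m would notice nothing.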

Non-event based perception

Of course our agents are also able to look around and perceive the world, objects, and other agents without distinct events taking place.

This information is invaluable for the agent to understand the situation they find themselves in and act accordingly.

Acting based on state of mind

Allowing our agents to take meaningful actions is mostly a technical challenge. Most of this will revolve around pathfinding, and interpreting the chaotic environment close to the agent in a useful way.

For example, the agent may need to know good spots to take cover, or how to move around obstacles.

We could of course try and make the agents figure these things out on their own. However, that would complicate their AI a lot and may make it impossible to simulate many thousands of agents at the same time – which is my goal.

Instead, we can write systems designed to specifically solve these problems in efficient ways.
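As one example of such a dedicated system, a set of cover spots could be precomputed and queried by any agent, instead of each agent reasoning about cover geometry itself. The spot data and function name here are invented for illustration.

```python
import math

def nearest_cover(agent_pos, threat_pos, cover_spots):
    """Pick the closest cover spot that does not move the agent towards the threat."""
    best, best_distance = None, float("inf")
    threat_distance = math.dist(agent_pos, threat_pos)
    for spot in cover_spots:
        # Reject spots that are closer to the threat than the agent already is.
        if math.dist(spot, threat_pos) < threat_distance:
            continue
        d = math.dist(agent_pos, spot)
        if d < best_distance:
            best, best_distance = spot, d
    return best
```

Since thousands of agents may query this at once, a real version would likely use a spatial index rather than a linear scan, but the division of labour is the point: the agent asks a cheap question, and the shared system does the geometric work.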

While this will be much better for performance, it may also remove some of the spontaneity and possibility for emergent behaviours.

So it is not unlikely that I will change my mind on how exactly this trade-off should be made, and how much control to take away from the agent.

State of mind

The most difficult part of implementing this AI will probably be finding a good representation for each agent’s state of mind.

In the simplest case we could go with a straightforward state machine, where each state performs a certain kind of behaviour like ‘running away’, or ‘hiding in cover’. Events and other perceptions would cause the agent to shift from one state to another.

In principle a state machine could implement behaviour of any complexity. However, it may not be flexible enough to handle more interesting behaviour without a large number of states, or lots of special cases.
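A bare-bones version of that state machine could be a transition table keyed on the current state and a perceived event. The states and events here are examples, not a final design.

```python
# Each state is a behaviour; perceived events trigger transitions.
TRANSITIONS = {
    ("idle", "heard_explosion"): "running_away",
    ("running_away", "found_cover"): "hiding",
    ("hiding", "danger_passed"): "idle",
}

def next_state(state, event):
    # Unknown state/event combinations leave the agent in its current state.
    return TRANSITIONS.get((state, event), state)
```

The weakness mentioned above shows immediately: every nuance (panicking, warning others, fighting back) means more states and more entries in this table.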

Another option would be to use a state machine only for the possible behaviours and actions of the agent, while writing a separate system to decide which behaviour is appropriate for the situation.

To return to the inspiration of a real human mind, this system could keep track of the agent's knowledge, as well as its current emotions and thoughts.

For example, seeing the monstrous player character, an agent might get ‘scared’, and gain the knowledge of the monster’s location. This combination might make the AI decide that it is appropriate to run away from the monster.

On the other hand, if the agent does not know the player's location but is still scared, it could find a place to hide.

If a character is sufficiently scared, they might panic and start behaving irrationally, which could manifest itself in a number of ways.
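The run-away/hide/panic example above could be sketched as a small decision function that maps emotions and knowledge to a behaviour state. The threshold and all names are invented for illustration.

```python
def choose_behaviour(fear, knows_monster_position, panic_threshold=5):
    """Map an agent's emotions and knowledge to a behaviour state."""
    if fear >= panic_threshold:
        return "panic"        # sufficiently scared agents act irrationally
    if fear > 0 and knows_monster_position:
        return "run_away"     # scared and aware of the threat: flee from it
    if fear > 0:
        return "hide"         # scared but unaware of the threat: find cover
    return "wander"
```

The appeal of this split is that the behaviours themselves stay simple, while the variety comes from how emotions and knowledge combine.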


Clearly, I will have a lot of thinking to do on this.

Instead of only thinking and not actually implementing anything, I will soon start experimenting with some of these ideas and report back on my progress.

For now, let me know if you have any experience with this sort of game AI, or if you have any other comments or ideas on the topic.

Enjoy the pixels!
