Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence, part 4: Congestion

Here is the fourth part of my Win That War! AI series. Today, I'll write about flocking behavior.

Let's start with a video. An army of tanks is driving through a canyon. If you look at the left side of the video, you can see that the units are lining up, which puts them at the mercy of an ambush. It also takes a while before they finally cross the mountain. Now look at the right side of the video: the units use the canyon's entire width and quickly reach the other side.

This noticeable improvement is simply due to the use of a flocking algorithm.

Because of my early affection for the flowfield technique, notably used in Planetary Annihilation and Supreme Commander 2, I had neglected this type of algorithm until now. In fact, a well-tuned flowfield lets you do without flocking: units behave naturally, and congestion and collision issues are solved efficiently.

But here is the thing: I found that using a flowfield creates two problems. The first is that it is hard to optimize, and the vastness of Win That War! maps leads to huge memory consumption. The second is that the paths computed by the flowfield solver are approximate, which is a real issue when a robot has to cross a dense base, slaloming between hard-to-pass buildings.

So… I threw everything away and went back to a simpler but efficient "Weighted A*" algorithm. The AI code got leaner, and performance improved. But, you know, nothing is ever that easy, and with that kind of algorithm we fall right back into congestion issues. That may be why you had a hard time crossing mountains with a big army in the Alpha 0.2 version.
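For readers curious about what "Weighted A*" means in practice: it is plain A* with the heuristic inflated by a weight greater than 1, trading path optimality for speed. Here is a minimal sketch on a walkability grid (the function and grid layout are my own illustration, not the game's actual code):

```python
import heapq

def weighted_astar(grid, start, goal, weight=1.5):
    """Weighted A* on a 4-connected grid: f = g + weight * h.

    `grid[y][x]` is True for walkable cells. With weight > 1 the heuristic
    is inflated, so the search expands fewer nodes, at the price of paths
    up to `weight` times longer than the optimum.
    """
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(weight * h(start), 0, start)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > g_cost.get(cur, float("inf")):
            continue  # stale queue entry, a cheaper route was found
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + weight * h(nxt), ng, nxt))
    return None  # no path exists
```

With `weight=1` this is ordinary A*; raising the weight makes the search greedier, which is often an acceptable trade-off for an RTS where many units request paths every second.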


That’s why I decided to add a flocking stage.
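The classic way to build such a stage, going back to Reynolds' boids, combines three local steering rules: separation (don't crowd neighbors), alignment (match their heading) and cohesion (stay with the group). Below is a small sketch of that idea; the radii and weights are illustrative values of mine, not the game's tuning:

```python
import math

NEIGHBOR_RADIUS = 5.0    # how far a unit "sees" its flockmates
SEPARATION_RADIUS = 1.5  # closer than this, units push each other apart
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0  # illustrative rule weights

def flocking_step(units, dt=0.1, max_speed=2.0):
    """Advance every unit one step using the three classic boids rules.

    `units` is a list of dicts with 'pos' and 'vel' as (x, y) tuples.
    """
    new_vels = []
    for u in units:
        ux, uy = u["pos"]
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for other in units:
            if other is u:
                continue
            ox, oy = other["pos"]
            d = math.hypot(ox - ux, oy - uy)
            if d < NEIGHBOR_RADIUS:
                n += 1
                ali[0] += other["vel"][0]; ali[1] += other["vel"][1]
                coh[0] += ox; coh[1] += oy
                if 0 < d < SEPARATION_RADIUS:
                    # push away from the neighbor, stronger when closer
                    sep[0] += (ux - ox) / d; sep[1] += (uy - oy) / d
        vx, vy = u["vel"]
        if n:
            vx += W_SEP * sep[0] + W_ALI * (ali[0] / n - vx) + W_COH * (coh[0] / n - ux)
            vy += W_SEP * sep[1] + W_ALI * (ali[1] / n - vy) + W_COH * (coh[1] / n - uy)
        speed = math.hypot(vx, vy)
        if speed > max_speed:
            vx, vy = vx * max_speed / speed, vy * max_speed / speed
        new_vels.append((vx, vy))
    # apply all velocities after computing them, so units react to the same frame
    for u, (vx, vy) in zip(units, new_vels):
        x, y = u["pos"]
        u["vel"] = (vx, vy)
        u["pos"] = (x + vx * dt, y + vy * dt)
```

In a full pipeline, the pathfinder's desired direction is simply added as a fourth steering force, so the flock follows the path while spreading across the available width.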

Are your fingers getting itchy, and do you want to try this yourself? You'll find a nice introductory article here.

In the end, the choice of a pathfinding algorithm really has a strong impact on the game, not only on the bugs… Flocking makes the game a little more "nervous", and unit behavior feels different.

The future will tell us if we made a good choice.


Artificial Intelligence, part 3: Teamwork when moving

Last week, I improved the assignment of arrival positions. Well… two short videos are worth a thousand words.

In the video below, you ask a group of tanks to move close to a radar tower:

If you ask them to attack, they must position themselves so as not to prevent the others from reaching the target:

See you,
Etham

Artificial Intelligence, part 2: BIG map

Hi!
Today I'm talking about pathfinding optimization.

In Win That War!, the AI helps players manage a large number of units on a large-scale map. Before diving further into game-logic code, though, I stopped to optimize my pathfinding system a bit.

Finding the shortest path to a distant objective can take a long time, as GPS users may have experienced. Fortunately, this path does not need to be computed very often.

Unlike a GPS, which works on a static map, the AI works on a dynamic one:

  • buildings can be constructed by an engineer at any time, blocking a waypoint;
  • units can move and get in the way;
  • the safety along a path can also change, which may lead to choosing another route.

So we must update the path regularly.

To reduce CPU consumption, I first tried recomputing every path at a regular interval. For a large number of units, I placed my hope on the law of large numbers. Unfortunately, this produces jitter that affects the fluidity of the game. So I transformed the solver to run incrementally: a small part of the problem is solved at each frame, using a multi-threaded scheduler.
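One simple way to picture this incremental approach (a single-threaded Python sketch of the idea, not the game's actual multi-threaded scheduler) is to write the search as a generator that yields control after a fixed budget of node expansions, and let a scheduler resume each pending search once per frame:

```python
from collections import deque

def incremental_search(start, goal, neighbors, budget=50):
    """Breadth-first search that yields after every `budget` expansions."""
    frontier = deque([start])
    came_from = {start: None}
    expanded = 0
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            yield path[::-1]  # done: yield the finished path
            return
        for nxt in neighbors(cur):
            if nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
        expanded += 1
        if expanded % budget == 0:
            yield None  # budget spent: hand control back to the frame loop

class Scheduler:
    """Resumes every pending search a little each frame until it completes."""
    def __init__(self):
        self.pending = []  # (unit_id, generator) pairs

    def request(self, unit_id, gen):
        self.pending.append((unit_id, gen))

    def run_frame(self):
        finished, still = {}, []
        for unit_id, gen in self.pending:
            try:
                result = next(gen)
            except StopIteration:
                continue  # search exhausted without finding a path: drop it
            if result is not None:
                finished[unit_id] = result  # path found, search complete
            else:
                still.append((unit_id, gen))  # out of budget, resume next frame
        self.pending = still
        return finished
```

Spreading the work this way keeps per-frame cost bounded, which removes the jitter; a real implementation would also distribute the generators across worker threads.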

A lot of optimization work remains, but the AI is already able to manage 30 groups of 36 units at the same time. In the video below, they are placed at random on a 100-square-kilometer map. Half of the groups are moving to a random point.

Well, I still need to assign the target positions properly, so that the tanks restore their formation instead of clumping together like a bunch of bugs!

See you next time,

Etham

Artificial Intelligence, part 1: path-finding with collision avoidance

…and then I chose Flowfield

According to what I have found around the web, whenever you want to take your first steps in game AI development, you have to start with the A* algorithm.

Perhaps because I don't want to do like everybody else, I preferred to try a different approach: flowfield path-finding, as used in Supreme Commander 2. This very good paper is a good starting point.

Unlike A*, which requires computing the path of every agent separately, one advantage of this approach is that the calculation is done only once. A cost field is computed globally over the map, from which the speed and direction of each agent are then deduced.

A wolf in the AI

For example, imagine a dozen wolves attacking a poor, lonely human. Every wolf has the same objective: "bite the human". The distance field to that human is common to all the wolves and only needs to be computed once per step. Other fields, like density, comfort or threat, can also be computed once for all of them. If there is more than one human on the map, the common target can become "bite the nearest human", and so on; this way we can also set high-level objectives.

In the next video, you can see a round unit moving toward its objective (a square). The cost field to reach that target is shown in blue. The speed of the unit is computed from the field values in the surrounding cells.

In Robinson 2150, the terrain will be a very big, procedurally generated planet. That's why the field is not computed over the whole map: we stop as soon as every unit has been reached.
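My reading of this scheme (a sketch under my own naming, not the game's code): run Dijkstra outward from the target to build the cost field, stop early once every unit's cell has been finalized, and then steer each unit toward its cheapest neighboring cell.

```python
import heapq

def build_cost_field(goal, walkable, unit_cells):
    """Cost (integration) field from `goal`, stopping once all units are covered.

    `walkable(cell)` tells whether a cell can be entered; `unit_cells` are the
    grid cells currently occupied by units.
    """
    cost = {goal: 0.0}
    remaining = set(unit_cells) - {goal}
    heap = [(0.0, goal)]
    while heap and remaining:  # early exit: stop when every unit is reached
        c, cell = heapq.heappop(heap)
        if c > cost.get(cell, float("inf")):
            continue  # stale entry
        remaining.discard(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if walkable(nxt) and c + 1 < cost.get(nxt, float("inf")):
                cost[nxt] = c + 1
                heapq.heappush(heap, (c + 1, nxt))
    return cost

def flow_direction(cell, cost):
    """Direction toward the cheapest known neighbor: the local 'flow'."""
    x, y = cell
    best, best_dir = cost.get(cell, float("inf")), (0, 0)
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        c = cost.get((x + dx, y + dy), float("inf"))
        if c < best:
            best, best_dir = c, (dx, dy)
    return best_dir
```

Because the expansion stops as soon as the last unit's cell is finalized, most of a planet-sized map is never touched, which is what keeps the cost bounded.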

Now that we have a bunch of units moving according to a field, we can add collision enforcement and try a group-crossing simulation. In fact, crowd simulation can be seen as path-finding with moving obstacles. The first try was fun… Adding speed and density fields to the equation showed acceptable results:

At this point, our units know how to move toward an objective, but they still cannot choose this objective by themselves. That will be the next step of my work.
