Archive for the ‘Graphics Programming’ Category

the decal system part 2

Missed part 1? Keep calm and click here.

Long distance decals

The problem with artillery in the game is that you don’t have a clear idea of its capabilities. For example, you would like to park it behind a mountain and target the enemy base, forcing the enemy to go around the terrain and lose precious time. However, since it is impossible to have both the base and your artillery on the same screen, you cannot know how your artillery would react to such an order. In fact, if the target is out of range, the artillery will move right up to the enemy base and be destroyed. The point is, when you start thinking strategically instead of just a-clicking the enemy, you need critical information, such as the range of a unit.

Now, technically, how should we do it? We could make a decal from a texture, as we did with the selection circle, but a range indicator covers a really large, mostly empty area, so that may not be the best option. What we can do instead is use the GPU to render vector art on the screen. I’ll explain it later, but if you want to know more about it you can check this chapter from GPU Gems 3:

But what is vector art?

The most common way to represent an image on a computer is basically as a table of colours. When you create an image you define its size, i.e. the number of pixels it contains, and each of these pixels has a colour. So all you have to write in the file are the colours of the pixels in the right order, and the GPU will be able to display the image whenever you need it. However, this causes a small problem when it comes to scaling the image.

Picture yourself in an FPS, standing in front of a brick wall. A few metres away from the wall, you can admire the beautiful brick texture the graphic designer created. However, as you get closer, the texture takes up a bigger part of your screen than its original size in pixels. At this point the texture looks less detailed, since the colour of each texture pixel is stretched across 5 or 6 adjacent screen pixels. We could increase the size of the texture, but then the file would be several times bigger, and we want to avoid a heavy file that takes longer to load.
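That stretching can be sketched as a nearest-neighbour upscale, where each original pixel is simply repeated (a minimal Python sketch; the tiny 2×2 “image” is made up for illustration):

```python
def upscale_nearest(pixels, factor):
    """Upscale a 2D grid of colours by repeating each pixel `factor` times
    horizontally and vertically (nearest-neighbour scaling)."""
    out = []
    for row in pixels:
        stretched = [colour for colour in row for _ in range(factor)]
        out.extend([stretched] * factor)
    return out

image = [["red", "blue"],
         ["blue", "red"]]
big = upscale_nearest(image, 3)
# Each original pixel now covers a 3x3 block of screen pixels,
# which is exactly the blocky look you see up close against the wall.
```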


The classic way to overcome this difficulty is vector art, used in the SVG format, which is what we need for large decals such as ranges. The idea is to store a description of the image that doesn’t depend on its size in pixels. To do that, we instead describe the curves that form the image as “paths”. Each path is a command and a set of points, plus a colour and various options about fill, stroke, or whatever you need, as defined in the SVG standard. But it will be easier with an example.

Please, draw me a triangle

Let’s make a simple triangle: all we have to do is draw three lines, with the end of each line being the beginning of the next one. For instance,

M 0 0 L 0 100 L 100 0 Z.

This is what a path looks like: a set of letters that are commands indicating a shape to draw, followed by numbers that are the coordinates describing this shape. M means “moveto”: move the current point to this position. It represents the start of the shape, as if it were the position of a virtual pen. Here we position this cursor at (0,0). L is the command for “lineto”, which draws a line from the current point to the given coordinates. At this point we have drawn a line from (0,0) to (0,100), then from (0,100) to (100,0). Z is the command that closes the shape by drawing a line back to the start point. In this example that line joins (100,0) and (0,0).


We now have a triangle between the points (0,0), (0,100) and (100,0). We can add options like “fill: black” if we want to fill the shape in black, but that is basically how SVG files work.
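Put together, a complete SVG file containing this triangle might look like this (a minimal sketch; the width and height attributes are chosen arbitrarily):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="M 0 0 L 0 100 L 100 0 Z" fill="black"/>
</svg>
```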

This way, we can always adapt the shape to however large the image appears on the screen, avoiding the resolution problem. Of course, complex images can require a lot of paths, so it is best reserved for something schematic, made of simple shapes.

Other commands (Q, C, A) draw the Bézier curves and arcs that we will need for the large decals; they are a little more complex, as you will see in the next part. Yes, there will be a part 3! Wait for it.
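As a small teaser, the curve behind the Q (quadratic Bézier) command can be evaluated with the standard formula (a Python sketch of that well-known formula, not the game’s code):

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on the quadratic Bezier curve with start p0, control point p1
    and end p2, for a parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# t = 0 gives the start point, t = 1 the end point; in between,
# the control point pulls the curve toward itself:
quadratic_bezier((0, 0), (50, 100), (100, 0), 0.5)  # -> (50.0, 50.0)
```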

Visibility Management

One of the fundamental aspects of an RTS is visibility management. Traditionally, the player doesn’t know what the enemy is doing, and very early in the game will send scouts or build radars in order to get intel and react accordingly. From a gameplay point of view, every unit/building has a visibility range, and anything outside it is in the “fog of war”. The player has no information about what is going on in this dark zone. There are many variations from one game to another; some have the concept of explored / unexplored zones or radar visibility.

Of course, WTW manages visibility (committed since at least… this morning 😀 ), with 3 states:

  • No intel, only a rough terrain model
  • Radar range, highlighted terrain and the units’ radar echoes
  • Optical range, confirmation of the units’ presence, classical 3D rendering
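Resolving those three states per unit could look something like this (a hedged Python sketch of the idea; WTW’s actual implementation is not shown in the post, and the range values here are made up):

```python
from enum import Enum

class Visibility(Enum):
    FOG = 0      # no intel, only a rough terrain model
    RADAR = 1    # radar echo, highlighted terrain
    OPTICAL = 2  # confirmed sighting, classical 3D rendering

def visibility_of(target_pos, observers):
    """Best visibility state granted by any friendly observer.

    observers: iterable of (position, optical_range, radar_range) tuples."""
    best = Visibility.FOG
    for pos, optical, radar in observers:
        d = ((target_pos[0] - pos[0]) ** 2 + (target_pos[1] - pos[1]) ** 2) ** 0.5
        if d <= optical:
            return Visibility.OPTICAL  # cannot do better, stop early
        if d <= radar and best == Visibility.FOG:
            best = Visibility.RADAR
    return best
```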

To conclude, if you’re keen on power plants, here’s a video illustrating this:

See you next week,

XNA is dead… Long live SharpDX !

The rumor had been around for a while… It is now official: XNA is dead, or soon will be, since it is to be retired in April 2014. Lots of regrets from indie game developers can be found on the net. “Those were the days!”… at least when it came to the high-level API in C# and its Content Pipeline system.

But do not worry, solutions exist 😉 In our case, I chose to port to SharpDX, more specifically the SharpDX Toolkit, a high-level API based on DirectX 11. This API is not yet fully mature, but it is already efficient and has a very responsive community. Last time I had a problem, the author, Alexandre Mutel, provided me with a patch within the day.

It really is similar to XNA, but has the power of DirectX 11, and is compatible with Windows 8 (including the mobile version) and Visual Studio 2012. Of course it also works with older GPU classes: our new project WTW targets DirectX 10 cards.

The main drawback of the SharpDX Toolkit is the absence of the Content Pipeline. I solved the problem with a custom MSBuild project and homemade MSBuild tasks to process the content. I also use the Assimp library for 3D models, and the effect compiler of SharpDX to process shaders. Textures can be used directly as DDS. As for the sounds, I don’t know, I haven’t looked yet :p

So basically, it means a little extra work, but it is worth the effort. Here is a screenshot of the new engine:


Next: the matching video, once the port is complete :)

Only fools never change their minds!

In these two posts, here and here, I spoke about my tests on level-of-detail management for terrain rendering…

But first… Some time ago, Timous, our master of communication, scolded me about several of my articles, including an unpublished one (there seems to be censorship even here at IU :( ):

“Timous (to the wind): Tell me, Michel, your article on optimizing things, it is not bad …

Doom (very happy): Yeah?

Timous (glacial): But it’s just incomprehensible, are you kidding me or what?

Doom (sheepishly): Um, bah it’s not that complicated …

Timous: There’s not even an intro, the images are ugly, and my 11-year-old little sister didn’t understand a thing!

Doom: Bah, this is a “hardcore technical” article…

Timous: I don’t care, everybody must understand!”

So I will make some effort to introduce… this time 😛

In a 3D game, each image is computed from a scene made of triangles. A graphics card can process and display a limited number of triangles per second. The purpose of level-of-detail (or LOD) management is to reduce the number of triangles to be processed, in order to optimize the display time of each image (if that is not clear, leave me a comment :D). In those two articles, I introduced the LOD method that I intended to use… Ultimately, I changed my mind and chose another method. This is the one that will remain, this time, in Robinson (is that a good intro, Timmy?)
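In its simplest form, LOD just means picking a coarser mesh the farther the terrain is from the camera. A minimal illustrative Python sketch (not the engine’s code; the distance bands are made up):

```python
def lod_level(distance, base_range=64.0, levels=5):
    """Pick a level of detail from the camera distance.

    Level 0 is full detail; each following level covers twice the range
    and uses a much coarser mesh (so far fewer triangles)."""
    r = base_range
    for level in range(levels):
        if distance <= r:
            return level
        r *= 2.0
    return levels - 1  # beyond the last band, clamp to the coarsest level
```

A nearby hill might query `lod_level(10)` and get the full-resolution mesh, while a mountain on the horizon gets the coarsest one.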

So I’ve implemented an algorithm called CDLOD, for “Continuous Distance-Dependent Level of Detail for Rendering Heightmaps”, developed by Filip Strugar. I’ll spare you the technical details, which are very well explained by the author in this document… Here is a short demonstration video of my implementation for Robinson:

The more observant will have noticed the reduction in the number of triangles as the terrain gets farther from the camera, and the progressive subdivision of the mesh. This algorithm is very fast and requires no pre-computation, contrary to my initial algorithm. However, it has the disadvantage of ignoring the nature of the terrain, resulting in a very poor-quality simplification on regular or geometric terrain. But since natural scenes are usually very irregular, this should not be a problem… Life is cool, isn’t it?
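The “progressive subdivision” you can see in the video comes from CDLOD’s per-vertex morphing: inside a transition band, each vertex is blended between its own position and its position on the next, coarser grid. A hedged Python sketch of that blending idea (simplified; the constants and function shapes are illustrative, not taken from Strugar’s paper or from Robinson’s code):

```python
def morph_factor(distance, lod_near, lod_far, morph_start=0.7):
    """0.0 through most of the LOD range, ramping linearly to 1.0 near the
    far edge, where the next (coarser) level takes over."""
    start = lod_near + (lod_far - lod_near) * morph_start
    t = (distance - start) / (lod_far - start)
    return min(max(t, 0.0), 1.0)

def morph_vertex(v, v_coarse, k):
    """Linearly blend a vertex toward its position on the coarser grid."""
    return tuple(a + (b - a) * k for a, b in zip(v, v_coarse))
```

Because the blend is continuous, there is no visible “pop” when a terrain patch switches levels, which is the whole point of the C in CDLOD.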

HDR rendering

Last week, I worked on our home-made Insanity Engine, and especially on HDR rendering. High Dynamic Range rendering simply refers to performing the lighting computation in four-channel floating-point formats (a much higher range) instead of the traditional 32-bit RGBA format. In addition to handling very bright and very dark lights, HDRR enables lots of secondary effects that make the rendering more realistic.
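Tone mapping is the step that brings those unbounded HDR values back into the displayable [0, 1] range. The classic Reinhard operator is one simple example (a Python sketch of that well-known formula, not Insanity’s actual shader):

```python
def reinhard(hdr_luminance):
    """Map an HDR luminance in [0, inf) to a displayable value in [0, 1)."""
    return hdr_luminance / (1.0 + hdr_luminance)

# Very bright values are compressed gently instead of clipping to white:
# reinhard(0.5) -> 0.333..., reinhard(100.0) -> ~0.99
```

In a real renderer the same curve is usually scaled by the scene’s average luminance, which is what the luminance adaptation in the video is doing over time.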

The video below shows the famous Sponza Atrium test scene rendered with the current version of Insanity, demonstrating tone mapping, luminance adaptation and a bloom effect.

The result is not too bad on my HD5870, but I still have to work on performance for lower-end systems.

Terrain rendering part 2: using irregular meshes

Grids and Meshes

A simple way of rendering terrain is to use a regular grid of vertices, each of them having its own height. The next possible evolution is to use a 2D floating-point texture, called a heightmap, and a flat 2D regular grid of vertices as the terrain mesh, then let the GPU fetch the heightmap, displace each vertex according to its height, and finally compute the normals. That is the way I initially chose, but I must admit that I was not very pleased with the results. Obviously, flat regions require fewer triangles than rough or hilly regions. So I had to try something else…
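The regular-grid approach can be sketched as follows (a minimal Python sketch for clarity; in the real engine the displacement happens on the GPU in the vertex shader):

```python
def grid_mesh(heightmap, cell_size=1.0):
    """Build vertices and triangle indices for a regular grid displaced
    by a heightmap, where heightmap[z][x] is the terrain height."""
    rows, cols = len(heightmap), len(heightmap[0])
    vertices = [(x * cell_size, heightmap[z][x], z * cell_size)
                for z in range(rows) for x in range(cols)]
    indices = []
    for z in range(rows - 1):
        for x in range(cols - 1):
            i = z * cols + x
            indices += [i, i + cols, i + 1,              # first triangle of the quad
                        i + 1, i + cols, i + cols + 1]   # second triangle
    return vertices, indices
```

Note that every quad costs two triangles regardless of whether the terrain there is flat or hilly, which is exactly the waste described above.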

The first screenshot below shows the render of a terrain using a regular grid. The second screenshot shows my first attempt with irregular terrain meshes.

The Teapot flying on regular grid terrain

Miss T on an irregular terrain mesh

Delaunay triangulation

This first simplification algorithm is quite simple: starting from a higher-resolution regular mesh, it recursively removes vertices whose estimated visible error is below a certain threshold. A Delaunay triangulation is then applied to the remaining vertices to obtain the final mesh. The two scenes above use about the same number of triangles, but you can notice how the vertex count increases near the interesting areas. The next screenshots show the same scene using the two techniques with solid shading.

The Miss on the regular grid

T, enjoying the irregular mesh

The second one is smoother and more detailed… I still have a lot of work ahead, but now I think I am on the right track!
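The removal criterion can be illustrated on a 1D terrain profile: a sample is redundant when linear interpolation from its surviving neighbours reproduces it within the threshold (an illustrative Python sketch of the idea only; the actual algorithm works in 2D and re-triangulates with Delaunay):

```python
def simplify_profile(heights, threshold):
    """Drop samples from a 1D terrain profile whenever the height linearly
    interpolated from the surviving neighbours stays within `threshold`."""
    keep = list(range(len(heights)))
    changed = True
    while changed:
        changed = False
        for j in range(1, len(keep) - 1):
            a, b, c = keep[j - 1], keep[j], keep[j + 1]
            t = (b - a) / (c - a)
            interpolated = heights[a] + t * (heights[c] - heights[a])
            if abs(heights[b] - interpolated) < threshold:
                del keep[j]          # this sample is visually redundant
                changed = True
                break
    return keep
```

On a profile like `[0, 1, 2, 3, 10]`, the samples along the straight slope get removed while the kink before the peak survives, which is exactly why flat regions end up with far fewer vertices than hilly ones.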

Terrain rendering part 1: procedural generation

LibNoise, my dear LibNoise

The game will use a huge amount of terrain, something like 1/40000 of the Earth’s surface. Surprisingly, Tim has politely refused to model the whole landscape by hand… Luckily, I had used the very powerful libNoise library on another project in the past. After a bit of googling, I found an excellent port of libNoise to XNA, called libNoise.XNA, that totally fits my purposes for now. The two screenshots below illustrate the magical world of coherent noise generators.

Voronoï noise generator

Ridged multifractal noise generator

Later, we will combine multiple noise generators and operators to generate the game landscape on demand. The more observant among you will surely have noticed the teapot and the various helpers displayed… A good graphics project never starts without a teapot, so while I’m waiting for Tim to complete the player model, I use a teapot instead. The helpers are there to debug the collision engine, but that is another story…
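The combining idea can be sketched with a tiny 1D value-noise generator summed over several octaves, i.e. fractal Brownian motion (a minimal Python sketch to illustrate coherent noise only; libNoise’s generators, such as Voronoi or ridged multifractal, and its operator graph are far richer):

```python
import math

def value_noise(x, seed=0):
    """Coherent 1D noise: a deterministic pseudo-random value at each
    integer coordinate, smoothly interpolated in between."""
    def rand(i):
        n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
        return n / 0xFFFFFFFF
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)          # smoothstep fade for C1 continuity
    return rand(i) * (1 - t) + rand(i + 1) * t

def fbm(x, octaves=4):
    """Fractal Brownian motion: sum octaves of noise, doubling the
    frequency and halving the amplitude at each octave."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for octave in range(octaves):
        total += amplitude * value_noise(x * frequency, seed=octave)
        amplitude *= 0.5
        frequency *= 2.0
    return total / 1.875             # sum of the 4 amplitudes: 1 + .5 + .25 + .125
```

“Coherent” is the key property: the same input always yields the same output, and nearby inputs yield nearby outputs, which is what lets the landscape be generated on demand, chunk by chunk, with no seams.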