Pathfinding in an infinite world

One of the most important aspects of a strategy game is its pathfinding algorithm. It determines how the game's entities move around in the world, which greatly affects the feel of the game.
As a player, you want the entities in the world to feel like an extension of yourself, and watching your villagers struggle to carry out basic tasks because of a poor pathfinding algorithm is a frustrating experience.

I prototyped the pathfinding system using A*, the canonical pathfinding algorithm. Unfortunately, it isn't a viable solution in a game like Trollskog: no matter how much you dress it up, A* ends up having to explore huge areas of the world to answer a query, and we can't be waiting around longer than 100ms for a path request without it negatively impacting the game experience.
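
For reference, the prototype amounted to something like the following minimal grid A* (a sketch, not Trollskog's actual implementation; the grid representation, unit move cost, and Manhattan heuristic are illustrative assumptions):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid. grid[y][x] == 1 means blocked."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:  # reconstruct the path by walking parents backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if cost > g[node]:
            continue  # stale queue entry, a cheaper route was already found
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g[node] + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    came_from[(nx, ny)] = node
                    heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # unreachable: the entire reachable region was explored first
```

That last line is the pain point: when the goal is unreachable, A* only concludes so after visiting every tile it can reach.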

The first serious attempt was Jump Point Search (JPS). In the same way that A* optimizes Dijkstra's algorithm by exploiting a heuristic estimate of the remaining distance, JPS optimizes A* by exploiting the assumption that the traversed graph is a uniform-cost grid. I first encountered the algorithm in this whitepaper by Daniel Harabor and Alban Grastien.
Since the terrain in Trollskog certainly is a grid, the algorithm seemed like a good fit.

While JPS was orders of magnitude faster than A*, it wasn't the performant silver bullet I'd hoped for, and it didn't help at all with steering behaviors. A single entity following a path worked fine, but a large group of entities navigating together, bumping into each other and into other entities along the way, produced a lot of edge cases, sometimes requiring path recalculation. I kept piling on band-aids for a while, trying to fix new strange behaviors as they arose, but eventually made the decision to try another algorithm.


The lit area indicates a route calculated by the JPS pathfinder. The yellow squares indicate jump points. Path-smoothing is applied to the route, as JPS tends to generate straight routes that take as few 45-degree turns as possible.

Some research into my problems led me to flowfields. These are great because they practically solve pathfinding and steering using the same algorithm. This GameAIPro chapter by Elijah Emerson is still the best writeup I’ve found on the subject.

Flowfields work quite differently from graph-traversal algorithms like A* and its variants. To summarize, the algorithm calculates a vector toward the goal from each tile on the map; each entity then just follows the average of the vectors of the four tiles around it to reach the goal. This makes the algorithm scale very well with multiple entities.
Filling the entire world with vectors for every move command would be hugely inefficient, so instead we break the world up into sectors, and only fill the sectors that contain moving entities.
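
The core of the technique can be sketched in a few lines (my assumptions here: a breadth-first integration field with unit tile costs, and each tile pointing at its lowest-cost neighbour; real implementations typically use weighted costs and smoother direction blending):

```python
from collections import deque

def flowfield(grid, goal):
    """Build an integration field from the goal, then a direction per tile.
    grid[y][x] == 1 means blocked. Returns {tile: (dx, dy)} pointing downhill."""
    h, w = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:  # flood-fill distances outward from the goal
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    field = {}
    for (x, y), d in dist.items():
        # each reachable tile points at its lowest-distance neighbour
        best = min(
            ((nx - x, ny - y)
             for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
             if (nx, ny) in dist),
            key=lambda v: dist[(x + v[0], y + v[1])],
            default=(0, 0),
        )
        field[(x, y)] = best if d > 0 else (0, 0)
    return field
```

The expensive part (the flood fill) is done once per move command, after which any number of entities can follow the field by reading a single vector per step, which is why the approach scales so well with group size.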


Here we see a 64x64 chunk of Trollskog’s world, divided into 8x8 sectors.

These sectors play two roles: they provide a convenient bound for our vector-field flood fill (so we avoid having to fill the entire game world with vectors), and they drive the first pass of hierarchical pathfinding. The pathfinding system can do an initial, broad-strokes search by traversing whole sectors at a time, via the portals connecting them, before running a finer tile-level search, letting us break up a potentially large request into many smaller ones.
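
The broad-strokes pass is just a search over a much smaller graph. A sketch, assuming a hypothetical sector-adjacency map (Trollskog's actual portal representation is more involved):

```python
from collections import deque

def sector_route(adjacency, start_sector, goal_sector):
    """First-pass search over whole sectors. `adjacency` maps each sector id
    to the sectors it shares a walkable portal with. Returns the sector
    corridor, or None if the goal's sector is unreachable."""
    parents = {start_sector: None}
    queue = deque([start_sector])
    while queue:
        s = queue.popleft()
        if s == goal_sector:
            route = []
            while s is not None:  # walk parents back to the start
                route.append(s)
                s = parents[s]
            return route[::-1]
        for n in adjacency.get(s, ()):
            if n not in parents:
                parents[n] = s
                queue.append(n)
    return None
```

Only the sectors along the returned corridor then need a fine-grained flowfield; everything else in the world stays untouched.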

We need to keep these pathfinding sectors up to date by recalculating them, individually, whenever an entity that blocks a tile is placed or removed. And naturally, we want to generate these pathfinding regions as late in the terrain generation process as possible: an average 64x64 chunk of terrain in Trollskog contains about 2000 trees, and keeping the pathfinding regions up to date during the generation process while those trees are being placed would be a pointless waste of resources.
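
One way to get that behavior is a dirty-set that defers the rebuilds. This is a hypothetical sketch of the bookkeeping, not Trollskog's actual code (the real system would also rebuild the portals between neighbouring sectors):

```python
SECTOR_SIZE = 8  # 64x64 chunks divided into 8x8 sectors, as described above

class SectorIndex:
    """Tracks which pathfinding sectors need recalculating."""

    def __init__(self):
        self.dirty = set()

    def sector_of(self, x, y):
        return (x // SECTOR_SIZE, y // SECTOR_SIZE)

    def on_blocker_changed(self, x, y):
        # a tree placed or removed only dirties the one sector it sits in
        self.dirty.add(self.sector_of(x, y))

    def flush(self, rebuild):
        # called once the terrain generator is done (or once per frame), so
        # placing 2000 trees doesn't trigger 2000 rebuilds of the same sectors
        for sector in self.dirty:
            rebuild(sector)
        self.dirty.clear()
```

Because the dirty set deduplicates by sector, a whole forest landing in one sector still costs a single rebuild when flushed.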

The advantages of this approach were immediately apparent. In the worst case, A* and JPS explored most of the world before concluding that, yes, this unreachable spot surrounded by trees is, in fact, unreachable. The hierarchical flowfield approach can often instantly rule out destinations that aren't connected to any portal, or at worst rule them out during the initial portal-traversal stage.


Above, two entities are pathfinding between two sectors, with debug display enabled. Here is a larger group with a lower individual collision radius.

The result works a lot better than the previous (JPS) implementation, with a lot of tweakable variables (like collision radius, repelling force, etc.) that I can play around with to make the end result look and feel the way I want.

Night falls on the forest

The biggest inspiration for Trollskog, at least where the visual style is concerned, is the work of Swedish artist John Bauer, whose work has informed much of the popular conception of Swedish folklore.

The past few weeks, I’ve been thinking about the mood of John Bauer’s works, and the technology needed to try and convey it in the medium of a game.
A common element in his paintings is darkness. Whether set in a cave or a deep forest at twilight, the tales depicted are filled with mystery and wonder, and this feeling simply isn’t communicated effectively in bright, sunlit environments.

As counterintuitive as it sounds, depicting a scene at night convincingly requires more lighting than depicting the same scene during the day. You might think that transforming day to night is as simple as dimming the lights, but only one light source really matters during the day: the sun. A lot of other light sources make themselves known once they’re no longer being drowned out by it. Especially in the work of John Bauer, there are many light sources of varying subtlety; important characters and objects light up the environment and create a feeling of magic.

The graphics engine of Trollskog was not equipped to deal with multiple light sources. As a traditional “forward renderer”, every light added to the scene increased the rendering cost: the scene needed to be redrawn for every light source, which quickly led to performance issues when rendering detailed environments. Fortunately, the technology to achieve high performance with many light sources has been around for a while, in the form of the “deferred renderer”.
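
A rough cost model shows why this matters (the numbers and the linear model are illustrative assumptions, not measurements from Trollskog):

```python
def forward_cost(draw_calls, lights):
    # forward rendering: the scene geometry is shaded once per light,
    # so cost grows with the product of scene complexity and light count
    return draw_calls * lights

def deferred_cost(draw_calls, lights):
    # deferred rendering: draw the geometry once into the G-buffer,
    # then shade each light in screen space, independent of scene complexity
    return draw_calls + lights
```

With, say, 1000 draw calls and 8 lights, the forward model shades 8000 times while the deferred one does roughly 1008 units of work, and the gap widens with every light added.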

There are a lot of online resources that describe this rendering technique, so I won’t dwell on the implementation details here. But here are three pictures that show the core concept, the geometry buffer:


Above: The screen-space normals. Here, we have a normal for every visible surface: the X, Y and Z components of the vector that points away from the surface, encoded as the R, G and B color channels.


Above: The albedo map. Simply the color (or albedo) of each visible surface, as yet unaffected by lighting.


Above: The lighting map. Using the two images above, plus the depth buffer, we can generate the light map at a cost independent of scene complexity.


And the final result: the combination of the above.
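
The per-pixel combination step is simple. Here is an illustrative Python version of one deferred lighting sample (a sketch of basic N·L diffuse shading with a made-up ambient term; in practice this runs in a fragment shader, and a full renderer adds attenuation, specular terms, and more):

```python
def shade_pixel(normal, albedo, light_dir, light_color, ambient=0.1):
    """Shade one pixel from G-buffer data: normal and albedo come straight
    from the buffers shown above, light_dir points from surface to light."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # modulate albedo by ambient plus diffuse light, clamped per channel
    return tuple(min(1.0, a * (ambient + n_dot_l * c))
                 for a, c in zip(albedo, light_color))
```

Because the inputs are just screen-space buffers, this cost is paid per pixel per light, regardless of how many trees and rocks produced those pixels.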

Finally, you can see the dynamic lighting effects in motion here: