• Top-down combat encounter in 2D with AI agents coordinating a defence using a self-made Behaviour Tree.

  • 4 weeks half time.

  • Made with the in-house engine TGA2D (C++).

  • Custom navmesh export from Unity.

  • A* pathfinding with the Simple Stupid Funnel Algorithm.

  • Ticket system for team coordination between agents.

  • Polling System and raycasting to interface with the world.

  • A chat log in the console window where the agents could post "barks" (i.e. short, immersive comments about what they are doing).

A part of the behaviour tree of the ranged agent. This was the most complex tree, since it contained a lot of actions that involved team coordination (providing backup, healing each other etc.).


For my specialisation at The Game Assembly in Malmö, I chose to implement a combat encounter which could potentially be used in some kind of tactical, real-time shooter. The goal was to have a set of AI agents that protect an area from an intruder (controlled by the player with keyboard + mouse). For this I used the school's in-house 2D engine TGA2D (C++), since it's quite barebones and only contains the essentials such as rendering and audio playback. I first thought about doing everything in Unity, but since I had already done my Bachelor's project thesis in Unity I decided to try something else.

I already had some tools from earlier projects, so I began by creating the navmesh and the basic level structure, such as spawn points and covered positions, in Unity. I exported this and imported everything into TGA2D. From there, I could use my existing A* implementation to make the agents traverse the level. This resulted in far too unnatural paths, so I also used the Simple Stupid Funnel Algorithm to make things smoother. Alongside this I implemented a Behaviour Tree entirely in a single .hpp file to hopefully make things faster. I tried to make it as generic as possible, with no dependencies on the actual game. Relevant parameters were saved in an internal "Blackboard" which could be accessed by the agent.
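Such a Blackboard can be sketched as a small typed key-value store. This is only an illustration of the idea, not the project's actual API; the names are made up:

```cpp
#include <any>
#include <optional>
#include <string>
#include <unordered_map>

// Minimal blackboard sketch: agents read and write shared parameters by key.
// std::any keeps it generic with no dependency on game-specific types.
class Blackboard
{
public:
    template <typename T>
    void Set(const std::string& aKey, T aValue)
    {
        myEntries[aKey] = std::move(aValue);
    }

    template <typename T>
    std::optional<T> Get(const std::string& aKey) const
    {
        auto it = myEntries.find(aKey);
        if (it == myEntries.end())
            return std::nullopt;
        // any_cast on a pointer returns nullptr on a type mismatch
        if (const T* value = std::any_cast<T>(&it->second))
            return *value;
        return std::nullopt;
    }

private:
    std::unordered_map<std::string, std::any> myEntries;
};
```

A tree node can then look up parameters by key without knowing which system wrote them.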

Problems with the navmesh

By this point I realised that something seemed a bit off with the pathfinding. The AI didn't always choose the very best paths and instead made some questionable choices... Nothing major, but it was enough to annoy me. To combat this, I implemented a path-smoothing pass, run after A* and the funnelling, that uses simple raycasts to check whether there are points in the path which are unnecessary (a bit similar to the Theta* algorithm, but done as a post-process). This helped a bit, so I drew the conclusion that the problem probably ultimately lay in the export of the navmesh from Unity. The navmesh wasn't the most beautiful one I've ever seen, with really small slivers and weird wedges. Sadly, the navmesh couldn't be manipulated in Unity, so I was left with the rather unsatisfactory navmesh (as can be seen in the pictures above). Since I only had 2.5 weeks to go, I decided to let that be good enough and focus on the important thing: the actual Behaviour Tree.
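The post-pass can be sketched like this, assuming a hypothetical `aHasLineOfSight` callback that wraps the engine's raycast: a waypoint is dropped whenever the points before and after it can see each other directly.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Vec2 { float x, y; };

// Smoothing sketch: after A* + funnelling, drop every waypoint that can be
// skipped because a raycast shows a clear line between its neighbours.
std::vector<Vec2> SmoothPath(
    const std::vector<Vec2>& aPath,
    const std::function<bool(const Vec2&, const Vec2&)>& aHasLineOfSight)
{
    if (aPath.size() < 3)
        return aPath;

    std::vector<Vec2> result;
    result.push_back(aPath.front());

    std::size_t anchor = 0;
    for (std::size_t i = 1; i + 1 < aPath.size(); ++i)
    {
        // Keep point i only if skipping it would break line of sight.
        if (!aHasLineOfSight(aPath[anchor], aPath[i + 1]))
        {
            result.push_back(aPath[i]);
            anchor = i;
        }
    }
    result.push_back(aPath.back());
    return result;
}
```

On a clean navmesh this pass does little, which is why the remaining wobble pointed back at the exported mesh itself.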

This is where I create the behaviour tree of the sniper agent; it corresponds directly to a flowchart.

I kept all information that is used at compile time and shared by multiple actors as constexpr constants in one single file (Defines.hpp). I think it's clean and easy to access, and in my opinion it's also good practice to keep such values constexpr.
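A Defines.hpp in this style might look like the following. Every name and number here is invented for illustration, not taken from the project:

```cpp
// Defines.hpp-style sketch: shared compile-time tuning values in one place.
namespace Defines
{
    constexpr float AgentMoveSpeed = 120.0f;     // units per second
    constexpr float SniperRange = 900.0f;        // max firing distance
    constexpr int RangedMagazineSize = 12;       // shots before a reload
    constexpr float CoverRaycastLength = 250.0f; // how far cover rays reach
    constexpr int CoverRaycastDirections = 8;    // rays cast around the agent
}
```

Because everything is constexpr, the values cost nothing at runtime and typos in a name fail at compile time rather than silently.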

The implementation of the abstract parent Node class. All nodes in the Behaviour Tree derive from this class which ensures that the flow is consistent in the tree and new nodes can easily be created.

Some gameplay of the level. The player needs to reach the red and gray square and the enemies try to stop him. We can see Fred trying to lean in and out of cover and Rachel being desperate to find a covered position. The sniper decides to secretly slip away and seek out the closest watchtower (the green blob) where he can shoot the player "above" the colliders.

The Behaviour Tree

To implement the Behaviour Tree I mostly used pseudo code from Artificial Intelligence for Games, 2nd Edition (2009, Ian Millington & John Funge) and some other implementations such as BrainTree (https://github.com/arvidsson/BrainTree).

This turned out to be easier than expected, which freed up more time for me to make the flow feel natural and organic. To keep track of the flow, I naturally represented it as a flowchart, which can be seen in the pictures.

Nodes in the tree all have an Init and an Update function. The Init sets everything up and the Update updates its behaviour and then returns one of four possible statuses:

  • Success: Whatever the node was updating resulted in a success, e.g. spotting an enemy while scanning or using a health kit.

  • Failure: The node's update resulted in a failure, e.g. failing to hit the player or failing to perform all actions in a sequence.

  • Running: The node should be visited again, since it did not finish this frame. This is a clear similarity with the Finite State Machine, where the flow stays in a state until a transition is possible.

  • Invalid: Something went wrong or wasn't properly initialised. Usually a bad thing.
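A minimal sketch of such a node interface might look like this. The actual class in the project likely differs in its details; the names here are illustrative:

```cpp
// Abstract base for all tree nodes: Init sets the node up, Update advances
// its behaviour and reports one of the four statuses described above.
enum class Status { Success, Failure, Running, Invalid };

class Node
{
public:
    virtual ~Node() = default;

    // Called every frame the node is visited. Re-initialises the node
    // whenever it was not left in the Running state last frame.
    Status Tick()
    {
        if (myStatus != Status::Running)
            Init();
        myStatus = Update();
        return myStatus;
    }

protected:
    virtual void Init() {}
    virtual Status Update() = 0;

private:
    Status myStatus = Status::Invalid;
};
```

Keeping Tick non-virtual guarantees that every node follows the same Init/Update contract, which is what makes the flow through the tree consistent.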

The Behaviour Tree implements the standard nodes as base classes:

  • Composite: These nodes are responsible for the flow in the tree. "Sequences", for instance, iterate over their children until one of them does not return Success, while "Selectors" do the opposite: they iterate over their children until one returns Success.

  • Leaf: Contains an actual action. These should be inherited and implemented by the game to contain actual, game-specific actions such as healing, firing at an enemy or reloading a gun. These nodes never have any children.

  • Decorators: These nodes have only one child, which the decorator "decorates" (similar to the Decorator pattern). For instance, this could be an "Inverter" that inverts the return status of its child, or a "Repeater" that repeats its child until a set number of iterations has been reached.

  • Conditions: Just like they sound: these nodes check whether a condition is true or false and return that in the form of a status (Success or Failure). This can be used to control the flow in a composite, for instance.
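The Sequence/Selector flow can be sketched as follows. This is a deliberately simplified model (only an Update method, no Init or per-frame bookkeeping) meant to show the control flow, not the project's exact code:

```cpp
#include <memory>
#include <vector>

enum class Status { Success, Failure, Running, Invalid };

class Node
{
public:
    virtual ~Node() = default;
    virtual Status Update() = 0;
};

// Base for nodes that own an ordered list of children.
class Composite : public Node
{
public:
    void AddChild(std::unique_ptr<Node> aChild)
    {
        myChildren.push_back(std::move(aChild));
    }

protected:
    std::vector<std::unique_ptr<Node>> myChildren;
};

// Succeeds only if every child succeeds ("and"-like flow).
class Sequence : public Composite
{
public:
    Status Update() override
    {
        for (auto& child : myChildren)
        {
            const Status status = child->Update();
            if (status != Status::Success)
                return status; // Failure or Running stops the sequence
        }
        return Status::Success;
    }
};

// Succeeds as soon as any child succeeds ("or"-like flow).
class Selector : public Composite
{
public:
    Status Update() override
    {
        for (auto& child : myChildren)
        {
            const Status status = child->Update();
            if (status != Status::Failure)
                return status; // Success or Running stops the selector
        }
        return Status::Failure;
    }
};
```

Both composites also pass Running straight through, which is what lets an action span several frames without the rest of the tree interfering.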

The final product

In the end I had three types of enemies: Ranged, Sniper and Melee. I put the least focus on the Melee, and since I didn't really have the time to balance it either, I only used the Ranged and the Sniper in the final version. The actors used basic raycasts to check whether they had line of sight to the player or whether a position was covered from fire. The agents can find positions to lean in/out of entirely based on where the colliders on the map are, but a designer can also add Points of Interest as lean in/out locations (seen in the picture to the right). The Ranged agents also cooperate in three ways:

  1. If someone has spotted the player -> help out.

  2. If someone is wounded -> heal that agent.

  3. If someone needs to reload -> provide backup.

This was achieved using a simple ticket system where an agent added a ticket with its request. Then, if another agent prioritised helping out, it could fetch the most important ticket. The agents also used a Polling System to interface with the world (Where is the player? Where is the closest watchtower? Where can I find the best covered location? etc.). Covered locations were found by casting rays in 8 directions around the agent; the closest position not in sight of the player was then chosen. There is also support for adding more types of Points of Interest, such as advantageous positions for the sniper.
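The ticket system might be sketched with a priority queue. The request types, field names and priority values here are invented for illustration; the project's own representation may well differ:

```cpp
#include <optional>
#include <queue>
#include <vector>

// The kinds of help an agent can ask its teammates for.
enum class Request { SpottedPlayer, NeedHealing, NeedBackup };

struct Ticket
{
    Request request;
    int agentId;  // who posted the ticket
    int priority; // higher = more urgent
};

struct ByPriority
{
    bool operator()(const Ticket& a, const Ticket& b) const
    {
        return a.priority < b.priority; // max-heap on priority
    }
};

// Shared board: agents post requests; helpers pop the most urgent one.
class TicketBoard
{
public:
    void Post(Ticket aTicket) { myTickets.push(aTicket); }

    std::optional<Ticket> FetchMostImportant()
    {
        if (myTickets.empty())
            return std::nullopt;
        Ticket top = myTickets.top();
        myTickets.pop();
        return top;
    }

private:
    std::priority_queue<Ticket, std::vector<Ticket>, ByPriority> myTickets;
};
```

A helping agent simply calls FetchMostImportant from a leaf node in its tree and reacts to the request it receives, which keeps the coordination logic out of the tree structure itself.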

Some final thoughts

Since we only had 4 weeks to develop this (while simultaneously working on a game project half-time), some features had to be left out. For instance, the level-loading system was built to be really generic and able to load any map exported from Unity, as long as it contained the correct components, but I only allowed one single map to be chosen at a time. It would have been more exciting with an option to choose between different maps and test the AI under different conditions.

Another thing I'd really like to do, but which would probably take too much time, is to let the user create a behaviour in a flowchart tool which would then export the behaviour tree to an .xml file. This could then be read by the program and attached to an agent as its behaviour. Most of this functionality already exists in the game, since I created a way to structure the behaviour tree in a form very similar to an .xml file.

In the end, I also think this would work better in a turn-based strategy game with tiles instead of a navmesh. I never got the navmesh to look really good, and I think a cleaner navmesh would have made the funnelled paths a bit better.

Despite not having the time to implement everything I wanted, I managed to stay focused on the highest priority: a robust and easy-to-use implementation of a Behaviour Tree. The gameplay feels challenging, but it's easy to change the parameters of the agents to make them react slower, reload slower and make "more stupid" decisions. What matters in the end is that it's fun to play and doesn't feel unfair to the player.

When Eren is out of ammo he'll try to find some cover before reloading. He'll also request backup from his friends using a ticket system.

Bobby is a smart AI. He tries to trick the player by transitioning in and out of cover. Sadly, neither his aiming nor his reaction time is top notch (this can be tweaked easily but don't tell Bobby I said that!)

Here, the sniper decides to move to the closest watchtower (the green blob) where he can shoot "above" the colliders. The player cannot fire through them and instead has to flank the Sniper. Clever!

In order to keep track of the agents, I printed their information on the left side of the screen. This was very useful when I was debugging them or when they exhibited strange behaviours.

Importing the level from Unity using custom binary files. This is also where the navmesh is created and exported.

I use a Spritebatch to allow faster rendering and lower memory usage.

The funnel algorithm. When a path has been found using A*, we clean it up using funnels. If the lines of the funnel cross, or if the funnel gets wider in the next iteration, we have found an essential point. We then start from that location and iterate until we are done with the path. The unused nodes are discarded.