
AI for The Hum (Utility AI, and some other things)

 

(This article is unfinished and still a work in progress; I’ll update it with new info and feedback.)

Howdly earthlings,

After working for a few weeks on some other areas of the game, I finally put my hands on the enemy AI, in particular the tall Red Alien. But before writing any code, I needed an overview of what architecture I wanted to implement for the artificial intelligence.

Unreal has a very nice Behavior Tree (BT) solution. For starters, Behavior Trees can be thought of (roughly) as State Machines, but with a tree hierarchy. Basically, you have a tree where each node is either a composite with rules (hence the root of a subtree/branch) or a task. Tasks are leaf nodes that execute some code on the AI agent that owns the tree. So, depending on how the conditions are met, a different task node will be running (or several, if you have parallel executing nodes).

Another Behavior Tree concept is the Blackboard (BB). This is just a collection of properties that can typically be accessed only from within the tree nodes, though that depends on the implementation. You can think of the BB as the information the brain can access and modify in order to run its logic.
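To make this concrete, here is a minimal, engine-agnostic C++ sketch of those pieces (the class names are mine, not Unreal’s): a selector composite, a task leaf and a blackboard.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

enum class Status { Success, Failure, Running };

// The blackboard: shared data the tree's nodes can read and write.
using Blackboard = std::map<std::string, float>;

struct Node {
    virtual ~Node() = default;
    virtual Status Tick(Blackboard& bb) = 0;
};

// Leaf node: wraps a piece of agent code.
struct Task : Node {
    std::function<Status(Blackboard&)> body;
    Status Tick(Blackboard& bb) override { return body(bb); }
};

// Composite node: a selector runs children in order until one succeeds or runs.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status Tick(Blackboard& bb) override {
        for (auto& child : children) {
            Status s = child->Tick(bb);
            if (s != Status::Failure) return s; // Success or Running wins.
        }
        return Status::Failure;
    }
};
```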

There is no such thing as THE way of doing Behavior Trees, and you will find many different implementations. Unreal has its own, and it adopts some decisions that I really like, such as an event-driven approach (so you don’t evaluate the tree constantly) and consequences of that approach like Services and Observers. You also have Decorators to evaluate conditions, so you can decouple the “can run” check from the task nodes.

If you are more interested in how Unreal Engine implements Behavior Trees, you can check the official documentation, which explains them very well.

But I decided not to use Behavior Trees…

 

BTs were, and still are, a very popular architecture for game AI. Compared with some other architectures they are easy to understand, and they work well. However, they have many caveats that have become more apparent in recent years with the need for more complex behavioral AI.

To summarize some of them, I can mention:

  • Messy, complex trees as the logic grows over time (if you try to do complex behaviors)
  • Rigid structure / lack of flexibility
  • Hard to maintain (for complex behaviors), especially if you want to introduce logic changes
  • Unnatural behavior in many situations where you want some adaptability or fuzziness from the agent; agents can be predictable and behave in a deterministic way

 

But… what are the alternatives?

 

GOAP (Goal-Oriented Action Planning) is another architecture that has been popular over the last 10 to 15 years. It was used in games like F.E.A.R., Shadow of Mordor, Rise of the Tomb Raider and others. GOAP evaluates different goals and plans the best schedule of actions to achieve the most reasonable goal at the moment. I always found GOAP complex to design and maintain, with many rules and plans that need to be configured and kept up to date. Some people say that the developer is always babysitting the GOAP architecture in some way. While that can be okay for bigger, dedicated teams, I don’t want to deal with that.

A trendy way of thinking about AI in the last few years is Utility AI. It was used in XCOM: Enemy Unknown, Killzone 2, Guild Wars 2 and some others. Utility AI takes a list of potential actions, scores them using utility functions, and selects the most suitable one to perform (not necessarily in a deterministic way, though; it can select using probabilities to create a more “natural” feeling in decision making).
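As an illustration of that loop, here is a small C++ sketch (the names are mine, and the scoring is deliberately simple): each action is scored by its utility function, and the pick is a score-weighted random choice rather than always the maximum.

```cpp
#include <functional>
#include <random>
#include <string>
#include <vector>

struct Action {
    std::string name;
    std::function<float()> utility; // Scores the action, e.g. in [0, 1].
};

// Pick an action with probability proportional to its score, so high-utility
// actions win most of the time but the agent stays a bit unpredictable.
// Assumes at least one action has a positive score.
const Action* ChooseAction(const std::vector<Action>& actions, std::mt19937& rng) {
    if (actions.empty()) return nullptr;
    std::vector<float> scores;
    scores.reserve(actions.size());
    for (const auto& a : actions) scores.push_back(a.utility());
    std::discrete_distribution<size_t> dist(scores.begin(), scores.end());
    return &actions[dist(rng)];
}
```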

As a matter of taste and personal experience, I decided to go with Utility AI for The Hum. I wanted a flexible and powerful architecture for my Aliens, but still something I feel comfortable working with and maintaining.

Am I discarding Behavior Trees altogether? Not really, I might use them for simpler agents like wandering animals and similar.

Also, I might end up using Smart Objects that inject their own Behavior Trees into the Alien Logic. I’ll explain this a little further below.

The remaining question then was… how do I implement it? Should I create a custom AI tool, or should I buy/get some third-party one?

 


 

Utility AI in Unreal Engine 4

 

When I was thinking about how to implement the Utility AI in UE, I basically had three routes in mind:

a) Create a tool from scratch. This would also require creating some visual tools because, trust me, you want visual tools to design the AI, especially when it becomes bigger and bigger. This option was the least appealing to me.
b) Buy some existing tool, hoping it’s not too limited and is well developed/maintained.
c) Build it on top of some existing tool.

With these options in mind, my goal was to do as little work as possible creating the AI architecture, and instead be able to put most of my effort into designing the AI and its behavioral logic.

The first thing I did was search for “Utility AI” in the Unreal Marketplace. Sadly, there were zero results. I then embarked on the journey of creating my own tool/architecture.

A few months ago, when I was messing around with a weird attempt at doing The Hum in Unity, I found this Utility AI plugin in the Asset Store. I found it nice overall, but limiting in some ways, in my opinion. The design window was okay, and it has a philosophy of hierarchical selectors similar to, as you might already have guessed, a tree.

Some things I missed in that tool’s implementation, and would have extended if I could (but I couldn’t, because it wasn’t open source), were the ability to trigger state changes by events and the ability to have Tasks (or Actions) that execute not like a procedure but more like a coroutine (or a state), meaning that they can take an arbitrary amount of time to complete instead of being closed within a single frame.

The good news is that UE’s Behavior Trees work the way I wanted. Tasks are thought of as states more than just functions, and they don’t exit until you explicitly tell them to exit with a result (success/fail). When that happens, the tree is re-evaluated. As I mentioned above, UE’s BTs also implement an event-driven approach.
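For reference, a latent task in UE4 C++ looks roughly like this (a minimal sketch; the class name and the completion check are placeholders I made up): the task returns InProgress from ExecuteTask and finishes later from TickTask.

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "MyTask_MoveToTarget.generated.h"

UCLASS()
class UMyTask_MoveToTarget : public UBTTaskNode
{
    GENERATED_BODY()
public:
    UMyTask_MoveToTarget()
    {
        bNotifyTick = true; // We want TickTask to be called while running.
    }

    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp,
                                            uint8* NodeMemory) override
    {
        // Start the movement here, then stay "in progress" across frames.
        return EBTNodeResult::InProgress;
    }

    virtual void TickTask(UBehaviorTreeComponent& OwnerComp,
                          uint8* NodeMemory, float DeltaSeconds) override
    {
        if (HasReachedTarget()) // Placeholder for the real completion check.
        {
            FinishLatentTask(OwnerComp, EBTNodeResult::Succeeded);
        }
    }

private:
    bool HasReachedTarget() const { return false; /* illustrative stub */ }
};
```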

Thanks to the way BTs are made in UE, I thought I could create custom nodes where a Composite node would act as a Utility Selector and Decorators as heuristic utility functions. Utility AI usually has a concept of “context”, but the BT’s Blackboard and the context of a BT are more than enough for what I need. Rewatching videos from Dave Mark (probably the most relevant voice on Utility AI in the games industry), I found that he mentions different ways of using utility, including extended Behavior Trees, so I might have had this idea resonating in my mind from him. Thanks to this, I started thinking that building a full tool from scratch wasn’t necessary, and that made me happier.
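Reusing the toy node types from the earlier sketch (again illustrative, not the actual plugin code), the mapping looks something like this: the composite asks each child for a score and runs the best one.

```cpp
// Utility selector: instead of trying children in a fixed order, score each
// child with its attached utility function(s) and run the highest scorer.
struct UtilitySelector : Node {
    struct ScoredChild {
        std::unique_ptr<Node> node;
        std::function<float(Blackboard&)> score; // "Decorator" as utility fn.
    };
    std::vector<ScoredChild> children;

    Status Tick(Blackboard& bb) override {
        ScoredChild* best = nullptr;
        float bestScore = 0.0f;
        for (auto& c : children) {
            float s = c.score(bb);
            if (!best || s > bestScore) { best = &c; bestScore = s; }
        }
        return best ? best->node->Tick(bb) : Status::Failure;
    }
};
```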

My next challenge was to learn how to extend tools in Unreal Engine. I’m proficient at extending Unity’s Editor, but I don’t have much experience doing so in Unreal. I started googling “how to create custom plugins in Unreal Engine”, “create custom Behavior Trees”, etc., when I found that someone had already created a Utility AI plugin built on top of Behavior Trees!
You can imagine my happiness at discovering that I was cutting lots of hours from my effort. The plugin is this one.

After taking a look at how it was implemented, I decided it was a good starting point and that, actually, the implementation was pretty simple. The plugin served perfectly as a base, but I still had to build my own nodes for utility functions, add some additional selection methods, modify some logic to better fit what I wanted to achieve, and so on. I have to credit Cameron Angus (kamrann), because their code taught me how to create a plugin and extend some classes I wasn’t sure how to, and it gave me a good base to work with.

Another thing I wanted to support, and had to implement, was Dave Mark’s idea of the IAUS (Infinite Axis Utility System), which provides a way to normalize different score values for a single task and take all of them into consideration. You can check THIS GDC TALK (after minute 33) to see the concept explained by Mark himself.

Test example (not the real Alien AI tree), using Infinite Axes of Utility Scoring

 

 

The main difference from what the talk showcases is that I don’t use any formula to draw the curves; instead, I just create them “by hand” using Unreal’s UCurveFloat assets and assign them to the utility functions (or axes).
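A hedged sketch of what that per-axis scoring can look like (the struct and helper are my own names, not the plugin’s): each axis normalizes its raw input, shapes it through a hand-authored UCurveFloat, and the axis scores are multiplied together.

```cpp
#include "CoreMinimal.h"
#include "Curves/CurveFloat.h"

// One "axis" of consideration: a raw input normalized into [0, 1],
// then shaped by a hand-authored response curve.
struct FUtilityAxis
{
    UCurveFloat* Curve = nullptr; // Authored in the editor, no formula needed.
    float InputMin = 0.f;
    float InputMax = 1.f;

    float Score(float RawInput) const
    {
        const float T = FMath::Clamp(
            (RawInput - InputMin) / (InputMax - InputMin), 0.f, 1.f);
        return Curve ? Curve->GetFloatValue(T) : T;
    }
};

// IAUS-style combination: multiply the axis scores, so any axis scoring
// near zero effectively vetoes the whole action.
float ScoreAction(const TArray<FUtilityAxis>& Axes, const TArray<float>& Inputs)
{
    float Total = 1.f;
    for (int32 i = 0; i < Axes.Num(); ++i)
    {
        Total *= Axes[i].Score(Inputs[i]);
    }
    return Total;
}
```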

I also had to make a good number of modifications to support grouping the utility functions and caching values so they are accessible outside the behavior tree (for debugging and some other uses).

Another addition I had to make was some visual debug tools for utility-based agents. My approach was simple: I just created some UI widgets that are added to the viewport when an AIController is marked as being in a “debug” state. Since Unreal already has some very nice tools for Behavior Tree (and AI) debugging, I only needed the additional utility logic to be debugged and displayed.

 

What else?

 

Interest / Keenness

Utility AI uses different ways to collect information and state to use as material for making a decision. One of these pieces of data is the “interest” our agents have in doing something. I’m currently working on this system. Interests are usually just values that go from 0 to 1 (or empty to full) over a certain amount of time, either increasing or decreasing. Examples are “hunger”, “thirst”, “sleepiness”, “boredom”, etc.

It’s common to use different curves to interpolate between the interest value’s min and max. By choosing the curves smartly, you can model more natural behaviors that are then reflected in a mathematical model. For instance, you can use an exponential curve to model hunger progression. At the beginning, if you have just eaten, your hunger will increase slowly. But the closer you are to starving, the more steeply your need grows.
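A tiny sketch of that idea (names and constants are mine; time is normalized to [0, 1] between “just ate” and “starving”): raw time accrues linearly, and an exponential curve shapes the value the utility functions actually see.

```cpp
#include <algorithm>
#include <cmath>

// An interest such as hunger: raw time accrues linearly, but the value
// the utility functions see is shaped by a response curve.
struct Interest {
    float t = 0.f;       // Normalized time since last satisfied, in [0, 1].
    float ratePerSecond; // How fast the interest fills up.

    void Update(float dt) { t = std::min(1.f, t + ratePerSecond * dt); }

    // Exponential shaping: grows slowly at first, then shoots up near 1.
    // k controls how sharp the late spike is.
    float Value(float k = 5.f) const {
        return (std::exp(k * t) - 1.f) / (std::exp(k) - 1.f);
    }
};
```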

Another thing I want to add is a medium-term brain memory system that will modify different interest values in diverse ways depending on the last events the agent faced. You can think of these as buffs or debuffs for some interests. The memory will decay over time, so if nothing significant happens to the alien, it will go back to the default interest state.
However, these events are not just reactive to player actions but to the whole world. Aliens communicate with each other, they have a chain of command, and the world also evolves and makes them behave and react differently.
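One way those decaying buffs/debuffs could be sketched (again, purely illustrative names): each memory entry contributes a modifier to an interest, and the contribution fades toward zero with a half-life.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A memory entry: e.g. "spotted a human 30 seconds ago" bumping alertness.
struct MemoryModifier {
    float strength; // Buff (+) or debuff (-) applied to an interest.
    float halfLife; // Seconds for the effect to fade to half strength.

    void Update(float dt) { strength *= std::pow(0.5f, dt / halfLife); }
};

// Applied on top of the base interest, clamped back into [0, 1].
float ModifiedInterest(float baseValue, const std::vector<MemoryModifier>& mods) {
    float v = baseValue;
    for (const auto& m : mods) v += m.strength;
    return std::clamp(v, 0.f, 1.f);
}
```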

Smart Objects

 

As I mentioned above, I might use Smart Objects with the aliens (or human NPCs). Smart Objects are a technique used in games like The Sims and many others. Basically, the AI decides whether it has to interact with some specific object but, instead of holding the whole logic of how to do that interaction itself, the object is responsible for holding the Behavior Tree (or whatever structure you use) that handles that logic. Once the interaction is requested, the smart object injects its structure into the requesting agent, and it gets executed.
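In UE4 terms, one way this could be wired up is through the engine’s dynamic subtree injection. This is a sketch: the smart object class and the tag name are my own assumptions, and the agent’s main tree is expected to contain a “Run Behavior Dynamic” node tagged accordingly.

```cpp
#include "BehaviorTree/BehaviorTree.h"
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "GameplayTagContainer.h"

// Hypothetical smart object: owns the interaction logic as a BT asset and
// injects it into whichever agent asks to interact with it.
class FSmartObjectLogic
{
public:
    UBehaviorTree* InteractionTree = nullptr; // e.g. a "sit on chair" subtree.

    void InjectInto(UBehaviorTreeComponent& AgentBTComp) const
    {
        // The agent's main tree has a "run dynamic subtree" node tagged
        // SmartObject.Interaction; we swap our logic in under that tag.
        const FGameplayTag InjectTag =
            FGameplayTag::RequestGameplayTag(TEXT("SmartObject.Interaction"));
        AgentBTComp.SetDynamicSubtree(InjectTag, InteractionTree);
    }
};
```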

 

Social Thinking / Chain of Command

 

For many decisions, AIs need to think like a group. Let’s say the aliens have a goal of capturing some number of wandering animals, or a sector needs to be on alert because one alien detected a human presence.

For these cases, it’s a good idea to have an upper level of intelligence that decides how to “command” the individual AIs. This upper-level AI can also use Utility AI, Behavior Trees, etc., since it’s nothing more than another agent that needs to make decisions.
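One simple way a commander could plug into the utility layer (a sketch of my own, not a committed design): orders are broadcast with a weight, and each subordinate folds that weight into its action scores, so a command biases but doesn’t override individual judgment.

```cpp
#include <map>
#include <string>

// An order from the upper-level AI: "do more of this kind of action".
struct Order {
    std::string actionTag; // e.g. "patrol", "capture_animal".
    float weight;          // How strongly the commander cares, in [0, 1].
};

// Each agent folds active orders into its own utility scores.
float ApplyCommandBias(const std::string& actionTag, float baseScore,
                       const std::map<std::string, Order>& activeOrders) {
    auto it = activeOrders.find(actionTag);
    // Blend instead of override: commanded actions get boosted, but an
    // individual agent can still refuse when its own score is near zero.
    return (it == activeOrders.end())
               ? baseScore
               : baseScore * (1.f + it->second.weight);
}
```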

 

Influence Maps

 

Example of how I want to implement Influence Map debug visualization. Credits to zoombapup (go and check his YouTube channel, he’s doing some really nice stuff in Unreal)

Another thing I want to implement is Influence Maps, so the aliens can feed from them to make decisions. If you are not familiar with the concept, they are basically grids that extend over a distance and contain “heat” weights for different values like “danger”, “interest”, “friends”, “food”, etc. These maps are usually cast over the navigation mesh (or whatever navigation graph you use) so you can project the information onto potential movement target points.

Influence Maps can have as many layers as you want, to measure different aspects, but since they are grids that can grow a lot depending on the desired logic, you need to be careful with performance. For instance, you can reduce your resolution (make cells wider) to minimize the iteration time.

Different actors placed in the world will also modify the influence of their given maps, acting as influence casters. For instance, if you have a “friend help chances” layer, each of your teammates will cast a positive influence around them. If some of them overlap, the “heat” will be even bigger, so that zone becomes a hot spot of friendship, love and protection.
The good news is that I already have the base of an Influence Maps solution half implemented. However, it has some optimizations I have to do before considering it really usable. Also, I want to redo how the debug visualization is done: I’m currently using instanced plane meshes created by casting onto the navmesh and, while it’s not terrible, I would like to try the visual debug tools that Unreal provides (the same ones used, for instance, for EQS).
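A minimal sketch of the caster stamping described above (the grid layout and linear falloff are my own simplifications): one 2D grid per layer, with each caster adding a falloff around itself so that overlaps accumulate into hot spots.

```cpp
#include <cmath>
#include <vector>

// One layer of an influence map, e.g. "danger" or "friend help chances".
struct InfluenceLayer {
    int width, height;
    float cellSize;           // Wider cells = cheaper iteration, coarser map.
    std::vector<float> cells; // Row-major heat values.

    InfluenceLayer(int w, int h, float cs)
        : width(w), height(h), cellSize(cs), cells(w * h, 0.f) {}

    // Stamp a caster: additive, so overlapping casters create hot spots.
    // (A real version would only visit cells inside the radius's bounding box.)
    void AddInfluence(float worldX, float worldY, float strength, float radius) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const float dx = x * cellSize - worldX;
                const float dy = y * cellSize - worldY;
                const float d = std::sqrt(dx * dx + dy * dy);
                if (d < radius)
                    cells[y * width + x] += strength * (1.f - d / radius);
            }
        }
    }
};
```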

Speaking of EQS… if you are not familiar with EQS (Environment Query System), it’s a very nice tool Unreal provides to query information in your level (environment). It basically casts a grid (similar to the influence map) and each node runs some Test (for instance, visibility to an enemy, accessibility, collision with actors, etc.) and receives a score. The system then selects a node, using some selection method, based on the scored nodes. The philosophy is similar to what I described for Influence Maps, yet different, so I’m considering extending EQS to create queries that can feed some of the layers of my maps.
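To show the shape of such a query without touching Unreal’s actual EQS API, here is a plain C++ sketch: generate candidate points, run every test on each, combine the scores, and keep the best point.

```cpp
#include <functional>
#include <vector>

struct Point { float x, y; };

// A "test" in the EQS sense: scores one candidate point in [0, 1].
using Test = std::function<float(const Point&)>;

// Run every test on every candidate and return the best-scoring point.
// A test returning 0 effectively filters the point out (scores multiply).
// Assumes candidates is non-empty.
Point RunQuery(const std::vector<Point>& candidates,
               const std::vector<Test>& tests) {
    Point best = candidates.front();
    float bestScore = -1.f;
    for (const auto& p : candidates) {
        float score = 1.f;
        for (const auto& t : tests) score *= t(p);
        if (score > bestScore) { bestScore = score; best = p; }
    }
    return best;
}
```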

 

Some other challenges…

 

As a quick comment on some challenges I’ll approach in the future:

  • Hierarchical Chain of Command that can propagate instructions with some weight value. Basically what I described in “Social Thinking / Chain of Command”.
  • Non-planar pathfinding for spaceships and drones. I guess I’ll also need some kind of volumetric influence maps, but I’m sure I’ll keep them very low-res, at least on the vertical axis.
  • Links in navmeshes, so aliens are able to move between points that aren’t strictly connected.
  • AI for human survivors. I wasn’t planning to include human NPCs, but the more I think about the idea, the more I like it.
  • Procedural animations to run in parallel with the behavioral aspect. The only full-body IK solution I can find affordable seems to be IKinema’s indie version. If I don’t find it suitable, I’ve decided I’ll build my own full-body IK solution for the game.
  • Some navigation mesh recast techniques (per individual) to avoid evaluating full navmeshes all the time. While I plan to separate the full map into sectors, these can still be considerably big.
  • Machine learning? I was thinking about using Neural Networks to train some AIs, but I’m not sure about this yet. However, since I plan to use a backend (not for real-time multiplayer but for analytics and asynchronous player interaction), I might end up thinking about some way of using all that data to train evolutionary agents. These agents might not be the aliens, but probably the “Super Mind” that sits at the top of the chain of command hierarchy.
  • Indoors vs. outdoors. This is a game design decision I have to make in order to better design the AI architecture. I was thinking the easiest way to do this is a “loading” transition between outdoors and indoors, but I’m scared this will break the feeling of the game. Still a pending design here.

 

 

Final Words

 

In summary, I’m really having fun with this whole AI process for The Hum. I hope this post was interesting for you! I’ll keep you updated with my progress and upload some more visual or animated examples of what I’m talking about in this post.
Don’t forget to spread the word about the game, follow it on the social links (check the sidebar to the right) and take a look at The Hum’s Patreon page.

Ariel
