scottdagen

AI Architecture in "The Exaggerated Epoch of Edward O'Hare"

All enemies in The Exaggerated Epoch of Edward O’Hare contain an AI Brain script, a health script, a movement component, a perception component that tracks the player, and a state machine that holds multiple behaviors. Every frame, the AI Brain checks whether it's currently in a stunned or knockback state, and, if it is in neither of those states, tells the movement component, perception component, and state machine to update.
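As a minimal sketch of that per-frame gating (written in plain Python rather than the project's C#; the class and member names here are illustrative, not the actual API):

```python
class AIBrain:
    """Sketch of the AI Brain's per-frame gating described above.
    Class and attribute names are illustrative, not the project's real API."""

    def __init__(self, movement, perception, state_machine):
        self.movement = movement
        self.perception = perception
        self.state_machine = state_machine
        self.stunned = False
        self.in_knockback = False

    def update(self):
        # While stunned or in knockback, the brain suspends all sub-components.
        if self.stunned or self.in_knockback:
            return False
        self.movement.update()
        self.perception.update()
        self.state_machine.update()
        return True
```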

UML diagram of AI Brain and all required components. A Particle System and Audio Controller are optional.

Initial State Machine Setup


When I joined the team, the AI State Machine component supported three states at a time: an Idle state, a Chase state, and one of two Attack states. This was sufficient for the enemies that existed at the time, but if any additional complexity was desired, the architecture needed an overhaul:


//Author: Conner Root
public void Init(AIMoveScript mover, AIPerceptionScript seer, float walkSpeed, float sprintSpeed, float viewDist)
{
	if (!inited)
	{
		inited = true;

		// set the components
		moveComp = mover;
		percComp = seer;

		// Add new states
		idleState = gameObject.AddComponent<State_Idle>();
		idleState.Init(moveComp, percComp, changeTime, walkSpeed);

		chaseState = gameObject.AddComponent<State_Chase>();
		chaseState.Init(moveComp, percComp, sprintSpeed, viewDist, attackDist);

		attkState = GetComponent<State_Attack>();
		attkState.Init(moveComp, percComp, sprintSpeed, attackDist, attackSpeed);

		// set curr state
		currState = idleState;
		currState.OnStateEnter();
	}
}

This initialization function requires all of the idle and chase variables to be stored in the AI Brain rather than in the states themselves. The name attackDist is also used ambiguously: in State_Chase it's the distance at which to switch into the attack state, while in State_Attack it's the range from which damage can be dealt.


This setup also has some additional limitations. In the above implementation, an AI can't have any states that aren't derived from Idle, Chase, or Attack, nor can it have more than one state of a given type; an enemy couldn't have, say, a chase state and two attack states. New state types could be added along with new variables to hold them, but that's a poor solution: every enemy would then need every kind of state to guarantee that an enemy couldn't be told to transition to a state it doesn't have.


The New State Machine


One of the first planned additions to the game was a boss fight against an immobile dragon with multiple attacks. Even before any of its attacks were determined, I recognized that the then-current setup would be insufficient. We needed an implementation that wouldn't need periodic rewrites to account for new state types being added and could handle both single- and multi-attack enemies. To keep track of the machine's states, I settled on using a Dictionary mapping a StateName onto a List of States with that StateName:


//Author: Scott Dagen
public void Init(AIMovement mover, AIPerception seer, AudioController audioController)
{
    if (!_inited)
    {
        _moveComp = mover;
        _percComp = seer;
        
        //...
        
        //get all states
        State[] states = GetComponents<State>();
        
        for (int i = 0; i < states.Length; i++)
        {
            //if stateStorage (Dictionary<StateName, List<State>>) doesn't have this StateName yet, create a new List
            if (!_stateStorage.ContainsKey(states[i].stateName))
            {
                _stateStorage.Add(states[i].stateName, new List<State>());
            }
            _stateStorage[states[i].stateName].Add(states[i]);
            states[i].Init(_moveComp, _percComp, audioController);
        }
        if (!_stateStorage.TryGetValue(startStateName, out List<State> options))
        {
            Debug.LogError($"No state with tag {startStateName}");
            return;
        }
        _currState = options[Mathf.Min(preferredStartIndex, options.Count - 1)];
        _currState.OnStateEnter();
        _inited = true;
    }
}

This new system both allows for more flexibility when assigning states and moves all data relevant to a state into that script instead of being passed down through a series of functions. Additionally, the designer can specify which state the enemy enters first.
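Reduced to a standalone sketch (plain Python, with made-up state data standing in for Unity components), the storage and start-state selection amount to grouping states by name and clamping the preferred index:

```python
from collections import defaultdict

def build_state_storage(states):
    """Group states into {state_name: [states]}, mirroring the
    Dictionary<StateName, List<State>> described above."""
    storage = defaultdict(list)
    for state in states:
        storage[state["name"]].append(state)
    return dict(storage)

def pick_start_state(storage, start_name, preferred_index):
    """Choose the starting state; clamp the index the way Mathf.Min does."""
    options = storage.get(start_name)
    if options is None:
        raise KeyError(f"No state with tag {start_name}")
    return options[min(preferred_index, len(options) - 1)]
```

Because the lists preserve insertion order, a designer-facing index like preferredStartIndex stays stable as long as the component order on the enemy doesn't change.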


State Example: Hunt


The Hunt state was developed after the game's initial release, in response to player feedback that the Shield Rat enemy was too easy to run past. Its original attack calculated a trajectory toward the player's position at the moment the state was entered and charged the enemy toward and through that point, so if the player had since moved away, they faced little to no risk of being hit unless they were already actively fighting the Shield Rat.


A Shield Rat in its natural habitat.

The new Hunt state takes a very different approach, continuously seeking the player at a speed moderately above the player's own. It also takes the player's velocity into account and attempts to intercept, encouraging the player to adopt a wait-and-dodge pattern until the attack ends and only then retaliate with attacks of their own.


The math behind the intercept algorithm has two parts: finding the interception point and smoothing the path toward it. To calculate where the Shield Rat could intercept the player, I followed the steps provided in a 2017 GDC talk by Chris Stark of Robot Entertainment (slides). The algorithm finds up to two intersection points between a circle representing all of the AI agent's possible positions at time t and a ray from the player's current position representing where they will be at time t if they keep moving at their current velocity.
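In symbols: with the AI at position A moving at attack speed s, and the player at P with velocity v scaled by weight w, an intercept at time t requires |(P − A) + wvt| = st. Squaring both sides and collecting powers of t gives the quadratic that the code below solves:

```latex
\left(w^2\lVert v\rVert^2 - s^2\right)t^2
+ 2\,\big((P - A)\cdot wv\big)\,t
+ \lVert P - A\rVert^2 = 0
```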


//Author: Scott Dagen
private void LeadTarget()
{
    if (_targetRb == null)
    {
        throw new Exception("No Target Found!");
    }
    //...
    //notes about the derivation of a, b, and c
    //...
    
    //form at^2 + bt + c = 0
    //velocity has an additional weight modifier in my implementation. 
    // (player_v * weight).sqrMag - ai_v.sqrMag
    float a = _targetRb.velocity.sqrMagnitude * targetVelWeight * targetVelWeight - attackSpeed * attackSpeed;
    
    // 2 * dot(ai_pos -> player_pos, player_v * weight)
    float b = 2 * Vector2.Dot(_targetRb.transform.position - transform.position, _targetRb.velocity * targetVelWeight);
    
    // (ai_pos -> player_pos).sqrMag
    float c = (_targetRb.transform.position - transform.position).sqrMagnitude;

    float t = -1;
    if (Mathf.Abs(a) <= 0.05) //if a is roughly 0, solve bt + c = 0
    {
        t = -c / b;
        interceptPos = _targetRb.transform.position + _targetRb.velocity * targetVelWeight * t;
    }
    else
    {
        float root = b * b - 4 * a * c;
        if (root < 0)
        {
            Debug.LogWarning("No intercept route found, defaulting to player position");
            interceptPos = _targetRb.transform.position;
            interceptPos.y = transform.position.y;
            return; //bail out so the fallback isn't overwritten by the t = -1 assignment below
        }
        else if (root == 0) //pretty rare, requires |b| = 2sqrt(ac)
        {
            t = -b / (2 * a);
        }
        else
        {
            float sqrt = Mathf.Sqrt(root);
            float tMinus = (-b - sqrt)/ (2 * a);
            float tPlus = (-b + sqrt) / (2 * a);
            //take the smallest positive value.
            t = tMinus > 0 ? tMinus : tPlus;
        }
        //apply back to original equation. Weight still matters.
        interceptPos = _targetRb.transform.position + _targetRb.velocity * targetVelWeight * t;
    }
}
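For illustration, here is the same quadratic solve stripped of Unity types into a standalone Python sketch; the function name and tuple-based vectors are mine, not the project's:

```python
import math

def lead_target(ai_pos, target_pos, target_vel, attack_speed, vel_weight=1.0):
    """Solve (|wv|^2 - s^2) t^2 + 2 d.(wv) t + |d|^2 = 0 for the smallest
    positive t, where d is the vector from the AI to the target.
    Returns the predicted intercept position, or the target's current
    position if no intercept exists."""
    dx = target_pos[0] - ai_pos[0]
    dy = target_pos[1] - ai_pos[1]
    wvx = target_vel[0] * vel_weight
    wvy = target_vel[1] * vel_weight

    a = (wvx * wvx + wvy * wvy) - attack_speed * attack_speed
    b = 2 * (dx * wvx + dy * wvy)
    c = dx * dx + dy * dy

    if abs(a) <= 0.05:  # a is roughly zero: solve b t + c = 0 instead
        if b == 0:
            return target_pos
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return target_pos  # no intercept; fall back to the player's position
        sqrt_d = math.sqrt(disc)
        t_minus = (-b - sqrt_d) / (2 * a)
        t_plus = (-b + sqrt_d) / (2 * a)
        t = t_minus if t_minus > 0 else t_plus  # same root choice as above
    return (target_pos[0] + wvx * t, target_pos[1] + wvy * t)
```

When the AI is faster than the weighted target velocity, a is negative and the two roots have opposite signs, so the positive-root selection always yields a valid future intercept time.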

The target velocity was given a weight modifier so the Shield Rat's accuracy could be tweaked. A lower value (less than one) would make it seek closer to the player's current position, and a higher value (above one) would make the rat more prone to overshooting.


However, this algorithm wasn't sufficient for calculating the rat's movement destination, as the rat's target could jump wildly if the player reversed direction. I needed to interpolate between the rat's current target at a given frame and the interception point, allowing it to smoothly shift to the new location over a few frames. This led to an interesting problem, though: a linear interpolation between points a and b by a changing value u requires a and b to be static, known points. Because a (the enemy's position) and b (the interception point) change every frame, a different approach was needed. The easy approach was to reassign a's value each frame.
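In symbols, the per-frame reassignment is:

```latex
a \leftarrow \mathrm{Lerp}(a, b, u) = a + u\,(b - a) = (1-u)\,a + u\,b
```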

This presented another question: what value should u have? This is no longer a proper linear interpolation, so it's hard to estimate how long it would take to reach a desirable threshold for a given u. A value of 0.5 would halve the remaining distance each step, for example, and 0.3 would remove a smaller fraction of it, but there was no obvious way to pick the value that would make the interpolation take, say, 1 second. To try to find a solution, I started writing out the results of iterative linear interpolation:
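Starting from the definition of a single lerp, the first two rounds expand to (my reconstruction of the expansion):

```latex
\begin{aligned}
\mathrm{Lerp}(a,b,u) &= (1-u)\,a + u\,b\\
\mathrm{Lerp}\big(\mathrm{Lerp}(a,b,u),\,b,\,u\big) &= (1-u)^2\,a + \big(2u - u^2\big)\,b
\end{aligned}
```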

Future iterations will be denoted as Lerp^n(a,b,u)

I continued out to Lerp^3 and Lerp^4, looking for a pattern:
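Continuing the pattern (reconstructed in the same notation):

```latex
\begin{aligned}
\mathrm{Lerp}^3(a,b,u) &= (1-u)^3\,a + \big(3u - 3u^2 + u^3\big)\,b\\
\mathrm{Lerp}^4(a,b,u) &= (1-u)^4\,a + \big(4u - 6u^2 + 4u^3 - u^4\big)\,b
\end{aligned}
```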

At this point, I realized that this looked very close to being two binomial expansions added to each other. After a little rearranging, I reached this set of equations:
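The coefficients on a and b are the two halves of the binomial expansion of ((1−u)+u)^n, which collapses the iteration into a closed form:

```latex
\mathrm{Lerp}^n(a,b,u) = (1-u)^n\,a + \big(1-(1-u)^n\big)\,b
```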

I now had an equation that would give me the output of an interpolation between a and b with an interpolation value of u, iterated n times. From there, I wanted an equation that would yield the value of u producing a specified output for Lerp^n. n was easy to calculate, as it's equal to the update rate (rounded up) times the desired duration (in this case, 1.0f/Time.fixedDeltaTime gives 50 updates per second, so a 1-second duration yields n = 50).

The u equation is as follows:
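Solving g = Lerp^n(a,b,u) for u gives (my reconstruction, consistent with the sign argument that follows):

```latex
g - b = (1-u)^n\,(a - b)
\quad\Longrightarrow\quad
u = 1 - \sqrt[n]{\frac{g-b}{a-b}}
```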

From there, it's pretty simple to plug in values: n is defined above, a is our current location, b is the intercept location, and g is the point 95% of the way from a to b (for an interpolation from 0 to 1, that means g = 0.95). Additionally, g always lies between a and b, so g − b and a − b always share a sign. This means the value under the root is always positive, guaranteeing a real result even when n is even.
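As a quick standalone check of the closed form and the solved-for u (plain Python; n = 50 matches the 1-second, 50 Hz example above):

```python
def lerp(a, b, u):
    return a + u * (b - a)

def iterated_lerp(a, b, u, n):
    """Apply a <- Lerp(a, b, u) n times, as the Hunt state does each fixed update."""
    for _ in range(n):
        a = lerp(a, b, u)
    return a

def solve_u(a, b, g, n):
    """Invert Lerp^n(a, b, u) = g:  u = 1 - ((g - b) / (a - b)) ** (1 / n)."""
    return 1 - ((g - b) / (a - b)) ** (1.0 / n)

n = 50                            # 1 second at a 50 Hz fixed timestep
u = solve_u(0.0, 1.0, 0.95, n)    # u that covers 95% of the gap in n steps
```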


These two functions are repeatedly used in sequence to ensure that the rat is always moving toward where it thinks the player will be, but can't immediately react to player actions.


The Hunt state is one of over twenty AI-related scripts that I either overhauled and optimized or implemented from scratch. Each one posed its own challenges, from managing audio cues to animating a projectile through material variables to knocking back both enemies and the player, and each required me to either expand my knowledge or apply it in ways I hadn't before.


