<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" >

<channel><title><![CDATA[SCOTT DAGEN - Blog]]></title><link><![CDATA[http://www.scottdagenprogrammer.com/blog]]></link><description><![CDATA[Blog]]></description><pubDate>Sun, 15 Feb 2026 06:24:52 -0500</pubDate><generator>Weebly</generator><item><title><![CDATA[AI Architecture in "The Exaggerated Epoch of Edward O'Hare"]]></title><link><![CDATA[http://www.scottdagenprogrammer.com/blog/teeeo-ai]]></link><comments><![CDATA[http://www.scottdagenprogrammer.com/blog/teeeo-ai#comments]]></comments><pubDate>Mon, 12 Jul 2021 07:00:00 GMT</pubDate><category><![CDATA[Edward O'Hare]]></category><guid isPermaLink="false">http://www.scottdagenprogrammer.com/blog/teeeo-ai</guid><description><![CDATA[ All enemies in The Exaggerated Epoch of Edward O’Hare contain an AI Brain script, a health script, a movement component, a perception component that tracks the player, and a state machine that holds multiple behaviors. Every frame, the AI Brain checks whether it's currently in a stunned or knockback state, and, if it is in neither of those states, tells the movement component, perception component, and state machine to update.UML diagram of AI Brain and all required components. A Particle S [...] ]]></description><content:encoded><![CDATA[<div><div id="498988613132791883" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><p>&emsp;All enemies in <i>The Exaggerated Epoch of Edward O&rsquo;Hare</i> contain an AI Brain script, a health script, a movement component, a perception component that tracks the player, and a state machine that holds multiple behaviors. 
Every frame, the AI Brain checks whether it's currently in a stunned or knockback state, and, if it is in neither of those states, tells the movement component, perception component, and state machine to update.</p></div></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="http://www.scottdagenprogrammer.com/uploads/1/5/4/6/154638382/ai-brain-uml_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">UML diagram of AI Brain and all required components. A Particle System and Audio Controller are optional.</div></div></div><div><!--BLOG_SUMMARY_END--></div><h2 class="wsite-content-title"><span><span>Initial State Machine Setup</span></span></h2><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; When I joined the team, the AI State Machine component supported three states at a time: an Idle state, a Chase state, and one of two Attack states. 
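</span></span><br></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; The per-frame gating described above can be sketched as follows. This is an illustrative sketch in Python rather than the project's actual Unity C#, and every name in it is assumed:</span></span></div>

```python
# Sketch of the AI Brain's per-frame gating described above: while
# stunned or in knockback, the brain skips updating its components.
# (Illustrative Python; the game itself is written in Unity C#.)
class _Counter:
    """Stand-in for a movement/perception/state machine component."""
    def __init__(self):
        self.updates = 0
    def update(self):
        self.updates += 1

class AIBrain:
    def __init__(self, movement, perception, state_machine):
        self.movement = movement
        self.perception = perception
        self.state_machine = state_machine
        self.stunned = False
        self.in_knockback = False

    def tick(self):
        if self.stunned or self.in_knockback:
            return False                 # no component updates this frame
        self.movement.update()
        self.perception.update()
        self.state_machine.update()
        return True

brain = AIBrain(_Counter(), _Counter(), _Counter())
brain.tick()                             # normal frame: components update
brain.stunned = True
brain.tick()                             # stunned frame: nothing updates
print(brain.movement.updates)            # 1
```

<div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; 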
This was sufficient for the enemies that existed at the time, but if any additional complexity was desired, the architecture needed an overhaul:</span></span><br></div><div id="737353880741746022"><div><div id="element-83b2f7f8-1668-491b-8dd8-0c5eee66ec69" data-platform-element-id="270170748587580171-1.3.3" class="platform-element-contents"><div class="code-editor--dark"><div class="header"><div class="paragraph">Original State Machine</div></div><div class="body-code"><pre class="editor"></pre></div></div></div><div style="clear:both;"></div></div></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; This initialization function requires all of the idle and chase variables to be stored within the AI Brain instead of the state itself, and the variable name</span> <em><span>attackDist</span></em> <span>is ambiguously used as both the distance before switching into the attack state (as used in State_Chase) and the range from which damage can be dealt (as used in State_Attack).</span></span><br><span><span>&nbsp;&nbsp;&nbsp; This setup also has some additional limitations. In the above implementation, an AI can't have any states that aren't derived from Idle, Chase, or Attack, nor can it have more than one state derived from a given state type. An enemy wouldn't be able to have a chase state and two attack states, for example. New state types</span> <em><span>could</span></em> <span>be added along with new variables to hold them, but this is a bad solution. This would require every enemy to have</span> <em><span>every</span></em> <span>kind of state to guarantee that an enemy couldn't be told to transition to a state it doesn't have.</span></span><br><br></div><h2 class="wsite-content-title"><strong><span><span>The New State Machine</span></span></strong></h2><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; One of the first planned additions to the game was a boss fight against an immobile dragon with multiple attacks. 
Even before any of its attacks were determined, I recognized that the then-current setup would be insufficient. We needed an implementation that wouldn't need periodic rewrites to account for new state types being added and could handle both single- and multi-attack enemies. To keep track of the machine's states, I settled on using a Dictionary mapping a StateName onto a List of States with that StateName:</span></span><br></div><div id="670800800927569376"><div><div id="element-0ecf84fd-62bb-433d-a5f5-f8b959eab3a7" data-platform-element-id="270170748587580171-1.3.3" class="platform-element-contents"><div class="code-editor--dark"><div class="header"><div class="paragraph">Code Editor</div></div><div class="body-code"><pre class="editor"></pre></div></div></div><div style="clear:both;"></div></div></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; This new system both allows for more flexibility when assigning states and moves all data relevant to a state into that script instead of being passed down through a series of functions. Additionally, the designer can specify which state the enemy enters first.</span></span><br></div><h2 class="wsite-content-title"><span><span>State Example: Hunt</span></span><br></h2><div class="paragraph"><span><span>The Hunt state was developed after the game initially released, when we received feedback from players that the Shield Rat enemy was too easy to run past. 
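</span></span><br></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; The Dictionary-of-lists registration described in the previous section might look roughly like the sketch below. This is an illustrative Python stand-in for the C# Dictionary, and all names are assumed rather than taken from the project:</span></span></div>

```python
# Sketch of a state registry where a state name maps to a *list* of
# states, so one enemy can hold, e.g., one Chase state and two Attack
# states. (Illustrative Python; the project used a C# Dictionary
# mapping a StateName onto a List of States.)
class State:
    def __init__(self, name):
        self.name = name
    def enter(self):
        pass

class StateMachine:
    def __init__(self):
        self.states = {}        # state name -> list of states
        self.current = None

    def add_state(self, state):
        self.states.setdefault(state.name, []).append(state)

    def transition(self, name, index=0):
        # Refuse transitions to states this enemy doesn't actually have.
        if name not in self.states or index >= len(self.states[name]):
            return False
        self.current = self.states[name][index]
        self.current.enter()
        return True

sm = StateMachine()
sm.add_state(State("Chase"))
sm.add_state(State("Attack"))   # a second state under the same name
sm.add_state(State("Attack"))   # is perfectly legal in this scheme
print(len(sm.states["Attack"]))  # 2
```

<div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; 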
Its original attack calculated a trajectory towards the player's position at the moment the state was entered, then moved the enemy towards and through that point. If the player had since moved, there was little to no risk of being hit unless they were already actively fighting the Shield Rat.</span></span><br></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0px;margin-right:0px;text-align:center"><a><img src="http://www.scottdagenprogrammer.com/uploads/1/5/4/6/154638382/shield-rat-2_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%">A Shield Rat in its natural habitat.</div></div></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; The new Hunt state takes a very different approach, continuously seeking the player at a speed moderately above the player's own. It also takes the player's velocity into account and attempts to intercept, encouraging the player to adopt a wait-and-dodge pattern until the attack ends before retaliating with attacks of their own.</span></span><br><span><span>&nbsp;&nbsp;&nbsp; The math behind the intercept algorithm has two parts: finding the interception point and path smoothing. To calculate where the Shield Rat could intercept the player's movement, I followed the steps provided in a</span> <a href="https://youtu.be/6OkhjWUIUf0" target="_blank"><u><span>2017 GDC talk by Chris Stark of Robot Entertainment</span></u></a> <span>(</span><a href="https://www.gdcvault.com/play/1024679/Math-for-Game-Programmers-Predictable" target="_blank"><u><span>slides</span></u></a><span>). 
This algorithm finds up to two overlap points between a circle representing all of the AI agent's possible positions at time</span> <em><span>t</span></em> <span>and a ray starting at the player's position and extending along their current velocity, representing where the player will be at time</span> <em><span>t</span></em><span>.</span></span></div><div id="542198923237898729"><div><div id="element-1a4ac014-b8db-46ca-bdf3-aeb7dd763879" data-platform-element-id="270170748587580171-1.3.3" class="platform-element-contents"><div class="code-editor--dark"><div class="header"><div class="paragraph"></div></div><div class="body-code"><pre class="editor"></pre></div></div></div><div style="clear:both;"></div></div></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; The target velocity was given a weight modifier so the Shield Rat's accuracy could be tweaked. A lower value (less than one) would make it seek closer to the player's current position, and a higher value (above one) would make the rat more prone to overshooting.</span></span><br><span><span>&nbsp;&nbsp;&nbsp; However, this algorithm wasn't sufficient for calculating the rat's movement destination, as the rat's target could jump wildly if the player reversed direction. I needed to interpolate between the rat's current target at a given frame and the interception point, allowing it to smoothly shift to the new location over a few frames. This led to an interesting problem, though: a linear interpolation between points</span> <em><span>a</span></em> <span>and</span> <em><span>b</span></em> <span>by a changing value</span> <em><span>u</span></em> <span>requires</span> <em><span>a</span></em> <span>and</span> <em><span>b</span></em> <span>to be static, known points. Because</span> <em><span>a</span></em> <span>(the enemy's position) and</span> <em><span>b</span></em> <span>(the interception point) change every frame, a different approach needed to be taken. 
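</span></span><br></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; The interception step itself reduces to solving a quadratic in time: find the time at which the distance the agent can cover equals the distance to the player's extrapolated position. The sketch below is a hedged Python implementation of that general technique, not the game's code; all names are assumed:</span></span></div>

```python
import math

def intercept_point(agent_pos, agent_speed, player_pos, player_vel):
    """Earliest point where an agent moving at agent_speed can meet a
    player who keeps their current velocity; None if no solution.
    Solves |P + V*t - A| = agent_speed * t, a quadratic in t."""
    dx = player_pos[0] - agent_pos[0]
    dy = player_pos[1] - agent_pos[1]
    vx, vy = player_vel
    a = vx * vx + vy * vy - agent_speed * agent_speed
    b = 2.0 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-9:                    # equal speeds: equation is linear
        if abs(b) < 1e-9:
            return None
        times = [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                  # the circle and ray never overlap
        root = math.sqrt(disc)
        times = [(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)]
    times = [t for t in times if t > 0.0]
    if not times:
        return None
    t = min(times)                       # earliest interception
    return (player_pos[0] + vx * t, player_pos[1] + vy * t)

# Agent at the origin with speed 2 chasing a player at (4, 0) who is
# walking back towards it at speed 1: they meet at (8/3, 0).
print(intercept_point((0.0, 0.0), 2.0, (4.0, 0.0), (-1.0, 0.0)))
```

<div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; 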
The easy approach was to reassign</span> <em><span>a</span></em><span>'s value each frame:</span></span></div><div><div id="337545699819635600" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">\[ oldPos = lerpPos \\ lerpPos = Lerp(oldPos, goalPos, u) \]</div></div><div class="paragraph"><span><span>&nbsp;&nbsp;&nbsp; This presented another question: what value should</span> <em><span>u</span></em> <span>have? This is no longer a proper linear interpolation, so it's hard to estimate how long the interpolation would take to reach a desirable threshold given a value for</span> <em><span>u</span></em><span>. 0.5 would halve the distance each time, for example, and 0.3 would shorten the distance by a lower amount, but I couldn't reasonably ask what value to pick if I wanted the interpolation to take 1 second. To try to find a solution, I started writing out the results of iterative linear interpolation:</span></span><br></div><div><div id="910528992787225011" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><!-- https://stackoverflow.com/questions/11296415/how-to-left-align-mathjax-elements -->$$ \begin{aligned} Lerp(a,b,u) & = a(1-u)+b*u \\ Lerp(a,b,u) & = a-au+bu \\ Lerp(Lerp(a,b,u),b,u) & = Lerp(a-au+bu,b,u) \\ Lerp(a-au+bu,b,u) & = au^2 - 2au + a - bu^2 + 2bu \end{aligned} $$</div></div><div><div id="128248172424985324" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">Future iterations will be denoted as \( Lerp^n(a,b,u) \).<br><br>I continued out to \( Lerp^3 \) and \(Lerp^4 \), looking for a pattern:</div></div><div><div id="555199846966298708" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">$$ \begin{aligned} Lerp^3(a,b,u) & = -au^3+3au^2-3au+a+bu^3-3bu^2+3bu \\ & = a(-u^3+3u^2-3u+1)-b(-u^3+3u^2-3u) \\ \\ Lerp^4(a,b,u) & = au^4-4au^3+6au^2-4au+a-bu^4+4bu^3-6bu^2+4bu \\ & = a(u^4-4u^3+6u^2-4u+1)-b(u^4-4u^3+6u^2-4u) \end{aligned} $$</div></div><div 
class="paragraph"><span><span>At this point, I realized that this looked very close to being two binomial expansions added to each other. After a little rearranging, I reached this set of equations:</span></span><br></div><div><div id="499601312894279187" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml">$$ \begin{aligned} Lerp^3(a,b,u) & = a(-u^3+3u^2-3u+1)-b(-u^3+3u^2-3u) +b(+1-1) \\ & = a(1-u)^3 - b(1-u)^3+b \\ \\ Lerp^4(a,b,u) & = a(u^4-4u^3+6u^2-4u+1)-b(u^4-4u^3+6u^2-4u) +b(+1-1) \\ & = a(1-u)^4 - b(1-u)^4+b \\ \\ \end{aligned} $$ \[ Lerp^n(a,b,u) = a(1-u)^n - b(1-u)^n+b \]</div></div><div><div id="238186446204197324" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><p>&emsp;I now had an equation that would give me the output of an interpolation between \(a\) and \(b\) with an interpolation value of \(u\), iterated \(n\) times. From there, I wanted to create an equation that would yield a value of u that will give me a specified output for \(Lerp^n\). \(n\) was easily calculable, as it's equal to the framerate, rounded up, times the desired duration (in this case, \(1\)<b>/Time.fixedDeltaTime</b> for the framerate and 1 second for the duration yields \(n = 50\)). The \(u\) equation is as follows:</p></div></div><div><div id="374589559909306966" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><table style="margin-left:auto;margin-right:auto;border-spacing:30px;"><tr><td>$$ \begin{eqnarray} g = a(1-u)^n - b(1-u)^n + b \\ g = (a-b)(1-u)^n + b \\ g - b = (a-b)(1-u)^n \\ \frac{(g - b)}{(a-b)} = (1-u)^n \end{eqnarray} $$</td><td>$$ \begin{eqnarray} \sqrt[n]{\frac{(g - b)}{(a-b)}}=1-u \\ -\sqrt[n]{\frac{(g - b)}{(a-b)}}=u-1 \\ 1 - \sqrt[n]{\frac{(g - b)}{(a-b)}} = u \end{eqnarray} $$</td></tr></table></div></div><div><div id="111142145907237368" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><p>&emsp;From there, it's pretty simple to plug in values. 
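</p></div></div><div class="paragraph">As a sanity check, the closed form and the solved \(u\) can be verified numerically: computing \(u\) from the equation above and iterating the lerp \(n\) times should land on the goal \(g\), up to floating-point error. The values in this Python check are illustrative:</div>

```python
# Numeric check of Lerp^n(a, b, u) = (a - b)(1 - u)^n + b and of
# u = 1 - ((g - b) / (a - b))^(1/n), with illustrative values.
def lerp(a, b, u):
    return a * (1.0 - u) + b * u

n = 50                    # e.g. 50 updates per second * 1 second
a, b = 0.0, 1.0           # normalized endpoints
g = a + 0.95 * (b - a)    # goal: 95% of the way from a to b
u = 1.0 - ((g - b) / (a - b)) ** (1.0 / n)

x = a
for _ in range(n):        # iterate the lerp n times
    x = lerp(x, b, u)
# x now equals g up to floating-point error
```

<div><div align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><p>&emsp;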
\(n\) is defined above, \(a\) is our current location, \(b\) is the intercept location, and \(g\) is 95% of the way between \(a\) and \(b\). In practice, \(g\) is calculated using the value \(0.95\) for a float-based iterated interpolation between \(0\) and \(1\). Additionally, \(g\) is always between \(a\) and \(b\), so if \(a-b\) is negative, \(g-b\) is also negative, and the same holds if \(a-b\) is positive. This means that the value under the root is always positive, guaranteeing that even values of \(n\) yield a real result.<br>&emsp;The intercept calculation and the smoothed interpolation are used in sequence every frame to ensure that the rat is always moving towards where it thinks the player will be but can't immediately react to player actions.<br>&emsp;The Hunt state is one of over twenty AI-related scripts that I either overhauled and optimized or implemented from scratch. Each one posed its own challenges, including managing audio cues, changing material variables to animate a projectile, and knocking back both enemies and the player, and they all required me to either expand my knowledge or apply it in ways I hadn't before.</p></div></div>]]></content:encoded></item><item><title><![CDATA[GPR-450: Advanced Animation Programming]]></title><link><![CDATA[http://www.scottdagenprogrammer.com/blog/advanced-animation-programming]]></link><comments><![CDATA[http://www.scottdagenprogrammer.com/blog/advanced-animation-programming#comments]]></comments><pubDate>Fri, 01 Jan 2021 05:00:00 GMT</pubDate><category><![CDATA[Classwork]]></category><guid isPermaLink="false">http://www.scottdagenprogrammer.com/blog/advanced-animation-programming</guid><description><![CDATA[After taking Intermediate Graphics & Animation Programming, the logical next step was Advanced Animation Programming, which I took the following semester. Unlike the previous class, we were given the option to work in animal3D (C), Unity (C#), Unreal (C++), or any other framework that would allow us to complete our assignments. 
Cameron Schneider and I decided to continue to use animal3D, as we were already familiar with its quirks.The class covered many areas of animation programming including p [...] ]]></description><content:encoded><![CDATA[<div class="paragraph" style="text-align:left;">After taking <a href="http://www.scottdagenprogrammer.com/blog/intermediate-graphics-animation" target="_blank">Intermediate Graphics & Animation Programming</a>, the logical next step was Advanced Animation Programming, which I took the following semester. Unlike the previous class, we were given the option to work in animal3D (C), Unity (C#), Unreal (C++), or any other framework that would allow us to complete our assignments. <a href="https://cameron-schneider.wixsite.com/cameronschneider" target="_blank">Cameron Schneider</a> and I decided to continue to use animal3D, as we were already familiar with its quirks.<br><br>The class covered many areas of animation programming including pose-to-pose animation, forward and inverse kinematics (FK and IK), blend trees, and having animations respond to user input. There wasn't much range for us to customize our work until the final project, where we were tasked with creating something that incorporated each of the main topics of the course. Cameron and I decided to implement character control for a wolf walking up a slope.<br><br>We started by obtaining a <a href="https://free3d.com/3d-model/wolf-rigged-and-game-ready-42808.html" target="_blank">skeleton and animation clips for a wolf</a>, the latter of which we converted into an <a href="https://research.cs.wisc.edu/graphics/Courses/cs-838-1999/Jeff/HTR.html" target="_blank">HTR file</a> and loaded into animal3D. From there, Cameron implemented raycasting so we could detect whether the wolf was on a slope, and I revamped our blend tree so it would be easier to construct and modify. 
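</div><div class="paragraph" style="text-align:left;">At its core, a blend tree of this kind is a hierarchy of nodes in which each interior node blends the outputs of its children. The toy sketch below illustrates the idea in Python (the course project itself used animal3D's C API; all names here are invented):</div>

```python
# Toy blend tree: leaf nodes sample an animation "clip" (here just a
# list of numbers standing in for poses), and interior nodes lerp
# their children's outputs. (Illustrative only; the real project
# blended skeletal poses through animal3D's C API.)
class ClipNode:
    def __init__(self, samples):
        self.samples = samples
    def evaluate(self, frame):
        return self.samples[frame % len(self.samples)]

class BlendNode:
    def __init__(self, child_a, child_b, weight):
        self.child_a = child_a
        self.child_b = child_b
        self.weight = weight            # 0 -> all A, 1 -> all B
    def evaluate(self, frame):
        a = self.child_a.evaluate(frame)
        b = self.child_b.evaluate(frame)
        return a + (b - a) * self.weight

walk = ClipNode([0.0, 1.0, 0.0, -1.0])
run = ClipNode([0.0, 2.0, 0.0, -2.0])
root = BlendNode(walk, run, 0.5)        # halfway between walk and run
print(root.evaluate(1))                 # 1.5
```

<div class="paragraph" style="text-align:left;">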
I also set up the code that would actually run the raycasts, which we used to determine the positions of the IK constraints.</div><div><div id="914855623524232526" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><div style="text-align: center;">Wolf Skeleton Walking Up a Slope<br><iframe src="https://drive.google.com/file/d/1v0qAeOAWBwh7BKoF84j6JpmG16bggnNj/preview" width="640" height="480"></iframe></div></div></div><div><!--BLOG_SUMMARY_END--></div><div class="paragraph" style="text-align:left;"><br>The large orange spheres are positioned one unit below the sources of the raycasts, and the cyan dots contacting the floor are the positions that the raycasts are hitting. These raycasts are bidirectional, which allows us to determine whether a paw is above or below the floor. If the raycast source is below the floor, we move the IK constraint for that leg upwards until it's touching the floor, at which point we apply the IK solver.<br><br>The wolf bobs up and down as it goes up the ramp because its height is currently determined by a comparison between the distance from the shoulder to the paw in the wolf's idle pose (a stored constant) and the current distance between the shoulder and the floor. If the shoulder-to-floor distance is greater than the expected length, the wolf moves down to compensate, and if the shoulder-to-floor distance is less than 85% of the constant, the opposite occurs. 
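</div><div class="paragraph" style="text-align:left;">The height rule just described can be sketched as a small function. The thresholds come from the text above, but the exact correction amounts are my assumption (Python sketch; the project itself is C):</div>

```python
# Sketch of the body-height rule described above: compare the current
# shoulder-to-floor distance against the idle-pose leg length, move
# the body down when overextended and up when compressed below 85%.
# (The correction magnitudes are assumed, not taken from the project.)
def height_offset(shoulder_to_floor, idle_leg_length):
    if shoulder_to_floor > idle_leg_length:
        # Overextended: move down by the excess (negative offset).
        return idle_leg_length - shoulder_to_floor
    if shoulder_to_floor < 0.85 * idle_leg_length:
        # Compressed past the tolerance band: move back up.
        return 0.85 * idle_leg_length - shoulder_to_floor
    return 0.0                  # within the 85%-100% band: no change
```

<div class="paragraph" style="text-align:left;">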
This implementation was necessary because animal3D, being a graphics framework, lacks a built-in collision resolution system.<br></div><div><div class="wsite-image wsite-image-border-none" style="padding-top:10px;padding-bottom:10px;margin-left:0;margin-right:0;text-align:center"><a><img src="http://www.scottdagenprogrammer.com/uploads/1/5/4/6/154638382/blend-tree-construction_orig.png" alt="Picture" style="width:auto;max-width:100%"></a><div style="display:block;font-size:90%"></div></div></div><div class="paragraph" style="text-align:left;">The code shown above demonstrates the various blend tree construction functions I implemented. When a blend tree is created, the user can choose to pre-allocate blend nodes or handle their allocation elsewhere and store it at a later point. Once the tree is constructed, each node is created by passing in the address of the node and an enum indicating its purpose. The following functions assign whatever data is necessary for that blend node to work properly, whether those are clip controllers or pointers to interpolation values.<br><br>The next function sets up parent-child relationships between nodes. In this situation, the nodes at index 0, 1, and 2 are all being set as children of the node at index 3, and the index 3 node is set as the root. Finally, there's a check to ensure that the tree was assembled correctly and the inputs and outputs of each blend node are configured.&#8203;<br><br>I implemented the blend tree in this manner so that a GUI could hypothetically be constructed to edit trees at runtime. 
A node could easily be added, deleted or edited using other utility functions, and the tree could then be rebuilt with a3hierarchyBlendTreeBindStates() without needing to restart the program.<br></div>]]></content:encoded></item><item><title><![CDATA[GPR-300: Intermediate Graphics & Animation]]></title><link><![CDATA[http://www.scottdagenprogrammer.com/blog/intermediate-graphics-animation]]></link><comments><![CDATA[http://www.scottdagenprogrammer.com/blog/intermediate-graphics-animation#comments]]></comments><pubDate>Mon, 01 Jun 2020 04:00:00 GMT</pubDate><category><![CDATA[Classwork]]></category><guid isPermaLink="false">http://www.scottdagenprogrammer.com/blog/intermediate-graphics-animation</guid><description><![CDATA[From January to May 2020, I took GPR-300: Intermediate Graphics & Animation Programming, which was a deep dive into both the CPU and GPU sides of graphics programming utilizing animal3D, a graphics framework created by Professor Daniel Buckstein. Throughout the course's duration, I and my teammate Cameron Schneider learned how to implement various shading algorithms and expanded upon the rendering pipeline provided by animal3D.In addition to shading algorithms (such as Phong, Lambert, Cel, and G [...] ]]></description><content:encoded><![CDATA[<div class="paragraph">From January to May 2020, I took GPR-300: Intermediate Graphics & Animation Programming, which was a deep dive into both the CPU and GPU sides of graphics programming utilizing animal3D, a graphics framework created by Professor <a href="https://github.com/dbuckstein" target="_blank">Daniel Buckstein</a>. 
Throughout the course's duration, my teammate <a href="https://cameron-schneider.wixsite.com/cameronschneider" target="_blank">Cameron Schneider</a> and I learned how to implement various shading algorithms and expanded upon the rendering pipeline provided by animal3D.<br><br>In addition to shading algorithms (such as Phong, Lambert, Cel, and Gooch), post-processing effects (Bloom), and pipeline techniques (framebuffers and deferred lighting), we were also asked to create projects exploring some area of graphics that we were interested in. For our midterm, Cameron and I decided to implement both <a href="https://en.wikipedia.org/wiki/Screen_space_ambient_occlusion" target="_blank">screen-space ambient occlusion</a> and a crosshatch/pencil shader.<br></div><div><div id="825776117311602610" align="left" style="width: 100%; overflow-y: hidden;" class="wcustomhtml"><div style="text-align: center;">Midterm Presentation<br><iframe src="https://drive.google.com/file/d/1xPhJBTEBjm3nRqZSN-HFAYQxJutyVz6y/preview" width="640" height="480"></iframe></div></div></div><div class="paragraph"><br>I was responsible for the majority of the C code for the crosshatch pipeline as well as the actual crosshatch shader. I also created the additional framebuffers that were required for the multiple SSAO passes and added to the text-based UI so it was easier to tell what was going on. Cameron implemented the SSAO shader and the C code for its rendering pipeline, including generating any uniform variables that were required. He also found a way to pack the six crosshatch textures into only two images, which saves a lot of space both on disk and in RAM as fewer images need to be passed to the GPU.<br></div><div><div style="height:0px;overflow:hidden"></div><div id='142416542614096737-slideshow'></div><div style="height:20px;overflow:hidden"></div></div>]]></content:encoded></item></channel></rss>