STUPID ZOMBIES: EXTERMINATOR
Stupid Zombies: Exterminator is a casual top-down single-stick shooter, where players must exterminate hordes of zombies in different locales. Players progress through the map to beat levels and unlock and upgrade weapons and hero characters. All level layouts are authored, while zombie spawning in each level is randomized.
SUMMARY OF RESPONSIBILITIES
Prototyped initial gameplay concept
Collaborated on design concepts for zombies
Managed all gameplay and UI programming
Coordinated with art team to develop the visual look of the game through custom lighting and shading
DETAILED BREAKDOWN
For the past year I’ve developed Stupid Zombies: Exterminator (SZ:E) at GameResort, where, as the primary Game Developer, I’ve been responsible for almost all of the code and all of the UI setup. I wrote the project primarily using Unity’s ECS architecture and the Universal Render Pipeline.
SZ:E has been available in soft launch for several months, earning a 4.5 star rating on the App Store and a 4.3 star rating on the Google Play Store. Although you can never really say when a live-service game is truly complete, I’ve spent enough time working on the project that a post-mortem highlighting key design decisions and features now feels appropriate.
This post assumes basic familiarity with Unity’s ECS implementation and jobs. If you’re curious about the core concepts, Unity offers a thorough primer with some fun cartoons on the package page.
NAVIGATION
Navigation was one of the first systems that I developed for this project. I implemented the navigation using a basic flow field pathfinding algorithm. Flow field pathfinding works much like A*, but in reverse. Instead of a specific navigation agent plotting a path towards its target, the algorithm starts at the target and works its way outward, assigning a direction towards the target for every navigation space on the grid, creating a flow field. There is a fixed cost for updating the flow field every frame, but after that, navigation agents only have to look up their current space in the flow field to see which direction they should be moving. This makes it an ideal navigation system for a game like SZ:E, with only one player and potentially many enemies.
In SZ:E, each level is authored as a prefab, and during authoring, a grid is generated to indicate which spaces are blocked or open. Because enemies randomly spawn with each level, this grid also doubles as a collection of possible zombie spawn locations.
Anything that might need to navigate the grid is given a navigation component:
public struct NavigationAgent : IComponentData
{
    public float2 navigationDirection;
    public bool onTarget;
}
Each space in the navigation grid is represented by a bitmask that indicates which neighboring directions are open. This representation is compact and efficient, since the blocked spaces don’t change at runtime and spaces are simply blocked or unblocked, with all unblocked spaces having equal traversal cost.
[Flags]
public enum GridDirectionFlags : byte
{
    Up = 1 << 0,
    Down = 1 << 1,
    Left = 1 << 2,
    Right = 1 << 3,
    UpLeft = 1 << 4,
    UpRight = 1 << 5,
    DownLeft = 1 << 6,
    DownRight = 1 << 7
}
To prevent zombies from navigating through walls, these open directions are pre-processed with the level to exclude a diagonal if either of its two adjacent spaces is blocked.
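The pre-processing pass itself isn’t shown in the post; a minimal sketch of what it might look like, assuming each space’s cardinal flags have already been computed (the helper name is illustrative):

// Hypothetical helper: a diagonal is only kept open when both of its
// adjacent cardinal directions are open, so agents can't cut corners
// through walls.
private static GridDirectionFlags RemoveBlockedDiagonals(GridDirectionFlags directions)
{
    const GridDirectionFlags upLeft = GridDirectionFlags.Up | GridDirectionFlags.Left;
    const GridDirectionFlags upRight = GridDirectionFlags.Up | GridDirectionFlags.Right;
    const GridDirectionFlags downLeft = GridDirectionFlags.Down | GridDirectionFlags.Left;
    const GridDirectionFlags downRight = GridDirectionFlags.Down | GridDirectionFlags.Right;

    if ((directions & upLeft) != upLeft) { directions &= ~GridDirectionFlags.UpLeft; }
    if ((directions & upRight) != upRight) { directions &= ~GridDirectionFlags.UpRight; }
    if ((directions & downLeft) != downLeft) { directions &= ~GridDirectionFlags.DownLeft; }
    if ((directions & downRight) != downRight) { directions &= ~GridDirectionFlags.DownRight; }
    return directions;
}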
Prior to running AI logic, the flow field is recalculated each frame, and each navigation agent is updated with its current intended navigation direction. The first step is generating a cost for each space; the grid for each level is small enough that each space’s cost fits in a single byte. First, a job runs over every space in parallel to initialize it to the maximum cost value of 255. Next, a second job starts at the target’s space and works outward, assigning a cost to every space that isn’t blocked.
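The initialization job isn’t shown in the post, but it is simple enough to sketch (a minimal reconstruction, not the shipped code):

// Runs once per space in parallel, resetting the field before the flood fill.
[BurstCompile]
private struct InitializeNavigationCostsJob : IJobParallelFor
{
    public NativeArray<byte> spaceCosts;

    public void Execute(int index)
    {
        spaceCosts[index] = byte.MaxValue;
    }
}

The second job, the flood fill itself, then assigns the actual costs: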
private struct AssignNavigationCostsJob : IJob
{
    [ReadOnly] public NativeArray<GridDirectionFlags> openSpaceDirections;
    public NativeArray<byte> spaceCosts;
    public GridArea dimensions;
    public ushort startSpace;

    // CanGoDirection and the GetNeighborSpace* functions are small helpers
    // defined alongside the navigation code.
    public void Execute()
    {
        NativeQueue<ushort> pendingSpaces = new NativeQueue<ushort>(Allocator.Temp);
        // The target space costs nothing to reach; every other space was
        // initialized to byte.MaxValue by the previous job.
        spaceCosts[startSpace] = 0;
        pendingSpaces.Enqueue(startSpace);
        while (pendingSpaces.Count > 0)
        {
            ushort space = pendingSpaces.Dequeue();
            GridDirectionFlags openDirections = openSpaceDirections[space];
            byte neighborCost = (byte)math.min(spaceCosts[space] + 1, byte.MaxValue);
            if (CanGoDirection(openDirections, GridDirectionFlags.Up))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceUp(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.Down))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceDown(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.Left))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceLeft(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.Right))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceRight(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.UpLeft))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceUpLeft(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.UpRight))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceUpRight(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.DownLeft))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceDownLeft(space, dimensions), neighborCost);
            }
            if (CanGoDirection(openDirections, GridDirectionFlags.DownRight))
            {
                ProcessNeighbor(pendingSpaces, GetNeighborSpaceDownRight(space, dimensions), neighborCost);
            }
        }
    }

    private void ProcessNeighbor(NativeQueue<ushort> pendingSpaces, ushort neighborSpace, byte cost)
    {
        if (spaceCosts[neighborSpace] == byte.MaxValue)
        {
            spaceCosts[neighborSpace] = cost;
            pendingSpaces.Enqueue(neighborSpace);
        }
        else if (cost < spaceCosts[neighborSpace])
        {
            spaceCosts[neighborSpace] = cost;
        }
    }
}
After this, a third job runs in parallel over all spaces to calculate the best navigation direction. This direction is stored as a byte representing one of eight possible directions from the space.
private struct AssignNavigationDirectionsJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<GridDirectionFlags> openSpaceDirections;
    [ReadOnly] public NativeArray<byte> spaceCosts;
    public NativeArray<byte> spaceDirectionIndices;
    public GridArea dimensions;

    public void Execute(int index)
    {
        byte lowestCost = spaceCosts[index];
        if (lowestCost == 0)
        {
            spaceDirectionIndices[index] = 0;
            return;
        }

        GridDirectionFlags openDirections = openSpaceDirections[index];
        ushort space = (ushort)index;
        byte directionIndex = 0;
        // _upDirectionIndex and the other direction-index constants are
        // defined alongside the navigation code.
        if (CanGoDirection(openDirections, GridDirectionFlags.Up))
        {
            UpdateDirection(GetNeighborSpaceUp(space, dimensions), _upDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.Down))
        {
            UpdateDirection(GetNeighborSpaceDown(space, dimensions), _downDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.Left))
        {
            UpdateDirection(GetNeighborSpaceLeft(space, dimensions), _leftDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.Right))
        {
            UpdateDirection(GetNeighborSpaceRight(space, dimensions), _rightDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.UpLeft))
        {
            UpdateDirection(GetNeighborSpaceUpLeft(space, dimensions), _upLeftDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.UpRight))
        {
            UpdateDirection(GetNeighborSpaceUpRight(space, dimensions), _upRightDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.DownLeft))
        {
            UpdateDirection(GetNeighborSpaceDownLeft(space, dimensions), _downLeftDirectionIndex, ref lowestCost, ref directionIndex);
        }
        if (CanGoDirection(openDirections, GridDirectionFlags.DownRight))
        {
            UpdateDirection(GetNeighborSpaceDownRight(space, dimensions), _downRightDirectionIndex, ref lowestCost, ref directionIndex);
        }

        spaceDirectionIndices[index] = directionIndex;
    }

    private void UpdateDirection(ushort neighborSpace, byte neighborDirectionIndex, ref byte lowestCost, ref byte directionIndex)
    {
        byte neighborCost = spaceCosts[neighborSpace];
        if (neighborCost < lowestCost)
        {
            lowestCost = neighborCost;
            directionIndex = neighborDirectionIndex;
        }
    }
}
Finally, each navigation agent uses bilinear interpolation to blend between the directions of the four nearest grid spaces, which keeps movement smooth at the borders between spaces. If one of the neighboring spaces is blocked, the direction toward the agent is used in its place, which helps push agents away from walls. This is calculated in parallel for each navigation agent and written to its NavigationAgent component for later use by the AI systems.
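The blending step isn’t shown in the post either; here is a hedged sketch of the idea, with WorldToGrid and GetSpaceDirection as illustrative helpers rather than the shipped code:

// Samples the flow field at an arbitrary world position by bilinearly
// blending the directions of the four nearest grid spaces.
private float2 SampleFlowField(float2 worldPosition)
{
    // The integer part selects the lower-left cell; the fractional part
    // drives the blend between neighboring cells.
    float2 gridPosition = WorldToGrid(worldPosition) - 0.5f;
    int2 cell = (int2)math.floor(gridPosition);
    float2 t = gridPosition - (float2)cell;

    // GetSpaceDirection would map the stored byte direction index to a unit
    // vector, or return a vector pointing away from a blocked space.
    float2 d00 = GetSpaceDirection(cell);
    float2 d10 = GetSpaceDirection(cell + new int2(1, 0));
    float2 d01 = GetSpaceDirection(cell + new int2(0, 1));
    float2 d11 = GetSpaceDirection(cell + new int2(1, 1));

    // Standard bilinear interpolation of the four directions.
    float2 bottom = math.lerp(d00, d10, t.x);
    float2 top = math.lerp(d01, d11, t.x);
    return math.normalizesafe(math.lerp(bottom, top, t.y));
}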
ANIMATION
One of the main appeals of ECS is the ability to simulate a high volume of entities on screen simultaneously. Unfortunately, when the project began, Unity did not support skinned mesh animation for entities in any capacity. (This feature is supported to some extent now, though primarily in Hybrid Renderer V2, which is not yet available for use on mobile; for this project, we used Hybrid Renderer V1. You can read more about the Hybrid Renderer package and the differences between the two versions here.)
With this limitation, we had a couple of options:
Use ECS to update the zombies’ navigation and AI, but have each represented by a GameObject in the scene. Though this would have worked, in many ways it would have defeated the purpose of using ECS altogether.
Use a shader to skin the mesh on the GPU.
I chose the latter option, primarily for performance reasons: interfacing between ECS and GameObjects isn’t efficient for large numbers of entities. I did, however, decide to use the GameObject approach for the hero character, because we required animation blending and IK for the arms to hold the weapon. Since there is only one hero character on screen at a time, the performance cost here was negligible.
Unity utilized this technique for their Nordeus demo showcasing ECS, and Joachim Ante created a GitHub project that isolated this specific functionality.
I used the relevant code from this project, cleaning it up and turning it into an editor asset that could be used to bake animations.
The editor tool packs all of the animations from a character’s animation list into three ARGBFloat textures at a given framerate. Three textures are generated for a skinned mesh, each representing one row of every bone’s transformation matrix at each animation frame. Each pixel row of a texture represents a bone in the original skinned mesh, and each pixel in a row represents a portion of that bone’s transform at a specific animation frame. Animations are packed end-to-end in each row, so any animation can be referenced by a start and end pixel along the horizontal axis of the texture.
The tool also outputs a mesh with the bone indices and weights baked into the vertex UVs, with a maximum of two bones per vertex. The first UV channel holds the original mesh UVs for texturing. The second UV channel contains the first bone index in U and the second bone index in V. The third UV channel contains the first bone’s weight in U and the second bone’s weight in V.
BoneWeight[] boneWeights = originalMesh.boneWeights;
Vector2[] boneIndexUVs = new Vector2[boneWeights.Length];
Vector2[] boneWeightUVs = new Vector2[boneWeights.Length];
// Each pixel row of the animation texture is one bone, so bone indices are
// normalized by the bone count (the texture height).
float boneIndexTextureScale = 1.0f / originalMesh.bindposes.Length;
for (int i = 0; i < boneWeights.Length; i++)
{
    BoneWeight weight = boneWeights[i];
    boneIndexUVs[i] = new Vector2(weight.boneIndex0 + 0.5f, weight.boneIndex1 + 0.5f) * boneIndexTextureScale;
    // Renormalize the two largest weights so they sum to one.
    boneWeightUVs[i] = new Vector2(weight.weight0, weight.weight1) / Mathf.Max(weight.weight0 + weight.weight1, Mathf.Epsilon);
}
The animation frame data also needed to be passed per zombie to the shader. Fortunately, the Hybrid Renderer provides a streamlined way to do this via a component, using the MaterialProperty attribute.
[MaterialProperty("_AnimationFrameData", MaterialPropertyFormat.Float4)] public struct GPUSkinnedMeshAnimationFrameData : IComponentData { public float4 value; }
For each zombie, I ran a system that updated its current animation state and set the animation frame data on the material property. The current and next frame texture coordinates were stored in the x and y respectively, and the normalized interpolation between them was stored in the z.
float interpolatedFrame = animationTime.value * animation.framerate;
float frame0, frame1;
if (animation.selectedClip.loop)
{
    interpolatedFrame = Loop(interpolatedFrame, animation.selectedClip.frameDuration);
    frame0 = math.floor(interpolatedFrame);
    frame1 = math.floor(Loop(interpolatedFrame + 1.0f, animation.selectedClip.frameDuration));
}
else
{
    float maxFrame = animation.selectedClip.frameDuration - 0.5f;
    interpolatedFrame = math.min(interpolatedFrame, maxFrame);
    frame0 = math.floor(interpolatedFrame);
    frame1 = math.min(frame0 + 1.0f, maxFrame);
}
frameData.value = new float4(
    GetFrameTexcoord(frame0, animation.texcoordStart, animation.texcoordMultiplier), // x
    GetFrameTexcoord(frame1, animation.texcoordStart, animation.texcoordMultiplier), // y
    interpolatedFrame - frame0,                                                      // z
    0.0f);                                                                           // w
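Loop and GetFrameTexcoord aren’t shown above. A minimal sketch of what they might look like, assuming texcoordStart points at the clip’s first pixel and texcoordMultiplier is the width of one frame in texture space:

// Wraps a frame index into [0, frameDuration) for looping clips.
private static float Loop(float frame, float frameDuration)
{
    return frame - math.floor(frame / frameDuration) * frameDuration;
}

// Maps a frame index to the horizontal texture coordinate of its pixel.
private static float GetFrameTexcoord(float frame, float texcoordStart, float texcoordMultiplier)
{
    return texcoordStart + frame * texcoordMultiplier;
}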
Finally, in the vertex shader, I sampled the textures using the animation data and vertex UVs. I used this to recreate the bone matrix at that frame.
inline float4x4 CreateMatrix(float texturePosition, float boneId)
{
    float4 row0 = SAMPLE_TEXTURE2D_LOD(_AnimationTexture0, sampler_AnimationTexture0, float2(texturePosition, boneId), 0);
    float4 row1 = SAMPLE_TEXTURE2D_LOD(_AnimationTexture1, sampler_AnimationTexture1, float2(texturePosition, boneId), 0);
    float4 row2 = SAMPLE_TEXTURE2D_LOD(_AnimationTexture2, sampler_AnimationTexture2, float2(texturePosition, boneId), 0);
    float4x4 reconstructedMatrix = float4x4(row0, row1, row2, float4(0, 0, 0, 1));
    return reconstructedMatrix;
}

inline float4x4 CalculateSkinMatrix(float2 boneIds, float2 boneInfluences)
{
    float4x4 bone0Frame0Matrix = CreateMatrix(_AnimationFrameData.x, boneIds.x);
    float4x4 bone0Frame1Matrix = CreateMatrix(_AnimationFrameData.y, boneIds.x);
    float4x4 bone0Matrix = bone0Frame0Matrix * (1.0 - _AnimationFrameData.z) + bone0Frame1Matrix * _AnimationFrameData.z;

    float4x4 bone1Frame0Matrix = CreateMatrix(_AnimationFrameData.x, boneIds.y);
    float4x4 bone1Frame1Matrix = CreateMatrix(_AnimationFrameData.y, boneIds.y);
    float4x4 bone1Matrix = bone1Frame0Matrix * (1.0 - _AnimationFrameData.z) + bone1Frame1Matrix * _AnimationFrameData.z;

    return bone0Matrix * boneInfluences.x + bone1Matrix * boneInfluences.y;
}
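For completeness, usage in the vertex shader might look something like this. The input channel names are assumptions based on the UV layout described earlier, not the shipped shader:

// uv1 carries the two bone indices, uv2 the two normalized bone weights.
float4x4 skinMatrix = CalculateSkinMatrix(input.uv1.xy, input.uv2.xy);
float4 skinnedPositionOS = mul(skinMatrix, float4(input.positionOS.xyz, 1.0));
output.positionCS = TransformObjectToHClip(skinnedPositionOS.xyz);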
This is also how I created the zombie death animations. Unity features a lesser-known class called GameObjectRecorder that makes it straightforward to run physics simulations in the editor and record them to an animation file. Our artist created a version of each zombie mesh broken into pieces, as if the character had been shattered. I then wrote a tool to assign each piece a collider and attach the pieces together with fixed joints. Next, I ran a scene in the editor and “exploded” the zombie with a few parameters, repeating with different parameters until I achieved the desired result, then saved the resulting animation file. Once the animation was captured, I ran it, along with the shattered mesh, through the GPU skinning pipeline to create an effect that could be played back at runtime.
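The recording step is only a few calls; a rough sketch, assuming an editor-only helper attached to the shattered zombie root (the class and field names are illustrative):

#if UNITY_EDITOR
using UnityEditor.Animations;
using UnityEngine;

// Hypothetical helper: records the exploding pieces each frame and saves
// the result to a pre-created AnimationClip asset when disabled.
public class DeathAnimationRecorder : MonoBehaviour
{
    public AnimationClip clip;

    private GameObjectRecorder _recorder;

    private void Start()
    {
        _recorder = new GameObjectRecorder(gameObject);
        // Bind every child transform so each shattered piece is captured.
        _recorder.BindComponentsOfType<Transform>(gameObject, true);
    }

    private void LateUpdate()
    {
        _recorder.TakeSnapshot(Time.deltaTime);
    }

    private void OnDisable()
    {
        if (_recorder != null && _recorder.isRecording)
        {
            _recorder.SaveToClip(clip);
        }
    }
}
#endif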
The primary drawback of the current animation system is the inability to blend between animations. This would require passing more data to the shader from the two animations being blended, recreating four bone matrices instead of two. For performance purposes, I did not attempt to do this for SZ:E, but I would like to experiment with blending more on future projects.
The other drawback of the animation system is that it doesn’t work on older Android devices. I believe this may be due to a lack of support for the ARGBFloat texture format. If so, it may be fixable by using lower-precision textures normalized by some factor, then re-expanding the values in the shader. This currently isn’t a high priority, but it could be useful for future updates.
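If that turns out to be the fix, the shader-side decode would be tiny. A hypothetical sketch, where _BoneValueRange is a property written at bake time holding the minimum baked value in x and the value range in y:

// Expand a normalized [0, 1] sample back into the original matrix-row range.
float4 DecodeRow(float4 encodedRow)
{
    return _BoneValueRange.x + encodedRow * _BoneValueRange.y;
}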
BLOODWORK
No zombie game is complete without blood, and during development the team wanted to guarantee that SZ:E would deliver on gore. When zombies were shot, we wanted blood splatters to appear on nearby surfaces, so I needed to develop a system to support this. Decals are the easiest method, but avoiding overlapping transparency is crucial for performance on mobile, so supporting a lot of blood required a different approach.
Looking to Splatoon for inspiration, I started researching how I might create blood splatters similar to that game’s signature level-painting effect. To the best of my knowledge, the developers of Splatoon have never disclosed their method publicly, but after digging around I found the most plausible implementation to be this one. This version of the effect was thorough, featuring different splat icons and normal mapping, two elements I didn’t need for our game. Instead, I implemented a simplified version, tailored to our desired effect and cheaper to run.
The first step was to generate lightmap UVs for each level. Unity will generate lightmap UVs for an individual mesh via its import settings, but to lay out those UVs in a shared lightmap, the objects must be part of a scene. This, unfortunately, was not possible, because our levels were designed as prefabs so that they could be easily converted to entity representations.
Laying out lightmap UVs is essentially the same problem as sprite packing, and after some research I found a packing algorithm that I liked, which was relatively easy to port to Unity:
public static class RectanglePackingUtility
{
    private struct Space
    {
        public float x;
        public float y;
        public float width;
        public float height;
    }

    private struct Rectangle
    {
        public float2 size;
        public int originalIndex;
    }

    private struct RectangleComparerArea : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign((rectangle1.size.x * rectangle1.size.y) - (rectangle0.size.x * rectangle0.size.y));
        }
    }

    private struct RectangleComparerPerimeter : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign((rectangle1.size.x + rectangle1.size.y) - (rectangle0.size.x + rectangle0.size.y));
        }
    }

    private struct RectangleComparerBiggerSide : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign(math.max(rectangle1.size.x, rectangle1.size.y) - math.max(rectangle0.size.x, rectangle0.size.y));
        }
    }

    private struct RectangleComparerWidth : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign(rectangle1.size.x - rectangle0.size.x);
        }
    }

    private struct RectangleComparerHeight : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign(rectangle1.size.y - rectangle0.size.y);
        }
    }

    private struct RectangleComparerPathological : IComparer<Rectangle>
    {
        public int Compare(Rectangle rectangle0, Rectangle rectangle1)
        {
            return (int)math.sign(
                (math.max(rectangle1.size.x, rectangle1.size.y) / math.min(rectangle1.size.x, rectangle1.size.y) * rectangle1.size.x * rectangle1.size.y) -
                (math.max(rectangle0.size.x, rectangle0.size.y) / math.min(rectangle0.size.x, rectangle0.size.y) * rectangle0.size.x * rectangle0.size.y));
        }
    }

    [BurstCompile]
    public static NativeArray<float2> PackRectangles(NativeArray<float2> rectangleSizes, Allocator allocator, int maxIterations, out float packedSize)
    {
        float2 totalSize = float2.zero;
        float2 minSize = new float2(float.MaxValue, float.MaxValue);
        for (int i = 0; i < rectangleSizes.Length; i++)
        {
            float2 rectangleSize = rectangleSizes[i];
            totalSize += rectangleSize;
            minSize = math.min(rectangleSize, minSize);
        }

        NativeArray<Rectangle> rectangles = new NativeArray<Rectangle>(rectangleSizes.Length, Allocator.Temp);
        for (int i = 0; i < rectangleSizes.Length; i++)
        {
            rectangles[i] = new Rectangle() { size = rectangleSizes[i], originalIndex = i };
        }

        NativeArray<Rectangle> sortedRectanglesArea = GetSortedRectangles<RectangleComparerArea>(rectangles);
        NativeArray<Rectangle> sortedRectanglesPerimeter = GetSortedRectangles<RectangleComparerPerimeter>(rectangles);
        NativeArray<Rectangle> sortedRectanglesBiggerSide = GetSortedRectangles<RectangleComparerBiggerSide>(rectangles);
        NativeArray<Rectangle> sortedRectanglesWidth = GetSortedRectangles<RectangleComparerWidth>(rectangles);
        NativeArray<Rectangle> sortedRectanglesHeight = GetSortedRectangles<RectangleComparerHeight>(rectangles);
        NativeArray<Rectangle> sortedRectanglesPathological = GetSortedRectangles<RectangleComparerPathological>(rectangles);

        NativeArray<float2> offsets = new NativeArray<float2>(rectangleSizes.Length, allocator);
        NativeArray<float2> tmpOffsets = new NativeArray<float2>(rectangleSizes.Length, Allocator.Temp);

        // Binary search for the smallest square that fits every rectangle,
        // trying each sort order until one packs successfully.
        float size = packedSize = math.max(totalSize.x, totalSize.y);
        float step = size;
        float minStep = math.max(minSize.x, minSize.y) * 0.5f;
        for (int i = 0; i < maxIterations; i++)
        {
            step *= 0.5f;
            bool packedRectangles = PackRectangles(sortedRectanglesArea, size, minSize, ref tmpOffsets) ||
                PackRectangles(sortedRectanglesPerimeter, size, minSize, ref tmpOffsets) ||
                PackRectangles(sortedRectanglesBiggerSide, size, minSize, ref tmpOffsets) ||
                PackRectangles(sortedRectanglesWidth, size, minSize, ref tmpOffsets) ||
                PackRectangles(sortedRectanglesHeight, size, minSize, ref tmpOffsets) ||
                PackRectangles(sortedRectanglesPathological, size, minSize, ref tmpOffsets);
            if (packedRectangles)
            {
                offsets.CopyFrom(tmpOffsets);
                packedSize = size;
                size -= step;
            }
            else
            {
                size += step;
            }
            if (step <= minStep)
            {
                break;
            }
        }

        return offsets;
    }

    [BurstCompile]
    private static NativeArray<Rectangle> GetSortedRectangles<T>(NativeArray<Rectangle> rectangles) where T : struct, IComparer<Rectangle>
    {
        NativeArray<Rectangle> sortedRectangles = new NativeArray<Rectangle>(rectangles.Length, Allocator.Temp);
        rectangles.CopyTo(sortedRectangles);
        sortedRectangles.Sort(new T());
        return sortedRectangles;
    }

    [BurstCompile]
    private static bool PackRectangles(NativeArray<Rectangle> rectangles, float size, float2 minSize, ref NativeArray<float2> offsets)
    {
        NativeList<Space> spaces = new NativeList<Space>(rectangles.Length + 2, Allocator.Temp);
        spaces.Add(new Space() { x = 0.0f, y = 0.0f, width = size, height = size });
        for (int i = 0; i < rectangles.Length; i++)
        {
            Rectangle rectangle = rectangles[i];
            float2 rectangleSize = rectangle.size;
            int selectedSpaceIndex = -1;
            // Search the most recently added (smallest) spaces first.
            for (int j = spaces.Length - 1; j >= 0; j--)
            {
                Space space = spaces[j];
                if (space.width >= rectangleSize.x && space.height >= rectangleSize.y)
                {
                    selectedSpaceIndex = j;
                    break;
                }
            }

            // Couldn't find a fitting space.
            if (selectedSpaceIndex < 0)
            {
                return false;
            }

            Space selectedSpace = spaces[selectedSpaceIndex];
            spaces.RemoveAtSwapBack(selectedSpaceIndex);
            offsets[rectangle.originalIndex] = new float2(selectedSpace.x, selectedSpace.y);

            float2 difference = new float2(selectedSpace.width - rectangleSize.x, selectedSpace.height - rectangleSize.y);
            if (difference.x < minSize.x)
            {
                if (difference.y > minSize.y)
                {
                    spaces.Add(new Space() { x = selectedSpace.x, y = selectedSpace.y + rectangleSize.y, width = selectedSpace.width, height = difference.y });
                }
                continue;
            }
            if (difference.y < minSize.y)
            {
                spaces.Add(new Space() { x = selectedSpace.x + rectangleSize.x, y = selectedSpace.y, width = difference.x, height = selectedSpace.height });
                continue;
            }
            if (difference.x < difference.y)
            {
                spaces.Add(new Space() { x = selectedSpace.x, y = selectedSpace.y + rectangleSize.y, width = selectedSpace.width, height = difference.y });
                spaces.Add(new Space() { x = selectedSpace.x + rectangleSize.x, y = selectedSpace.y, width = difference.x, height = rectangleSize.y });
            }
            else
            {
                spaces.Add(new Space() { x = selectedSpace.x + rectangleSize.x, y = selectedSpace.y, width = difference.x, height = selectedSpace.height });
                spaces.Add(new Space() { x = selectedSpace.x, y = selectedSpace.y + rectangleSize.y, width = rectangleSize.x, height = difference.y });
            }
        }
        return true;
    }
}
My implementation returns an array of float2s, the same length as the list of input rectangle sizes, plus a single float for the packed size. Each float2 is the offset of the corresponding input rectangle in the resulting lightmap (or bloodmap, in this case). When a lightmap is generated for a room, each mesh is assigned a material property with its lightmap UV offset and scale stored in a float4.
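The material property component referenced below isn’t listed in the post, but given the shader code that follows, it presumably looks something like this (offset in xy, scale in zw):

// Presumed shape of the per-mesh splat map offset/scale property.
[MaterialProperty("_SplatMapOffset", MaterialPropertyFormat.Float4)]
public struct MaterialPropertySplatMapOffset : IComponentData
{
    public float4 value;
}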
When a zombie is shot, I cast a ray behind it in the direction of the impact and spawn multiple blood splatter entities, positioned on the ground and wherever the ray hits a wall. A corresponding blood splatter system then processes and deletes these entities toward the end of the frame, before the scene is rendered.
public struct BloodSplatter : IComponentData
{
    public float3 position;
    public float radius;
}
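Spawning one of these from the impact code is then just a couple of command buffer calls; a hedged sketch, with the hit position and radius as illustrative inputs:

// Queue a splatter entity for the blood splatter system to consume.
Entity splatterEntity = commandBuffer.CreateEntity();
commandBuffer.AddComponent(splatterEntity, new BloodSplatter
{
    position = hitPosition, // Where the ray struck the floor or wall.
    radius = splatterRadius // Tuned per weapon and impact.
});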
In the blood splatter system, I first collect all of the blood splatter entities created during the frame and upload them to the GPU. The number of blood splatters rendered per frame is capped at eight for performance reasons.
EntityCommandBuffer commandBuffer = _entityCommandBufferSystem.CreateCommandBuffer();
NativeArray<Entity> splatterEntities = _splatterEntityQuery.ToEntityArray(Allocator.Temp);
NativeArray<Vector4> splatters = new NativeArray<Vector4>(_maximumSplatsPerFrame, Allocator.Temp);
int count = math.min(splatters.Length, splatterEntities.Length);
for (int i = 0; i < count; i++)
{
    Entity splatterEntity = splatterEntities[i];
    BloodSplatter splatter = EntityManager.GetComponentData<BloodSplatter>(splatterEntity);
    splatters[i] = new Vector4(splatter.position.x, splatter.position.y, splatter.position.z, splatter.radius);
    commandBuffer.DestroyEntity(splatterEntity);
}
Next, I render each mesh that has a MaterialPropertySplatMapOffset attached, using the splat map UV as the clip space position instead of the vertex position and passing the world space position to the fragment shader.
output.positionCS = float4((_SplatMapOffset.xy + input.texcoord1.xy * _SplatMapOffset.zw) * 2.0 - 1.0, 0.5, 1.0);
output.positionWS = TransformObjectToWorld(input.positionOS.xyz);
In the fragment shader, I iterate through all of the frame’s blood splatters, outputting the normalized world-space distance to each splat. If there are more than eight splats in a frame, the remainder are processed in the next frame. In theory, the backlog could grow without bound if more than eight splats were produced every frame, but in practice this never happens. The new blood splatters are rendered on top of the previous frame’s results, using double buffering.
float currentSplat = 0.0;
for (uint i = 0; i < _TotalSplats; i++)
{
    currentSplat = max(currentSplat, 1.0 - smoothstep(0, _Splats[i].w, distance(input.positionWS, _Splats[i].xyz)));
}
To display the splat map, a detail blood height map is tiled over the level using the splat map UVs.
In the shader for the environment objects, I clip the height map based on the blood splatter value at that coordinate, so that the height map is only visible where splatters have occurred. The result is multiplied by a solid blood color set by the artists.
half splat = SAMPLE_TEXTURE2D(_SplatTexture, sampler_SplatTexture, input.splatUV).r;
half splatDetail = SAMPLE_TEXTURE2D(_SplatDetailTexture, sampler_SplatDetailTexture, input.splatDetailUV).r;
mainColor = lerp(mainColor, _BloodColor, (1.0 - smoothstep(splat - 0.05, splat, splatDetail)) * _BloodColor.a);
The drawback of this approach is that it can create seams in the detail map at the borders of UV tiles, but these are barely visible thanks to the camera’s placement and distance. This method lets us display high-frequency blood detail with a relatively low-resolution splat map texture, so the benefits outweigh the drawbacks.
EFFECTS
In addition to skinned mesh animation, ECS does not yet support particles, so it was necessary to develop a custom particle solution. Fortunately, Unity’s particle systems provide several functions that made this relatively straightforward.
For each particle effect that can be spawned in the game, I created a component data struct holding all of the necessary information. For example, here is the explosion particle component:
public struct ParticleExplosion : IComponentData
{
    public float3 position;
}
In the presentation system group, prior to rendering, a corresponding update system references a single Particle System. In its update method, I collect all of the newly created particle entities and tell the Particle System to emit that number of particles.
NativeArray<ParticleExplosion> explosionData = _explosionQuery.ToComponentDataArray<ParticleExplosion>(Allocator.TempJob); int explosionParticleCount = _explosionSystem.particleCount; _explosionSystem.Emit(explosionData.Length); JobHandle updateParticlesJobHandle = new UpdateParticlesJob() { data = explosionData, offset = explosionParticleCount }.Schedule(_explosionSystem, inputDeps); explosionData.Dispose(updateParticlesJobHandle); _commandBufferSystem.CreateCommandBuffer().DestroyEntity(_explosionQuery);
Unity includes a job interface, IJobParticleSystem, that can be used to set particle data. I use a job implementing this interface to position all of the newly emitted particles.
[BurstCompile]
private struct UpdateParticlesJob : IJobParticleSystem
{
    [ReadOnly] public NativeArray<ParticleExplosion> data;
    public int offset;

    public void Execute(ParticleSystemJobData jobData)
    {
        NativeArray<float> positionsX = jobData.positions.x;
        NativeArray<float> positionsY = jobData.positions.y;
        NativeArray<float> positionsZ = jobData.positions.z;
        for (int i = 0; i < data.Length; i++)
        {
            int particleIndex = offset + i;
            float3 position = data[i].position;
            positionsX[particleIndex] = position.x;
            positionsY[particleIndex] = position.y;
            positionsZ[particleIndex] = position.z;
        }
    }
}
This system allows us to spawn as many particles as we need from ECS, using a single Particle System.
Currently, the primary drawback of this approach is that I have to write a new particle component and update system every time a new particle effect is added to the game. This isn’t a major burden, because the game doesn’t use many unique particles. In the future, I aim to leverage the GameObject conversion pipeline to write a general particle authoring component that could be assigned any particle system. Using a GameObjectConversionSystem, I could then collect all of the referenced particle systems and assign each an ID. This would let me use one particle component and one update system for every particle in the game, as sketched below.
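A rough sketch of what that might look like. All names here are hypothetical, and this is future work rather than shipped code; the ParticleSystemRegistry is an assumed runtime lookup from ID to ParticleSystem.

// Authoring component assigned to a prefab alongside its particle system.
public class ParticleAuthoring : MonoBehaviour
{
    public ParticleSystem particles;
}

// Component stored on the converted entity so runtime code can spawn
// requests against the right particle system.
public struct ParticleSystemReference : IComponentData
{
    public int id; // Index into the registered particle systems.
}

// Conversion system that registers each referenced particle system and
// stores the resulting ID on the converted entity.
public class ParticleConversionSystem : GameObjectConversionSystem
{
    protected override void OnUpdate()
    {
        Entities.ForEach((ParticleAuthoring authoring) =>
        {
            Entity entity = GetPrimaryEntity(authoring);
            int id = ParticleSystemRegistry.Register(authoring.particles);
            DstEntityManager.AddComponentData(entity, new ParticleSystemReference { id = id });
        });
    }
}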
UI ANIMATION
For UI animation in SZ:E, I used Unity’s Timeline sequencing tool along with several custom timeline tracks that I designed to simplify setting up UI tweens. I handled all of the UI animation and setup for this project, but the tool I developed is simple enough for an artist or someone with no coding experience to learn and use.
Each menu screen in SZ:E has an AnimateVisibleCanvas component attached, which defines a playable director for both the “show” and “hide” animations. I have found that the majority of non-looping UI animations are showing or hiding something, so this accounts for most use cases. The component exposes a “visible” property which, when set, either enables the canvas and plays the “show” director’s clip, or plays the “hide” director’s clip and disables the canvas after it finishes.
public virtual bool visible
{
    get { return _state == State.Showing || _state == State.Shown; }
    set
    {
        if (visible == value)
        {
            return;
        }
        if (value)
        {
            if (_state == State.Hiding)
            {
                _state = State.Showing;
                _hideAnimationDirector.Stop();
            }
            else
            {
                _state = State.Showing;
            }
            OnStateChanged(State.Showing);
            SetVisible(true);
            _showAnimationDirector.time = _showStartTime;
            _showAnimationDirector.Evaluate();
            _showAnimationDirector.Play();
        }
        else
        {
            if (_state == State.Showing)
            {
                _state = State.Hiding;
                _showAnimationDirector.Stop();
            }
            else
            {
                _state = State.Hiding;
            }
            OnStateChanged(State.Hiding);
            _hideAnimationDirector.time = _hideStartTime;
            _hideAnimationDirector.Evaluate();
            _hideAnimationDirector.Play();
        }
    }
}
For directly animating the UI, I have several tween tracks that can be added to the timeline. These include tweens for Canvas Group alpha, Graphic color, RectTransform, and Transform.
The tweens expose several options, such as the ability to tween relative to the initial position of the UI element.
By tweening relative to the initial position of the element, it is possible to adjust the layout of UI elements without changing the “show” and “hide” animations. Because this is all managed through the timeline, you can scrub animations in the editor, add complex animation tracks, activate and deactivate objects, and include audio effects.
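As a flavor of what those tracks do under the hood, here is a hedged sketch of a clip behaviour that tweens a RectTransform relative to its authored position. The names and structure are illustrative, not the shipped code:

using UnityEngine;
using UnityEngine.Playables;

// Hypothetical clip behaviour: tweens a RectTransform's anchoredPosition
// from startOffset to endOffset, relative to the element's authored position.
[System.Serializable]
public class RectTransformTweenBehaviour : PlayableBehaviour
{
    public Vector2 startOffset;
    public Vector2 endOffset;
    public AnimationCurve curve = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);

    private Vector2 _initialPosition;
    private bool _initialized;

    public override void ProcessFrame(Playable playable, FrameData info, object playerData)
    {
        RectTransform target = playerData as RectTransform;
        if (target == null)
        {
            return;
        }

        // Capture the authored layout position once, so the tween stays
        // correct even if the element is later moved in the layout.
        if (!_initialized)
        {
            _initialPosition = target.anchoredPosition;
            _initialized = true;
        }

        float t = (float)(playable.GetTime() / playable.GetDuration());
        Vector2 offset = Vector2.LerpUnclamped(startOffset, endOffset, curve.Evaluate(t));
        target.anchoredPosition = _initialPosition + offset;
    }
}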
SZ:E went through many UI modifications over the past year, and this flexibility was essential for rapid iteration. Without the ability to quickly preview and adjust UI animations in the timeline, the process would have been too cumbersome, and we likely would have abandoned UI animation altogether during development.