Momentum

What is this?

Momentum is a light 3D puzzle platformer set in a strange world whose technology runs on the force of time. To sustain it, time has been harvested from several timelines.


It is up to you to find the time vault and release time back into the world. You will face challenges as you traverse this strange world in search of the locks keeping the vault closed. Armed with a tool that can give and take time, you will be tested on your timing and planning.

You can download the game here: https://hannafriden.itch.io/momentum

  • Engine: Unity
  • Language: C#
  • Platform: PC
  • Development Time: 7 weeks
  • Team Size: 12

Pair Programming

My most notable tasks:

Tools Programmer

AI Companion


Miscellaneous tasks:

Menus

Audio Manager

Dynamic Crosshair

At the beginning of the project we weren’t sure exactly how the game would end up. All three of us programmers agreed to work in an entity-based way and built the foundations together: the player movement, the input system and the player tool/weapon base. That way, we all felt comfortable working alone on these parts if we had to.

 

The most notable tasks we pair programmed:

Player Base

Player Tool/Weapon Base

Player

Since we didn’t have any animators in our group, we decided to create a game with a first-person controller. We wanted our game designer to have full control over the values, the inputs and everything editable in the inspector.

Our game idea changed a lot during this project. This required us programmers to work in a very modular way: we didn’t want a change to one game mechanic to ripple through the rest of the code. We also wanted our designers to have a lot of freedom while building, so we decided to expose events for all actions. That way they had full control to change and/or extend any action.



public class InputComponent : MonoBehaviour
{
    public InputProfile inputProfile;

    public UnityEvent OnJump;
    public UnityEvent OnInteract;

    [Header("Triggers")]
    public UnityEvent OnPrimaryFire;
    public UnityEvent OnPrimaryFireDown;
    public UnityEvent OnPrimaryFireUp;

    public UnityEvent OnSecondaryFire;
    public UnityEvent OnSecondaryFireDown;
    public UnityEvent OnSecondaryFireUp;

    [Header("Axis")]
    public UnityEventVector2 OnLookAxis;
    public UnityEventVector2 OnMoveAxis;

    [Header("Other")]
    public UnityEvent OnPauseButton;

    void Update()
    {
        if (inputProfile.GetJumpButton()) OnJump.Invoke();
        if (inputProfile.GetPrimaryFireButton()) OnPrimaryFire.Invoke();
        if (inputProfile.GetSecondaryFireButton()) OnSecondaryFire.Invoke();
        if (inputProfile.GetPauseButton()) OnPauseButton.Invoke();
    }
}
    

My Role

Tools

I worked very closely with both our game designer and our level designer. Our levels were based on moving platforms, and the player could affect their movement by giving or extracting time. I saw our level designer struggling while building levels, since he had to jump between the editor and the game view every time he made a change. This is what made me decide to take on the role of tools programmer.

 

I worked on a visual component tool for moving and adding points to a path. I also made a tool to move the whole path without changing the points. Like all the other components we made, I made this tool modular; our level designer could add the component to any object to give it the same functionality.

After finishing the platform tool, the level designer asked me if I could make a similar tool for objects moving on a nav mesh.

After watching our level designer use my tools, I realized it would be easier if I added a preview of how the platform would move at runtime. With this addition, our level designer could change the speed of the platforms and see how all of them connected from the editor, rather than having to run the game.


public class PathPoint : MonoBehaviour
{
    [Header("Checks")]
    [Tooltip("True: show/use handles on each point \nFalse: hide the handles")]
    public bool editPath = true;

    [Space(15)]
    public Vector3[] points = new Vector3[1];

    private Vector3 _anchorPoint;

    [HideInInspector] public bool loop = false;

    void Awake()
    {
        _anchorPoint = transform.position;
    }

    // Returns the world-space position at the given percentage (0-1) along the path.
    public Vector3 GetPercentageAlongPath(float percentage)
    {
        if(_anchorPoint == Vector3.zero) _anchorPoint = transform.position;

        float totalLength = GetLength();
        float length = 0;
        Vector3 tempPos = Vector3.zero;

        for (int i = 0; i < points.Length; i++)
        {
            float currentPercentage = length / totalLength;
            length += (points[i] - points[(i + 1) % points.Length]).magnitude;
            float percentageAlong = length / totalLength;

            if(percentageAlong > percentage)
            {
                tempPos = GetPositionBetweenPoints(points[i], points[(i + 1) % points.Length], 
                    PercentageBetween(currentPercentage, percentageAlong, percentage));
                break;
            }
        }

        tempPos += _anchorPoint;
        return tempPos;
    }

    public Vector3 GetMetersAlongPath(float meters)
    {
        float totalLength = GetLength();
        return GetPercentageAlongPath(meters / totalLength);
    }

    float PercentageBetween(float a, float b, float value)
    {
        return (value - a) / (b - a);
    }

    Vector3 GetPositionBetweenPoints(Vector3 pointA, Vector3 pointB, float percentage)
    {
        return pointA + (pointB - pointA) * percentage;
    }

    public float GetLength()
    {
        float length = 0;
        int pointsLength = loop ? points.Length : points.Length - 1;
        for (int i = 0; i < pointsLength; i++)
        {
            length += (points[i] - points[(i + 1) % points.Length]).magnitude;
        }
        return length;
    }

    public Vector3 GetPointInWorldSpace(int index)
    {
        index = Mathf.Clamp(index, 0, points.Length -1);
        return points[index] + _anchorPoint;
    }
    
    public void SetAnchor(Vector3 position)
    {
        Vector3 deltaMovement = position - transform.position;
        transform.position = position;
        _anchorPoint = position;
        points[0] = Vector3.zero;

        for (int i = 1; i < points.Length; i++)
            points[i] -= deltaMovement;
        
    }

#if UNITY_EDITOR
    private void OnDrawGizmos()
    {
        if(_anchorPoint == Vector3.zero)
            _anchorPoint = transform.position;

        Gizmos.color = Color.blue;

        for (int i = 0; i < points.Length - 1; i++)
            Gizmos.DrawLine(points[i] + _anchorPoint, points[i + 1] + _anchorPoint);
    }
#endif
}

    



[RequireComponent(typeof(PathPoint), typeof(Rigidbody))]
[DisallowMultipleComponent, AddComponentMenu("Movables/Path Follower Movement")]

public class PathFollower : MonoBehaviour
{
    public enum Mode { Loop, PingPong, Once }

    public Mode mode;
    [Tooltip("How many seconds it takes to travel the whole path")] public float duration = 1;
    [Range(0, 1)] public float startOffset;
    public AnimationCurve moveCurve = new AnimationCurve(new Keyframe(0, 0, 1, 1), new Keyframe(1, 1, 1, 1));
    [SerializeField] private bool _showPath = true;

    [SerializeField] UnityEvent onAtStart;
    [SerializeField] UnityEvent onAtEnd;

    Rigidbody _rigidbody;
    float _direction = 1;
    float _gizmoDirection = 1;

    // Path
    PathPoint _path;
    float _pathProgress = 0;
    float _timeScale = 1;

    // Editor
    float _gizmoProgress;
    Vector3 _gizmoCurrentPosition;

    void Start()
    {
        _rigidbody = GetComponent<Rigidbody>();
        _path = GetComponent<PathPoint>();
        _pathProgress = startOffset;
    }

    void Reset()
    {
        _rigidbody = GetComponent<Rigidbody>();
        _rigidbody.isKinematic = true;
        _rigidbody.useGravity = false;
    }

    void FixedUpdate()
    {
        _pathProgress = UpdateProgress(_pathProgress, Time.deltaTime, ref _direction);
        _rigidbody.MovePosition(_path.GetPercentageAlongPath(moveCurve.Evaluate(_pathProgress)));

        if (_pathProgress <= 0.01f)
        {
            onAtStart.Invoke();
        }
        else if (1f - _pathProgress <= 0.01f)
        {
            onAtEnd.Invoke();
        }
    }

    float UpdateProgress(float progress, float deltaTime, ref float direction)
    {
        progress += (deltaTime * _timeScale * direction) / duration;

        if (mode == Mode.Loop)
        {
            _path.loop = true;
            progress %= 1;
        }
        else if (mode == Mode.PingPong)
        {
            _path.loop = false;
            progress = Mathf.Clamp01(progress);
            if (progress == 1 || progress == 0)
                direction *= -1;
        }
        else if (mode == Mode.Once)
        {
            _path.loop = false;
            progress = Mathf.Clamp01(progress);
        }

        return progress;
    }

#if UNITY_EDITOR
    void OnDrawGizmos()
    {
        MeshFilter meshFilter = GetComponent<MeshFilter>();
        _path = GetComponent<PathPoint>();
        Gizmos.color = Color.blue;

        if (!_showPath)
        {
            _gizmoCurrentPosition = _path.GetPercentageAlongPath(moveCurve.Evaluate(startOffset));
            _gizmoProgress = startOffset;
        }
        else
        {
            _gizmoProgress = UpdateProgress(_gizmoProgress, TimeHelper.gizmoDeltaTime, ref _gizmoDirection);
            _gizmoCurrentPosition = _path.GetPercentageAlongPath(moveCurve.Evaluate(_gizmoProgress));
            Gizmos.DrawSphere(_gizmoCurrentPosition, .3f);
        }

        if (meshFilter)
        {
            Gizmos.DrawWireMesh(meshFilter.sharedMesh, 0, _gizmoCurrentPosition, transform.rotation, transform.lossyScale);
            Gizmos.color = new Color(0, 0, 1, .7f);
            Gizmos.DrawMesh(meshFilter.sharedMesh, 0, _gizmoCurrentPosition, transform.rotation, transform.lossyScale);
        }
    }

    void OnDrawGizmosSelected()
    {
        if (!_path.editPath && !EditorApplication.isPlaying)
            _path.SetAnchor(transform.position);
    }
#endif

    public void SetTimeScale(float value)
    {
        _timeScale = value;
    }
}
    



[CustomEditor(typeof(PathPoint))]
public class PathPointEditor : Editor
{
    private Tool _latestTool;
    private GUIStyle _style = new GUIStyle();

    private Vector3 _position;
    private int _currentTarget;
    private PathPoint _pathPoint;

    private bool _oldEditPath;

    public override void OnInspectorGUI()
    {
        EditorGUILayout.HelpBox("Note: this will not manipulate the time of the object!", MessageType.Warning);
        base.OnInspectorGUI();
    }

    private void OnEnable()
    {
        _style.fontStyle = FontStyle.Bold;
        _style.fontSize = 20;
        _style.normal.textColor = Color.Lerp(Color.white, Color.blue, 0.5f);

        _latestTool = Tools.current;
        Tools.current = Tool.None;

        _pathPoint = target as PathPoint;
    }

    private void OnDisable()
    {
        Tools.current = _latestTool;
    }

    private void OnSceneGUI()
    {
        if (_pathPoint.editPath)
        {
            Tools.current = Tool.None;

            for (int i = 0; i < _pathPoint.points.Length; i++)
            {
                EditorGUI.BeginChangeCheck();

                Vector3 anchorPoint = _pathPoint.transform.position;
                Handles.Label(_pathPoint.points[i] + anchorPoint, i.ToString(), _style);
                Vector3 newPosition = Handles.PositionHandle(_pathPoint.points[i] + 
                    anchorPoint, Quaternion.identity);

                if (EditorGUI.EndChangeCheck())
                {
                    Undo.RecordObject(_pathPoint, "Move Handle");
                    if (i == 0)
                    {
                        _pathPoint.SetAnchor(newPosition);
                    }
                    else
                        _pathPoint.points[i] = newPosition - anchorPoint;
                }
            }
        }
        else
        {
            if (_oldEditPath != _pathPoint.editPath)
            {
                _oldEditPath = _pathPoint.editPath;
                Tools.current = Tool.Move;
            }
        }
    }
}
    

AI

To guide the player we wanted to add a display that would act as a companion with a personality. Something I had always wanted to do was create an AI with a mood, so I wanted to play around with an AI whose mood changed depending on the player’s actions.

 

My goal was to create a system that would make it easy for our designer to create custom moods for the AI.

 

 

Response card

I created a scriptable object holding all the data the AI would need for a response: the data we used to decide the AI’s behaviours. We could create one response card for each type of response/event the AI could have.

 

Each response card would have:

 

  1. A way of deciding how many responses the AI should have
  2. A way of deciding how many hate responses the AI should have, meaning the responses it could use if its mood was zero.

 

Each response element would have:

 

  1. A “happy barrier”-slider
  2. A list of responses.

 

Each response element had a way for us to decide how many unique responses the AI could have, depending on the “happy barrier”-slider.

 

Example:

If the AI’s mood is 95 or above out of 100 and we trigger an event, the AI takes the data from the card and checks the list for the corresponding “happy barrier”. It then picks a random text response from that list to say to the player.

 

To avoid the AI randomly picking the same line when the same event triggered repeatedly, I made sure a response could only repeat once the AI had been through the whole list of responses.
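The snippets below don’t show this no-repeat rule, but the idea can be sketched as a simple shuffle bag. This is a hypothetical `ResponseBag` class for illustration, not code from the project:

```csharp
using System;
using System.Collections.Generic;

// Sketch of "no repeats until the list is exhausted": lines are drawn in a
// random order, and the bag only refills once every line has been used.
public class ResponseBag
{
    private readonly List<string> _all;
    private readonly List<string> _remaining = new List<string>();
    private readonly Random _random = new Random();

    public ResponseBag(IEnumerable<string> responses)
    {
        _all = new List<string>(responses);
    }

    public string Next()
    {
        if (_remaining.Count == 0)
            _remaining.AddRange(_all); // everything used: repeats allowed again

        int index = _random.Next(_remaining.Count);
        string line = _remaining[index];
        _remaining.RemoveAt(index);    // can't be picked again this cycle
        return line;
    }
}
```

Drawing three lines from a three-line bag is then guaranteed to produce three distinct responses before any repeat can occur.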

 

I also added functionality to swap the text for audio clips, in case we had time or wanted to upgrade it in the future.



public class ResponseData : ScriptableObject
{
    [Header("AI Responses"), Tooltip("How many different type of responses depending on specific AI currentAIMood")]
    [SerializeField] private Response[] _moodResponses;

    [Header("AI Is Angry - barrier is 0"), Space(25), Tooltip("The AI will say this if it's mood is 0.")]
    [SerializeField] private Response _hateResponses;

    private void OnEnable() => _hateResponses.happyBarrier = 0;

    private string GetRandomAngryDefault() => _hateResponses._response.Length == 0 ? "..." : 
        _hateResponses._response[Random.Range(0, _hateResponses._response.Length)];

    private AudioClip GetRandomAngryDefaultClip()
    {
        if (_hateResponses._responseClip.Length == 0)
            return null;
        
        else
            return _hateResponses._responseClip[Random.Range(0, _hateResponses._responseClip.Length)];
    }

    public string GetResponse(int currentAIMood)
    {
        List<Response> relevantResponses = new List<Response>();

        relevantResponses.AddRange(_moodResponses.Where(response => response.happyBarrier <= currentAIMood));
        relevantResponses = relevantResponses.OrderByDescending(response => response.happyBarrier).ToList();

        return relevantResponses.Count == 0 ? GetRandomAngryDefault() :
            relevantResponses[0].GetResponseText();
    }
   
    public AudioClip GetRelevantResponseClip(int currentAIMood)
    {
        List<Response> relevantResponses = new List<Response>();

        relevantResponses.AddRange(_moodResponses.Where(response => response.happyBarrier <= currentAIMood));
        relevantResponses = relevantResponses.OrderByDescending(response => response.happyBarrier).ToList();

        return relevantResponses.Count == 0 ? GetRandomAngryDefaultClip() : relevantResponses[0].GetResponseClip();
    }
}

[Serializable]
public struct Response
{
    [Range(0, 100), Tooltip("The barrier for what mood the AI must be to tell the line")]
    public int happyBarrier;

    public string[] _response;
    public AudioClip[] _responseClip;
    
    public string GetResponseText() => _response.Length == 0 ? "..." : _response[Random.Range(0, _response.Length)];

    public AudioClip GetResponseClip() => _responseClip.Length == 0 ? null : _responseClip[Random.Range(0,
        _responseClip.Length)];

    
}
    



public class AIMoodChanger : MonoBehaviour
{
    private AIBehaviourComponent _aiBehaviour;
    private FaceChanger _aiFace;

    private void Start()
    {
        _aiBehaviour = FindObjectOfType<AIBehaviourComponent>();
        _aiFace = FindObjectOfType<FaceChanger>();
    }

    public void IncreaseMood(int moodToAdd)
    {
        _aiFace.UpdateFace();
        if (_aiBehaviour.currentMood.value + moodToAdd > 100)
            _aiBehaviour.currentMood.value = 100;

        else
            _aiBehaviour.currentMood.value += moodToAdd;
    }

    public void DecreaseMood(int moodToReduce)
    {
        _aiFace.UpdateFace();
        if (_aiBehaviour.currentMood.value - moodToReduce < 0)
            _aiBehaviour.currentMood.value = 0;

        else
            _aiBehaviour.currentMood.value -= moodToReduce;
    }
}
    

Events & Editor

 

To make the AI call a specific response card depending on events, I sat down with one of our designers to discuss the kinds of events we could add. I then created the events we discussed (e.g. player death, projectile misses) and wired them to the response cards. I made it easy for our game designer to add new events and connect them to the corresponding cards.

 

To make it easy for our game designer to modify or create a new response card, I made a custom editor to group all the data elements. After finishing the custom editor, I showed it to the designer who would use it, to see if it felt user friendly. Just because I find something obvious doesn’t mean someone else will. I got feedback on the parts that were unclear and changed them accordingly.

 

One of our artists had the great idea to give the AI a face that would change depending on its actions and mood. He ran it by me to see how we could make it work in a natural way, and I added a function to change the sprite depending on the mood or event.


The image above shows how the AI looked in the early stages; in this case the AI is very angry and shows it in both text and image.


The image below shows how it looked at a later stage.



    private void Start()
    {
        _aiPrinter = GetComponentInChildren<PrintAIText>();
        _source = GetComponent<AudioSource>();

        if (!_source)
        {
            Debug.LogError("There is no AudioSource connected to " + this);
            return;
        }

        if (!_aiPrinter)
        {
            Debug.LogError("There is no PrintAIText as a child of " + this);
            return;
        }

        if (!currentMood)
        {
            Debug.LogError("There is no mood asset connected to " + this);
            return;
        }

        // Connect events
        {
            if (!deathEvent || !finishSectionEvent || !tooLateEvent || !standingStillEvent)
            {
                Debug.LogError(this + " is missing an event connection");
                return;
            }

            deathEvent.myEvent += OnDeath;
            finishSectionEvent.myEvent += OnFinishSection;
            tooLateEvent.myEvent += OnTooLate;
            standingStillEvent.myEvent += OnStandingStill;
        }
    }

    public void React(ResponseData data)
    {
        _aiPrinter.WriteResponse(data.GetResponse(currentMood.value));

        AudioClip clip = data.GetRelevantResponseClip(currentMood.value);
        if(!clip) return;

        _source.PlayOneShot(clip);
    }
    
    #region Events
    private void OnDeath() => React(deathResponse);
    private void OnFinishSection() => React(finishSectionResponse);
    private void OnTooLate() => React(tooLateResponse);
    private void OnStandingStill() => React(standingStillResponse);
    
    #endregion
    
}
    



[CustomEditor(typeof(AIBehaviourComponent))]
public class AIBehaviourComponentEditor : Editor
{
    private bool _visibleEvents = false;
    private bool _visibleRersponses = false;
    private bool _visibleFaces = false;

    public override void OnInspectorGUI()
    {

        DrawDefaultInspector();
        SerializedObject so = serializedObject;

        // Current Mood
        {
            SerializedProperty propertyCurrentMood = so.FindProperty("currentMood");
            so.Update();

            GUILayout.Space(15);

            GUILayout.BeginVertical(EditorStyles.helpBox);
            {
                EditorGUILayout.PropertyField(propertyCurrentMood);
            }
            GUILayout.EndVertical();
            so.ApplyModifiedProperties();
        }

        // Events
        {
            SerializedProperty propertyDeathEvent = so.FindProperty("deathEvent");
            SerializedProperty propertyFinishSectionEvent = so.FindProperty("finishSectionEvent");
            SerializedProperty propertyTooLateEvent = so.FindProperty("tooLateEvent");
            SerializedProperty propertyStandingStillEvent = so.FindProperty("standingStillEvent");

            so.Update();

            GUILayout.Space(15);
            GUILayout.Label("Events:", EditorStyles.boldLabel);
            _visibleEvents = EditorGUILayout.Foldout(_visibleEvents, _visibleEvents ? "Hide" : "Show");

            if (_visibleEvents)
            {
                GUILayout.BeginVertical(EditorStyles.helpBox);
                {
                    EditorGUILayout.PropertyField(propertyDeathEvent);
                    EditorGUILayout.PropertyField(propertyFinishSectionEvent);
                    EditorGUILayout.PropertyField(propertyTooLateEvent);
                    EditorGUILayout.PropertyField(propertyStandingStillEvent);
                }
                GUILayout.EndVertical();
            }
            so.ApplyModifiedProperties();
        }

        // AI Responses
        {
            SerializedProperty propertyDeathResponse = so.FindProperty("deathResponse");
            SerializedProperty propertyFinishSectionResponse = so.FindProperty("finishSectionResponse");
            SerializedProperty propertyTooLateResponse = so.FindProperty("tooLateResponse");
            SerializedProperty propertyStandingStillResponse = so.FindProperty("standingStillResponse");


            so.Update();

            GUILayout.Space(10);
            GUILayout.Label("Responses:", EditorStyles.boldLabel);
            _visibleRersponses = EditorGUILayout.Foldout(_visibleRersponses, _visibleRersponses ? "Hide" : "Show");

            if (_visibleRersponses)
            {
                GUILayout.BeginVertical(EditorStyles.helpBox);
                {
                    EditorGUILayout.PropertyField(propertyDeathResponse);
                    EditorGUILayout.PropertyField(propertyFinishSectionResponse);
                    EditorGUILayout.PropertyField(propertyTooLateResponse);
                    EditorGUILayout.PropertyField(propertyStandingStillResponse);
                }
                GUILayout.EndVertical();
            }
            so.ApplyModifiedProperties();
        }

    }
}

    



    public Sprite GetCurrentFace()
    {
        if (currentMood.value <= _faceBarrier.x)
            return _angryFaces[Random.Range(0, _angryFaces.Length)];

        if (currentMood.value >= _faceBarrier.y)
            return _happyFaces[Random.Range(0, _happyFaces.Length)];

        else
            return _mediumFaces[Random.Range(0, _mediumFaces.Length)];
    }
    
    

AI text

 

When I printed the AI’s text, I wanted it printed letter by letter, partly to make it easier to read and partly to make the AI feel more alive. I also added variables to adjust the delay for each letter, dot and comma, and at the end of each text chunk. I liked the look it gave when the AI was sassy and typed three slow dots in a row - it really added to its personality and mood, even though it was only text.



public class PrintAIText : MonoBehaviour
{
    [Header("Delays")]
    [Tooltip("Delay between new letter")]
    [SerializeField] private float _delay = 0.03f;
    [Tooltip("Delay after dot")]
    [SerializeField] private float _dotDelay = 0.6f;
    [Tooltip("Delay after comma")]
    [SerializeField] private float _commaDelay = 0.3f;
    [Tooltip("Delay after all text is done")]
    [SerializeField] private float _doneDelay = 0.4f;
    [Tooltip("Delay before starting text (so the animation can be done)")]
    [SerializeField] private float _startDelay = 0.4f;

    [Header("Events"), Space(10)]
    [SerializeField] private UnityEvent _onPrintMessage;
    [SerializeField] private UnityEvent _onFinishedTypingMessage;

    private TMP_Text _textMesh;

    private bool _isTyping = false;
    private Queue<string> _triggeredResponseQueue = new Queue<string>();

    private void Start()
    {
        _textMesh = GetComponent<TMP_Text>();
        if (!_textMesh)
        {
            Debug.LogError("There is no TMP_Text component on " + this);
            return;
        }
    }


    private IEnumerator ShowText()
    {
        string text = _triggeredResponseQueue.Dequeue();

        _isTyping = true;
        yield return new WaitForSeconds(_startDelay);
        for (int i = 0; i < text.Length; i++)
        {
            _textMesh.text = text.Substring(0, i + 1);

            switch (text[i])
            {
                case '.':
                case '?':
                    yield return new WaitForSeconds(_dotDelay);
                    break;

                case ',':
                case '!':
                    yield return new WaitForSeconds(_commaDelay);
                    break;

                default:
                    yield return new WaitForSeconds(_delay);
                    break;
            }
        }
        yield return new WaitForSeconds(_doneDelay);
        if (_triggeredResponseQueue.Count > 0)
            StartCoroutine(ShowText());
        
        else
        {
            _isTyping = false;
            _onFinishedTypingMessage.Invoke();
        }
    }

    public void WriteResponse(string response)
    {
        _triggeredResponseQueue.Enqueue(response);

        if (_isTyping) return;

        _onPrintMessage.Invoke();
        _textMesh.text = "";
        StartCoroutine(ShowText());
    }
}
    
    

Miscellaneous

Menus

I was also responsible for all menus and their functionalities. I worked closely with our UI artist and implemented their designs into the game.


For the pause menu, I created a simple shader to blur out the background of the screen.

Dynamic Crosshair

The artists wanted a simple dynamic crosshair. They didn’t have time to create and animate it themselves, so I built it following instructions from our UI artist.

Audio Manager

I created three audio mixers: Master, SFX and Ambient. When the player moved a volume slider we also wanted to show its percentage, so I added that functionality.
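The sliders map their 0-1 value to decibels with `Mathf.Log10(value) * 20`, so full volume is 0 dB and half volume is roughly -6 dB. A plain C# sketch of that mapping (the `VolumeMath` helper and the -80 dB floor for a slider at 0, where log10 is undefined, are my own illustration, not project code):

```csharp
using System;

// Linear slider value (0-1) to decibels: 20 * log10(v).
// A -80 dB floor stands in for log10(0) = -infinity, which is also the
// conventional "silence" attenuation for a Unity AudioMixer.
public static class VolumeMath
{
    public static float LinearToDecibels(float sliderValue)
    {
        if (sliderValue <= 0f)
            return -80f; // silence floor instead of -infinity
        return 20f * (float)Math.Log10(sliderValue);
    }
}
```

With this mapping, `LinearToDecibels(1f)` is 0 dB and `LinearToDecibels(0.5f)` is about -6.02 dB, which matches how the sliders in the `SetVolume` script below drive the mixer.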

 

The SetVolume script connects all audio mixers to its functionality.


I also created a sound component we could add to any object to get a sound.



public class SetVolume : MonoBehaviour
{
    [SerializeField] private AudioMixer _audioMixer;
    [SerializeField] private TextMeshProUGUI _percentText;

    private Slider _slider;

    void Start()
    {
        _slider = GetComponent<Slider>();
        _slider.value = 1f; // slider runs 0-1; start at full volume
        _percentText.text = Mathf.RoundToInt(_slider.value * 100) + "%";
    }

    public void SetVolumeMaster(float sliderValue) => _audioMixer.SetFloat("MasterVolume", Mathf.Log10(sliderValue) * 20); 

    public void SetVolumeSFX(float sliderValue) => _audioMixer.SetFloat("SFXVolume", Mathf.Log10(sliderValue) * 20); 

    public void SetVolumeAmbient(float sliderValue) => _audioMixer.SetFloat("AmbientVolume", Mathf.Log10(sliderValue) * 20);

    public void UpdatePercentage(float value) => _percentText.text = Mathf.RoundToInt(value * 100) + "%";
}



[DisallowMultipleComponent, RequireComponent(typeof(AudioSource)), AddComponentMenu("Sound/Sound Manager")]
public class SoundComponent : MonoBehaviour
{
    [SerializeField] AudioSource _source;
    public Vector2 volumeMinMax = new Vector2(0.35f, 0.45f);
    public Vector2 pitchMinMax = new Vector2(0.52f, 0.6f);
    public AudioMixerGroup output;

    private void Start()
    {
        if (!_source)
        {
            Debug.LogError("There is no Audio Source in " + gameObject.name);
            return;
        }
    }

    public void PlaySound(AudioClip clip)
    {
        if (!clip)
        {
            Debug.LogError("There is no Audio Clip given to " + gameObject.name);
            return;
        }
        else
        {
            float volume = Random.Range(volumeMinMax.x, volumeMinMax.y);
            float pitch = Random.Range(pitchMinMax.x, pitchMinMax.y);
            _source.volume = volume;
            _source.pitch = pitch;
            _source.clip = clip;
            _source.outputAudioMixerGroup = output;

            _source.Play();
        }
    }

    public void PlaySoundOneShot(AudioClip clip)
    {
        if (!clip) return;

        GameObject go = new GameObject();
        go.transform.position = transform.position;
        AudioSource audioSource = go.AddComponent<AudioSource>();

        float volume = Random.Range(volumeMinMax.x, volumeMinMax.y);
        float pitch = Random.Range(pitchMinMax.x, pitchMinMax.y);

        audioSource.volume = volume;
        audioSource.pitch = pitch;
        audioSource.clip = clip;
        audioSource.outputAudioMixerGroup = output;
        audioSource.Play();
        Destroy(go, clip.length);
    }
}

    
    

The Process

Since we were a group of 12, we created three roles: Product Owner, Scrum Master and Art Lead. One of the most important things, in my opinion, is communication, and it is always the hardest to keep up. I took the role of Scrum Master, and during each daily stand-up I went through a physical scrum board I had created and made sure everyone (including me) had a task. If a task felt too big, we talked it through and decided how important it was for the game; if it wasn’t, we could put it on hold.

Challenges

It was a big challenge to work in such a big team, especially since it was a school project where everyone had different ideas and opinions about how the game should be. Despite that, we managed to come up with a game base quite fast.

 

One challenge we had to overcome was the amount of absence in the group. It became hard to make the game we had planned when a big part of the group wasn’t always able to make it. We solved it by helping each other with tasks and by holding a meeting where we went through everything that was left to do. We kept everything that was essential to the game loop, so we would still have a playable game; the rest was put on hold.

 

Something we really wanted in our game was the player companion. As mentioned before, this AI would say or write things in different ways depending on its mood and how it felt about the player, and its mood would change depending on the player’s actions. This was a feature our group and a lot of playtesters liked.

 

Unfortunately, one of the things we had to scrap was this AI. This was partly because we didn’t have time to create a mesh for it, and partly because we didn’t have time to write all the different lines it could say. It was a hard decision for me to make, since I was the one who built the AI functionality. But it’s important to make decisions based on what is best for the team, even when it’s hard.

Iteration

It took a lot of iteration before we found a game we all felt was fun. The idea from the beginning was to be able to give and take time from different objects. In theory it sounded fun, but once we built the base we realized all our game was doing was turning objects on and off - which we didn’t find amusing.

 

I also iterated on the way I held scrum meetings, something I had never done before with a group as large as 12. Most of the group wanted a digital scrum board, but after trying it for one meeting I realized no one ever checked it. It also made it hard to get an overview of what everyone was doing during stand-ups. So I created two big physical boards with post-its: one with the stories for the overall project and one with all the tasks for a two-week sprint.

 

This change made it easier for all of us to see what was going on and who to talk to about a specific task. It also made us realize we had to reprioritize tasks to be able to deliver a finished product.

 

As I’ve already explained, I also iterated on my tools every time I worked on them, to make sure they were understood not only by me but also by the level designer who would use them.