How Windows Mixed Reality’s poor reliability forced a kiosk game switch to HTC VIVE

In a previous article, I described the development of VoedingscentrumVR, an educational kiosk VR game for Windows Mixed Reality. Although at the time of writing it seemed like the end of that game's development, some surprises crossed our path. Eventually, we ditched Windows Mixed Reality altogether and switched to an HTC VIVE, SteamVR-powered solution, and we are happy we did. Here's the story behind that shift.

Disclaimer: Most of the events described in this article happened before the COVID-19 pandemic hit the Netherlands, and the different testing stages took place either before the restrictions started or after most of them had been lifted.

A small summary

The application had two released versions. Version 1.0 shipped as a full Windows Mixed Reality (hereafter referred to as WMR) experience, using both the headset and the controllers, and it was deployed at Boerhaave Museum. We quickly concluded that there were some design flaws and the controllers had to be dropped. Version 1.1 brought Leap Motion support, enabling interaction with objects without controllers, and it was installed at the Open Air Museum. Read the original article for more details about the development of the game.

The problems

Some technical issues with the WMR headsets surfaced when version 1.1 was installed at the Open Air Museum. Since version 1.0 failed early during the period it was available to the public, the application had never run for long stretches of time. As a consequence, some reliability issues had never had a chance to surface.

After the game had been open to the public for two weeks, we received some concerning feedback from the museum staff describing two problems. First, the boot process (which starts the game automatically) often failed because the WMR headset was not recognized and the WMR Portal displayed an error message. Second, the headset image would often flicker, the sound coming out of the headphones would glitch, and eventually the headset screen would turn black.

After some testing with other WMR headsets, we concluded that the Samsung headset had broken. We replaced it with an HP headset, which seemed to fix the flickering problem. At this point, we thought that the headset replacement had also fixed the boot problem, but that was not the case. A couple of days later, the client told us the boot problem persisted. Some investigation showed that the cause was the same as before the hardware replacement: the headset wasn't being recognized. After a lot of research and reaching out to Microsoft support, we concluded that the USB extension cable we were using was not suitable for our setup because it couldn't deliver enough power to keep the headset on. We then bought a powered USB 3 hub alongside an active USB 3 extension to solve the power problem. Unfortunately, it didn't. We then acquired the only USB 3 hub Microsoft cites on their support page as "known to work well with Windows Mixed Reality". It was slightly more reliable (the error rate was lower), but it didn't solve the problem.

We went back to researching possible solutions, and that's when we found a weird, yet somewhat effective one: kill the explorer.exe process and restart it. We created a startup script that would kill the process, start it again, and then load the game. This seemed to solve the connectivity and discovery problem, and most system boots were successful.
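The actual startup script isn't reproduced here, but the idea boils down to something like the following small C# console sketch. The game path is a placeholder, and the real script may just as well have been a plain batch file.

using System.Diagnostics;
using System.Threading;

internal static class KioskLauncher
{
    private static void Main()
    {
        // Kill every running explorer.exe instance.
        foreach (var process in Process.GetProcessesByName("explorer"))
        {
            process.Kill();
            process.WaitForExit();
        }

        // Give Windows a moment, then restart the shell.
        Thread.Sleep(2000);
        Process.Start("explorer.exe");

        // Finally, launch the game (placeholder path).
        Process.Start(@"C:\Kiosk\VoedingscentrumVR.exe");
    }
}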

Unfortunately, yet again, we received an email from the client a week later saying the connectivity problem had returned. At this point, both the client's faith and ours in the current solution had been worn down by the repeated failed attempts to keep the game running uninterrupted. We decided to try a platform switch, dropping the not-so-mature WMR in favor of a platform known for its reliability and robustness: the HTC VIVE running over SteamVR.

The platform switch

We expected the switch to be a long and cumbersome process but – as described in the sections below – it went faster and more smoothly than we anticipated.

Software

A platform switch often hides problems that nobody anticipates when planning it. Although we were prepared for many surprises when developing the new version for the VIVE and SteamVR, the switch was easier than we feared.

A quick recap on the technical aspects of the development process: the application was developed using the Unity engine and Virtual Reality Toolkit (VRTK).

VRTK proved to be a wise choice made early in development because its Unity plugin supported both the WMR and SteamVR platforms. Thus, the platform migration wasn't expected to be as troublesome as it could have been. But another early development decision made the transition almost seamless. WMR lacked some advanced settings – like a kiosk mode and a custom idle timeout – that were crucial for our application. While looking for ways to meet these demands, we discovered that we could run WMR applications on top of SteamVR with the help of a WMR plugin for SteamVR available on the Steam store. By doing so, we gained access to more advanced settings in the SteamVR dashboard while still using WMR hardware. We tested this solution, it met our needs at the time, and we kept it in the game.

As a consequence, the application already ran on top of SteamVR, and no software change (with the exception of a play area recalibration) was necessary to port it to the HTC VIVE. No other surprises popped up, and the platform switch took only a couple of hours, exceeding our most optimistic expectations.

It is worth noting that since our game did not use the WMR controllers for interaction, we did not test them during the migration. Thus, we cannot speak to the potential challenges controllers might bring to such a switch.

Hardware

We had some previous development experience with the HTC VIVE and owned a set for development purposes. We used that set during development but acquired another one for the installation. HTC doesn't sell new VIVE sets anymore, but they did sell certified pre-owned sets, which is what we ended up buying. The set worked flawlessly and met all our expectations.

The game used a Leap Motion as an input device for interacting with VR objects. The HTC VIVE has been used alongside the Leap Motion in numerous projects, so we did not expect any challenges on that front. This proved to be true: attaching the Leap Motion to the VIVE headset was easy, and cable management was aided by the headset's straps and their cable holders.

The VIVE's cables are longer than the WMR's, but cable extensions were still necessary to meet the game's demands. We replaced the short HDMI and USB cables that connect the VIVE's setup box to the PC with the active HDMI and USB 3 extensions previously used in the WMR setup. Additionally, a long power extension was used to power the setup box.

The outcome

The main goal of the platform switch was to improve the game's reliability over long periods of time. Accordingly, we ran preliminary tests in which we left the game running for hours (sometimes overnight) in the office and played it at random moments during the day. All of our tests went as expected: the headset was always tracked, its screen never flickered or turned off unexpectedly, and the hand tracking performed satisfactorily. In short: the game kept working over long periods of time.

We then moved to the next stage of testing: a field test open to the public. We installed the game at Corpus Museum for a two-week trial run. During this period, the game was turned on right before the museum's opening time and shut down right after closing time. At the end of the two weeks, we finally received positive feedback from the museum staff. The game ran well, without any of the downtime observed at the previous locations. In addition, the museum staff didn't have to run any VR play area recalibration – something that had been common with WMR.

We are now confident that the time spent migrating the solution from WMR to the HTC VIVE paid off. The system's robustness and reliability improved significantly, finally meeting our standards.

Some small interaction tweaks

During the test period at the museum, it became apparent that the Leap Motion-assisted hand grab interaction was not as good as we expected. Most children initially struggled to grab items in the VR world because the VR hands would not close when their actual hands did. Some got used to the mechanics and to how the sensor reacted to their movements, but others did not, and it clearly harmed their experience.

We went back to the drawing board and rethought the hand interaction. So far we had tried to mimic the real world as much as we could, but that had proven ineffective. Usually, we want VR objects to replicate the behavior of real-world objects, but sometimes that is not only unnecessary but also undesirable. Part of the magic of VR is to create and experience worlds and situations that are not possible in the real, physical universe. After reminding ourselves of this, we freed ourselves from the chains of physical realism and started brainstorming interaction solutions that might be inaccurate in a non-VR setting, but more pleasant for our players.

Two game interactions were suffering from the issue described above:

  • Grabbing food from a serving tray and placing it on the feeding plate so the creature could eat it.
  • Grabbing balls from a box and throwing them so the creature could fetch them. Alternatively, the player could place the ball in a cannon that would shoot it. This interaction suffered the most because the hand tracking issue would impact both the grabbing and the throwing.

Once we’ve realized that our task was to aid the players to interact with the VR world – and not to create the most realistic experience ever – it was easy to come up with solutions. We decided to remodel the two interactions differently.

For the first one (the food selection interaction), we decided to let players pick items by simply touching them, regardless of the hand pose. Once the hand touched an item, the item would snap to the player's hand. Then the player could choose what to do with it: put it back on the food tray or place it on the feeding plate. As a nice side effect, this change eliminated another existing problem: food items thrown out of the player's reach sometimes could not be retrieved. Since throwing food items was no longer possible, this problem ceased to exist.
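A minimal sketch of this touch-to-grab idea in Unity could look like the snippet below. The class, tag and field names are hypothetical, not the project's actual code, and physics details (rigidbodies, layers) are omitted.

using UnityEngine;

// Attached to the tracked hand; food items carry a trigger collider and a
// (hypothetical) "FoodItem" tag.
public class TouchGrabber : MonoBehaviour
{
    [SerializeField] private Transform _holdPoint; // where grabbed items snap to

    private Transform _heldItem;

    private void OnTriggerEnter(Collider other)
    {
        // Ignore non-food objects, or a touch while something is already held.
        if (_heldItem != null || !other.CompareTag("FoodItem"))
            return;

        // Snap the item to the hand, regardless of the hand pose.
        _heldItem = other.transform;
        _heldItem.SetParent(_holdPoint, worldPositionStays: false);
        _heldItem.localPosition = Vector3.zero;
    }

    // Called when the player places the item back on the tray or on the plate.
    public void Release()
    {
        if (_heldItem == null)
            return;

        _heldItem.SetParent(null);
        _heldItem = null;
    }
}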

We fixed the second interaction (grabbing and shooting balls) by eliminating the need to grab balls altogether. The shooting cannon was modified to have a button that shoots balls when pressed. There was no need to feed the cannon with balls: it had an endless supply. The players could still shoot balls and watch the creature fetch them, but they no longer interacted with the balls directly, eliminating the interaction issue.
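Again as a hypothetical sketch rather than the actual project code, a button-driven cannon with an endless supply can be as simple as this:

using UnityEngine;

public class BallCannon : MonoBehaviour
{
    [SerializeField] private Rigidbody _ballPrefab;
    [SerializeField] private Transform _muzzle;
    [SerializeField] private float _launchSpeed = 5f;

    // Invoked by the in-world button when it is pressed.
    public void Shoot()
    {
        // The supply is endless: a fresh ball is spawned on every press.
        var ball = Instantiate(_ballPrefab, _muzzle.position, _muzzle.rotation);
        ball.velocity = _muzzle.forward * _launchSpeed;
    }
}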

Internal tests showed that the new interactions allowed for a smoother, more effortless gameplay experience without hurting the educational and fun aspects of the game. Preliminary external tests confirmed what we experienced in the office: the changes made interacting with VR items a joy instead of a hassle. As a consequence, children left the play session happy and amazed instead of frustrated. Finally, once the pandemic numbers went down and the restrictions were lifted, we were able to test the game in the field. The entire setup was installed at Rosmalen's Library and children were welcomed to play. After a few weeks of running smoothly, with no hardware problems whatsoever, we were glad to find out that children (and some parents) loved the game, and groups of children were visiting the library specifically to try out VoedingscentrumVR.

Conclusion

In this article we saw how a platform switch from Windows Mixed Reality to the HTC VIVE via SteamVR fixed reliability problems in a VR game. We also saw how reimagining hand interactions by abandoning physical-world constraints improved the gameplay experience. At the end of the journey described in these two articles, we can safely say we have built the game we imagined at the beginning of the development process.

When software internationalization isn’t just about UI: a tale of how a parsing error crashed our game

Whenever we talk about adapting a game to different countries, the first thing we often think of is localization, but we sometimes neglect its sibling: internationalization. Wait, what's the difference again? Internationalization is the process of designing and developing your software so it can easily be adapted and used in different countries, cultures and languages. Localization is the process of adapting existing software to a new country, culture or language, usually by translating text and/or adding components that are relevant to the new environment. Even though localization uses the tools provided by internationalization to deliver its work, internationalization's role is not simply to assist localization, as we will find out soon. In this article I will discuss how a software internationalization bug crashed our game, how hard it was to unearth the source of the error and how easy it was to fix it.

How every bug starts

A few days after we released a new version of our mobile application, we started receiving bug reports about one of its mini games. Our application consists of two devices that communicate with each other: a dashboard and a client. Game data is exchanged between them at the start of every mini game to ensure that both ends generate the same world. The bug report described that the dashboard application froze right when one of the mini games started, and eventually quit. The other mini games were unaffected, and so was the client application. We had experienced that before: it sounded like a memory problem that forced the OS to kill the app. We tried to replicate the bug, but failed every time. We used the same devices (iPad Air and Oculus Go) and the same OS version as the customers, and we still could not reproduce the crash. Yet, our customers reported that the crash happened consistently whenever they tried to play that specific mini game. We were about to give up and drive to one of our closest customers' office to experience the crash firsthand when we got our hands on a couple of devices that could reproduce the bug consistently.

Time to dig deeper

Once we could reproduce the bug consistently, it was time to pinpoint the source of the crash and, finally, of the bug. We generated a development build and installed it on the iPad using Xcode's debug mode, which let us observe the application's memory consumption. Just as we had first imagined, starting the game caused a RAM surge to the point where the OS killed the application.

Now the source of the crash was evident: memory consumption. We were left to discover the cause of such high RAM usage. At this point, it seemed like just another case of our application growing a bit too much with the new version (new features, more assets), just enough to hit the device's memory threshold. We started by investigating the usual suspect of memory hunger: art assets. After some thought, we concluded that assets were probably not the problem because this mini game was one of the least asset-heavy games in the system. Just to test the theory, we ran the app on an iPad with twice as much RAM as the previous one. To our surprise, the app's memory consumption also climbed until the OS killed the app. But this time, the RAM usage was more than twice as high as on the previous iPad. Something was allocating RAM non-stop, and there was no way it was the assets.

The next suspect in line was code. This mini game had been part of the application for at least a year, so there must have been a change that broke its logic and was allocating memory like there's no tomorrow. We checked our repository's history and... there were no changes to that mini game. At all. Neither code nor asset changes. Maybe it was not the code after all? Who's the next suspect in line?

Data. Whenever a game starts, the client application generates the world and sends the generation data to the dashboard application. If that data is corrupted, it could make the game perform an absurd task that endlessly allocates memory. This did not seem to be the case because both worlds (the dashboard's and the client's) were generated using the same data, and the application never froze on the client, only on the dashboard. So maybe it was not the data either?

Back to ground zero

Disclaimer: for the sake of simplicity, some implementation details are hidden and/or modified.

So far, we had concluded that the bug was caused neither by assets, nor by code, nor by corrupted data. What were we left with? An engine bug? All the other mini games ran fine, so that was not likely. At this point we took a step back, stopped analyzing the technical aspects and looked at the game itself. Which aspect of the gameplay could get out of control to the point where it would consume memory non-stop? It was a simple "connect the dots" game, where the dots formed a sine wave connecting two points in space. Depending on the player's movement capabilities, these two points could be closer together or farther apart. The spacing between the dots was constant, so the sine wave had to be constructed dynamically.

Wait a minute. What if, for some reason, the wave generation never stopped, kept spawning new dots indefinitely, and that caused the memory pressure? We could never see it happening because the wave was generated within a single frame, which was never rendered. We added some debug messages to the code and watched the console as the game loaded. Sure enough, hundreds of thousands of dots were instantiated, whereas in a normal game session the number of dots would never go over 100. The game object instantiation only stopped when the OS killed the application. So it was the code.

We dove into the code and found that the number of dots was calculated based on the distance between the start and end points of the sine wave. We analyzed the algorithm used to distribute the dots along the wave and it seemed correct. After a few more attempts, we found that the distance between the wave's start and end points was in the order of millions of units. That could only be true if the start and end points were really, really far apart. But in reality, these points should never be more than 10 units from each other. We dug a bit deeper and found that the start and end points were indeed millions of units apart on the dashboard, far away from the play area. On the client, the wave was correctly generated and its start and end points were certainly not millions of units apart.
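The actual wave-building code isn't shown here, but the gist of it was something along the lines of the simplified sketch below: the dot count grows linearly with the distance between the endpoints, so a corrupted distance of millions of units means millions of instantiations.

using UnityEngine;

public class SineWaveBuilder : MonoBehaviour
{
    [SerializeField] private GameObject _dotPrefab;
    [SerializeField] private float _spacing = 0.1f;
    [SerializeField] private float _amplitude = 0.5f;

    public void BuildWave(Vector3 start, Vector3 end)
    {
        float distance = Vector3.Distance(start, end);
        int dotCount = Mathf.CeilToInt(distance / _spacing);

        for (int i = 0; i < dotCount; i++)
        {
            float t = (float)i / dotCount;
            Vector3 position = Vector3.Lerp(start, end, t);
            position.y += Mathf.Sin(t * Mathf.PI * 2f) * _amplitude;

            // With sane endpoints this loop runs fewer than 100 times; with
            // endpoints millions of units apart it never finishes before the
            // OS kills the application.
            Instantiate(_dotPrefab, position, Quaternion.identity);
        }
    }
}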

We checked the game data exchanged between the dashboard and the client, and it seemed correct. The start and end point fields contained something like 3.141592 and -4.162192. We then tested the same game with an Android tablet as the dashboard instead of the iPad. The start and end points were where they should be, just a few units apart, within the player's field of view. And, as expected, no crashes happened on the Android tablet. Maybe it was a platform-dependent bug? We tested with another iPad. The game ran fine, the sine wave was generated as expected and there were no crashes. What was going on here?

We then noticed an otherwise trivial detail: the iPad that reproduced the crashes had its system language set to Dutch, while the iPad that ran the game without problems had it set to English. A suspicion arose: could the bug have been caused by the system language settings? We set the "crashing" device's system language to English and the crashes stopped. We set it back to Dutch and the crashes were back. We did the opposite on the other device and the same behavior was reproduced consistently. We called the customer who had reported the bug and confirmed that their iPad's system was in Dutch. Alright, so we had found out why some devices could reproduce the bug and some could not. Now what?

Ladies and gentlemen: the bug

As you may have guessed by now, the problem was caused by a lack of internationalization. Let's see what happened, exactly. The game data discussed above (the sine wave's start and end points) was sent from the client to the dashboard, where it was used to construct the sine wave. In this game, only the X position coordinates were relevant because the other two coordinates were known. The start and end positions' X coordinates were stored as a colon-separated string built by a naive implementation that used ToString(). An example of such a string is "3.14159265:-4.162".
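The original serialization code isn't reproduced here, but it was conceptually equivalent to this naive, culture-unaware sketch (the names are illustrative, not the real ones):

public static class WaveData
{
    public static string Serialize(float startX, float endX)
    {
        // Implicitly formats using the current thread's culture.
        return startX.ToString() + ":" + endX.ToString();
    }

    public static (float startX, float endX) Deserialize(string data)
    {
        string[] parts = data.Split(':');

        // Also implicitly culture-dependent; this is where the mismatch hit.
        float.TryParse(parts[0], out float startX);
        float.TryParse(parts[1], out float endX);
        return (startX, endX);
    }
}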

The problem with this solution is that it assumes floats will always be turned into strings that separate the integer and fractional parts using dots – which is not always the case. Different countries and cultures represent decimal numbers using different separators. The US English (en-US) culture uses a dot as the separator between integer and fractional parts, which would represent the start and end points as "3.14159265:-4.162". It also uses commas as group separators to ease the reading of long numbers: 4000000 can be represented as 4,000,000. The Dutch culture (nl-NL) does the exact opposite: commas separate the integer and fractional parts, and dots are used as group separators. Using the Dutch culture, the same points would be represented as "3,14159265:-4,162". This is usually harmless as long as the calls to ToString() and float.TryParse() use the same culture. The problem arises when they do not, which is exactly what happened in our application.

If the client generates the string representation using the en-US culture, the output will be "3.14159265:-4.162". If you split this string into two substrings at the colon and parse each substring using the Dutch culture, you get 314159265 and -4162, because the Dutch culture treats dots as group separators, not as decimal separators. As a consequence of this mismatch, the start position of the sine wave was 314159265 and the end position was -4162. The algorithm that distributed the dots along the wave instantiated thousands, if not millions, of dots between those points, which led to the application hanging, the high memory consumption and the eventual crash. In the end, the bug was caused by a combination of code and data.
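To make the mismatch concrete, the standalone snippet below forces the two cultures explicitly. In the real application the cultures came from the devices' system settings rather than from code, but the effect is the same.

using System;
using System.Globalization;

public static class CultureMismatchDemo
{
    public static void Main()
    {
        var english = new CultureInfo("en-US");
        var dutch = new CultureInfo("nl-NL");

        // Formatted on the "client" with an English culture.
        string serialized = 3.14159265f.ToString(english);

        // Parsed on the "dashboard" with a Dutch culture: the dot is treated
        // as a group separator, so the result lands in the millions instead
        // of roughly 3.14.
        float parsed = float.Parse(serialized, dutch);

        Console.WriteLine(serialized);
        Console.WriteLine(parsed);
    }
}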

How do I fix that?

Fortunately, the bug was easy to fix. The C# standard library recognizes that cultural differences play an important role and provides a type called CultureInfo which stores – among other things – how decimal numbers should be represented. The ToString() method has an overload that takes a parameter for this purpose: ToString(IFormatProvider), and CultureInfo implements IFormatProvider. Similar overloads are available for Parse and TryParse. The bug was fixed by replacing the previous calls to ToString and TryParse with their respective culture-aware overloads. We used CultureInfo.InvariantCulture as the format provider because it represents an invariant culture that is based on the English language but not associated with any country or region.
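Applied to the naive serialization sketch from earlier, the fix boils down to passing CultureInfo.InvariantCulture on both ends:

using System.Globalization;

public static class WaveData
{
    public static string Serialize(float startX, float endX)
    {
        // Format with the invariant culture so the output never depends on
        // the device's language and region settings.
        return startX.ToString(CultureInfo.InvariantCulture) + ":"
             + endX.ToString(CultureInfo.InvariantCulture);
    }

    public static (float startX, float endX) Deserialize(string data)
    {
        string[] parts = data.Split(':');

        // Parse with the same invariant culture used for formatting.
        float.TryParse(parts[0], NumberStyles.Float, CultureInfo.InvariantCulture, out float startX);
        float.TryParse(parts[1], NumberStyles.Float, CultureInfo.InvariantCulture, out float endX);
        return (startX, endX);
    }
}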

Method calls without an IFormatProvider use CultureInfo.CurrentCulture (the current thread's culture) to determine culture information. If the application never sets a default thread culture, the system's locale information is used. That is why our application behaved differently on iPads with different system language and region settings. If we had specified the CultureInfo in our calls, the system's locale information would not have been used. If your application code consistently uses culture-aware method overloads instead of the vanilla ones, you can avoid internationalization errors like the one described in this article.

In our case, using method overloads that take a format provider is standard practice, but that specific case flew under our radar. It could have been avoided if the programmer who wrote that code had used an IDE like Rider, which warns about usages of ToString, Parse and TryParse overloads that do not pass a format provider.

Conclusion

In this article we saw an example of how poor software internationalization went beyond the UI and led to an application crash due to memory consumption. We also learned how to avoid such errors using the tools available in C#’s standard library.

As usual, please leave a comment if you have something to add to the discussion, to point out an error or simply to say hello. Thank you for the (long) read and until next time!

Null Check and Equality in Unity

In some programming languages – like C# – it is common practice to use comparison operators and functions to check for null references. However, when programming in Unity, there are some particularities that the usual C# programmer might not take into consideration. This article is a guide on how these caveats work and how to properly use C#'s equality tools in Unity.

A quick recap of C#’s equality functions and operators

There are three main ways to check for equality in C#: the ReferenceEquals function, the == operator and the Equals function. If you are an experienced C# developer who knows the ins and outs of the language's equality tools, feel free to skip this section and jump straight to the Unity section.

The ReferenceEquals function

This function is not as famous as the other alternatives, but it is the easiest to understand. It's a static function of the Object class, and it takes the two object arguments to be compared for equality.

public static bool ReferenceEquals (object objA, object objB);

It returns a bool that indicates whether the two arguments share the same reference – that is, the same memory address. It cannot be overridden, which is understandable given its purpose. It does not check the objects' contents and/or data; it only takes their references into account.
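A short illustration of its behaviour:

var a = new System.Text.StringBuilder("abc");
var b = new System.Text.StringBuilder("abc");
var c = a;

Console.WriteLine(ReferenceEquals(a, b));       // False: same contents, different references
Console.WriteLine(ReferenceEquals(a, c));       // True: both variables refer to the same object
Console.WriteLine(ReferenceEquals(null, null)); // True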

The == operator

The == operator can be used with both value and reference types. For built-in value types, it returns whether the values are the same. For user-defined value types, it can only be used if the operator has been defined for that type. Here's an example of a == operator defined for a Coordinates struct. The != operator must also be defined whenever == is, otherwise a "The operator == requires a matching operator '!=' to also be defined" compilation error is raised.

public struct Coordinates
{
    private int _x;
    private int _y;

    public static bool operator ==(Coordinates a, Coordinates b)
    {
        return a._x == b._x && a._y == b._y;
    }

    public static bool operator !=(Coordinates a, Coordinates b)
    {
        return !(a == b);
    }
}

The operator's behaviour differs a bit for user-defined reference types (a.k.a. objects). A custom == operator can be defined for any reference type, but unlike for value types, you don't have to define the operator before using it. The reason is that the System.Object class (which all other reference types inherit from) already implements the == operator. The implementation is simple and well known: two Objects are considered equal if their references (i.e. their memory addresses) are the same. Its behaviour is the same as the ReferenceEquals function explained above.

Although this might make sense, sometimes we want to implement custom behaviour for this operator, usually when we want two different objects (with different references) to be considered equal if some of their data is the same. Consider the following example with the Person class, where two instances are equal (according to the == operator) if they share the same _id.

public class Person
{
    private string _name;
    private int _id;

    public static bool operator ==(Person a, Person b)
    {
        // Same instance (or both null): equal.
        if (ReferenceEquals(a, b))
            return true;
        // Only one of them is null: not equal.
        if (ReferenceEquals(a, null) || ReferenceEquals(b, null))
            return false;
        return a._id == b._id;
    }

    public static bool operator !=(Person a, Person b)
    {
        return !(a == b);
    }
}

Note that both arguments are of type Person, so the operator can only be used on objects of that type – and on its subtypes.

The Equals function

This function lives in the Object class but, unlike ReferenceEquals, it is virtual and can be overridden by any user-defined type. Its default behaviour for reference types (implemented in the Object class) mimics ReferenceEquals: it checks whether the objects share the same reference. Its default behaviour for value types (defined in the ValueType class) checks whether all fields of both objects are the same. Check its definition below.

public virtual bool Equals (object obj);

Unlike the == operator, it is not static and it takes a single parameter of type object which represents the object to check equality against. Also notice that, unlike with the == operator, the argument is of type object and not of the same type we are implementing Equals for. Check the example below, where the function is implemented in the Coordinates class.

public class Coordinates
{

    private int _x;
    private int _y;
    
    public override bool Equals(object obj)
    {
        if (ReferenceEquals(obj, null))
            return false;
        if (obj is Coordinates c)
            return c._x == _x && c._y == _y;
        return false;
    }
}

In addition to checking the parameter for a null reference, it is necessary to cast it to Coordinates before actually checking for equality. It is also worth noting that the == operator can check both operands for null, while Equals only checks its single parameter: if the object we call Equals on is null, a NullReferenceException is thrown.
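A short example of that caveat, using the Coordinates class above:

Coordinates a = null;
Coordinates b = new Coordinates();

bool result1 = b.Equals(a); // fine: returns false
bool result2 = a.Equals(b); // throws NullReferenceException, because a is null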

If you want to dive deeper into C#’s equality tools, you might want to check this article out.

Equality in Unity

Out of the three main equality tools C# provides (ReferenceEquals, Equals and ==), only the == operator requires special attention – the other two behave exactly like they do in vanilla C#.

Unity provides a custom implementation of the == operator (and naturally of != as well) for types that inherit from UnityEngine.Object (e.g. MonoBehaviour and ScriptableObject). For other types – like a custom class that doesn't inherit from any other class – C#'s standard implementation is used. When comparing a UnityEngine.Object against null, the engine not only checks whether the operand itself is null, but also whether its underlying entity has been destroyed. For example, observe the following sequence of actions:

Assuming we have a MonoBehaviour called ExampleBehaviour, create a new GameObject and attach an instance to it:

var obj = new GameObject("MyGameObject");
var example = obj.AddComponent<ExampleBehaviour>();

Later on in the game, we decide to destroy the ExampleBehaviour instance:

Destroy(example);

And later on, we check the ExampleBehaviour instance for equality against null:

Debug.Log(example == null);

The debug statement above will print "true". At first, that might seem obvious because we just destroyed that instance, but as I explained in my previous article, the instance's reference is not null and it has not been garbage-collected yet. In fact, it won't be garbage-collected as long as the scope in which it was defined still exists. What Unity's custom == operator does in this scenario is check whether the underlying entity has been destroyed, which in this case it has. This behaviour helps programmers identify objects that have been destroyed but still hold a valid reference.

Other similar operators

A few C# operators perform implicit null checks. They are worth examining here because they behave inconsistently with Unity's == operator.

The null-conditional operators ?. and ?[]

These operators were designed as shortcuts for safe member and element access, respectively. The portion of code following ?. or ?[] will only be executed if the object they are invoked on is not null. In standard C#, they are equivalent to wrapping a similar call in a null check. For example, the following code, assuming that _dog is not an instance of UnityEngine.Object:

if (_dog != null)
    _dog.Bark();

Can be replaced with:

_dog?.Bark();

Although these two code snippets behave exactly the same in vanilla C#, they behave differently in Unity if _dog is an instance of UnityEngine.Object. Unlike ==, the engine does not provide custom implementations of these operators. As a consequence, the first code snippet checks for underlying object destruction whereas the second does not. If you use the Rider IDE, the warning "Possible unintended bypass of lifetime check of underlying Unity engine object" is displayed whenever one of these operators is used on an object of a class that inherits from UnityEngine.Object.
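Using the ExampleBehaviour instance from the earlier example (and a hypothetical DoSomething method), the difference looks like this once the instance has been destroyed:

// Explicit null check: Unity's == operator sees that the underlying entity
// was destroyed, so the call is skipped.
if (example != null)
    example.DoSomething();

// Null-conditional operator: only the plain C# reference is checked, and it
// is still non-null, so DoSomething() runs on a destroyed object (typically
// throwing a MissingReferenceException if it touches engine state).
example?.DoSomething();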

The null-coalescing operators ?? and ??=

The ?? operator checks whether its left operand is null. If it is not, it returns the left operand. If it is, it returns the right operand. In the example below, assuming that Animal is a class that does not inherit from UnityEngine.Object, a3 will point to a2 because the left operand of ?? (a1) is null.

Animal a1 = null;
Animal a2 = new Animal();
Animal a3 = a1 ?? a2;

It is equivalent to

if (a1 == null)
    a3 = a2;
else
    a3 = a1;

The ??= operator is an assignment operator that assigns its right operand to its left operand only if the left operand is null. In the example below, a3 will be assigned to a1 only if a1 is null.

Animal a1 = ...
a1 ??= a3;

It is equivalent to

Animal a1 = ...
if (a1 == null)
    a1 = a3;

Just like with the null-conditional operators, there are no custom implementations of these operators for UnityEngine.Object. As a consequence, if the Animal class from the snippets above inherited from MonoBehaviour, for example, the implicit null checks would not behave like the explicit null checks using the == operator, and their respective "equivalent" code would not be equivalent anymore. Again, a warning is displayed in the Rider IDE when these operators are used on objects that inherit from UnityEngine.Object.
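A common place where this bites is lazy initialization of cached components. The class below is a hypothetical example, not taken from a real project:

using UnityEngine;

public class PlayerHud : MonoBehaviour
{
    private Canvas _canvas;

    private Canvas CanvasUnsafe =>
        // If _canvas points to a destroyed Canvas, the plain C# reference is
        // still non-null, so ??= never re-fetches it and a stale, destroyed
        // object is returned.
        _canvas ??= GetComponent<Canvas>();

    private Canvas CanvasSafe
    {
        get
        {
            // Unity's lifetime-aware == operator treats the destroyed Canvas
            // as null and triggers a re-fetch.
            if (_canvas == null)
                _canvas = GetComponent<Canvas>();
            return _canvas;
        }
    }
}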

Wrapping up

Equality operators and functions are basic constructs in every C# programmer's toolset. When developing in standard C#, a programmer should keep in mind how some of these constructs behave differently for value and reference types. When programming in C# for Unity, a developer must also be aware of how the engine tailored the language's == and != operators to its ecosystem, and that some shortcut operators that perform implicit null checks behave inconsistently with the engine's == operator. Mastering these equality tools helps avoid undesired behaviour. Finally, some IDEs, like Rider, warn the programmer about possible pitfalls regarding these operators.

That’s it for today. As always, feel free to leave a comment with questions, corrections, criticism or anything else that you want to add. See you next time!

Sources

Passion for Coding: .NET == and .Equals()
Unity Blog: Custom == operator, should we keep it?
Rider: Avoid null comparisons against UnityEngine.Object subclasses
Rider: Possible unintended bypass of lifetime check of underlying Unity engine object