When software internationalization isn’t just about UI: a tale of how a parsing error crashed our game

Whenever we talk about adapting a game to different countries, the first thing we think of is localization, but we sometimes neglect its sibling: internationalization. Wait, what’s the difference again? Internationalization is the process of designing and developing your software so it can easily be adapted to different countries, cultures and languages. Localization is the process of adapting existing software to a new country, culture or language, usually by translating text and/or adding components that are relevant to the new environment. Even though localization uses the tools provided by internationalization to deliver its work, internationalization’s role is not simply to assist localization, as we will soon find out. In this article I will discuss how a software internationalization bug crashed our game, how hard it was to unearth the source of the error and how easy it was to fix it.

How every bug starts

A few days after we released a new version of our mobile application, we started receiving bug reports about one of its mini games. Our application consisted of 2 devices that communicate with each other: a dashboard and a client. Game data is exchanged between them at the start of every mini game to ensure that both ends generate the same world. The bug report described that the dashboard application froze right when one of the mini games started, and eventually quit. The other mini games were unaffected, and so was the client application. We had experienced that before: it sounded like a memory problem that forced the OS to kill the app. We tried to replicate the bug, but failed every time. We used the same devices (iPad Air and Oculus Go) and the same OS versions as the customers, and we still could not reproduce the crash. Yet, our customers reported that the crash happened consistently whenever they tried to play that specific mini game. We were about to give up and drive to our closest customer’s office to experience the crash firsthand, when we got our hands on a couple of devices that could reproduce the bug consistently.

Time to dig deeper

Once we could reproduce the bug consistently, it was time to pinpoint the source of the crash and, finally, of the bug. We generated a development build and installed it on the iPad using Xcode’s debug mode, which lets us observe the application’s memory consumption. Just like we first imagined, starting the game caused a RAM surge to the point where the OS killed the application.

Now the source of the crash was evident: memory consumption. We were left to discover the cause of such high RAM usage. At this point, it seemed like just another case of our application growing a bit too much with the new version (new features, more assets), just enough to hit the device’s memory threshold. We started by investigating the usual suspect of memory hunger: art assets. After some thought, we concluded that assets were probably not the problem, because that mini game was one of the least asset-heavy games in the system. To test this theory, we ran the app on an iPad which had twice as much RAM as the previous one. To our surprise, the app’s memory consumption also climbed up to the point where the OS killed the app. But this time, the RAM usage was more than twice as high as on the previous iPad. Something was allocating RAM non-stop, and there was no way it was the assets.

The next suspect in line was code. This mini game had been part of the application for at least a year, so there must have been a recent change that broke its logic and was allocating memory like there’s no tomorrow. We checked our repository’s history and… there were no changes to that mini game. At all. No code changes, no asset changes. Maybe it was not the code after all? Who’s the next suspect in line?

Data. Whenever a game starts, the client application generates the world and sends the generation data to the dashboard application. If that data were corrupted, it could make the game perform an absurd task that endlessly allocates memory. But this was not the case: both worlds (the dashboard’s and the client’s) were generated using the same data, and the application never froze on the client, only on the dashboard. So maybe it was not the data?

Back to ground zero

Disclaimer: for the sake of simplicity, some implementation details are hidden and/or modified.

So far, we had concluded that the bug was caused neither by assets, nor by code, nor by corrupted data. What were we left with? An engine bug? All the other mini games ran fine, so that was not likely. At this point we took a step back, stopped analyzing the technical aspects and looked at the game itself. Which aspect of the gameplay could get out of control to the point where it would consume memory non-stop? It was a simple “connect the dots” game, where the dots formed a sine wave that connected 2 points in space. Depending on the player’s movement capabilities, these 2 points could be closer together or farther apart. The spacing between the dots was constant, therefore the sine wave had to be constructed dynamically.

Wait a minute. What if, for some reason, the wave generation never stopped, kept spawning new dots indefinitely, and that caused the memory pressure? We could never see that happening because the wave was generated within a single frame – a frame that never finished rendering. We added some debug messages to the code and watched the console as the game loaded. Sure enough, hundreds of thousands of dots were instantiated, whereas in a normal game session the number of dots would never go over 100. The game object instantiation stopped only when the OS killed the application. So it was the code.

We dove into the code and found out that the number of dots was calculated based on the distance between the start and end points of the sine wave. We analyzed the algorithm used to distribute the dots along the wave and it seemed to be correct. After a few more attempts, we found out that the distance between the wave’s start and end points was in the order of millions of units. That could only be true if the start and end points were really, really far apart. But in reality, these points should never be farther than 10 units from each other. We dug a bit deeper and confirmed that the start and end points were indeed millions of units apart on the dashboard, far away from the play area. On the client, the wave was correctly generated and its start and end points were certainly not millions of units apart.
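To make the failure mode concrete, here is a simplified sketch of how such a dot-distribution routine might look (the names and the spacing value are illustrative, not our actual code):

```csharp
using System;
using System.Collections.Generic;

public static class WaveDots
{
    private const float Spacing = 0.25f; // constant distance between consecutive dots

    // Returns the X coordinates of the dots that form the wave between startX and endX.
    public static List<float> DistributeDots(float startX, float endX)
    {
        int count = (int)(Math.Abs(endX - startX) / Spacing) + 1;
        var dots = new List<float>(count);
        float direction = Math.Sign(endX - startX);
        for (int i = 0; i < count; i++)
            dots.Add(startX + direction * i * Spacing);
        return dots;
    }
}
```

With endpoints at most 10 units apart, a loop like this yields at most a few dozen dots. Fed endpoints millions of units apart, the very same, perfectly correct loop allocates millions of entries – and, in the game, millions of game objects.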

We checked the game data exchanged between dashboard and client, and it seemed to be correct. The start and end point fields contained something like 3.141592 and -4.162192. We tested the same game with an Android tablet as dashboard instead of the iPad. The start and end points were where they should be, just a few units apart from each other, within the player’s field of view. And as expected, no crashes happened on the Android tablet. Maybe it was a platform-dependent bug? Again, we tested with another iPad. The game ran fine, the sine wave was generated as expected and there were no crashes. What was going on here?

We then noticed an otherwise trivial detail: the iPad that reproduced the crashes had its system language set to Dutch, while the iPad that ran the game without problems had it set to English. A suspicion arose: could the bug have been caused by the system language settings? We set the “crashing” device’s system language to English and the crashes stopped. We set it back to Dutch and the crashes were back. We did the opposite on the other device and reproduced the same behavior consistently. We called the customer who reported the bug and confirmed that their iPad’s system was in Dutch. Alright, so we found out why some devices could reproduce the bug and some could not. Now what?

Ladies and gentlemen: the bug

As you may have guessed by now, the problem was caused by a lack of internationalization. Let’s see what happened, exactly. The game data we discussed above (the sine wave start and end points) was sent from the client to the dashboard, where it was used to construct the sine wave. In this game, only the X position coordinates were relevant because the other 2 coordinates were known. The start and end positions’ X coordinates were stored as a colon-separated string with a naive implementation that used ToString(). An example of such string is "3.14159265:-4.162".

The problem with this solution is that it assumes that floats will always turn into strings that separate the integer and fractional parts using dots – which is not always the case. Different countries and cultures represent decimal numbers using different separators. The US English (en-US) standard uses a dot as the separator between integer and fractional parts, which would represent the start and end points as "3.14159265:-4.162". It also uses commas as visual separators to ease the reading of long numbers: 4000000 can be written as 4,000,000. The Dutch standard (nl-NL) does the exact opposite: commas separate the integer and fractional parts and dots ease the reading of long numbers. Under the Dutch standard, the same points would be represented as "3,14159265:-4,162". This is usually harmless as long as the calls to ToString() and float.TryParse() use the same standard. The problem arises when they do not, which was exactly what happened in our application.

If the client generates the string representation using the en-US standard, the output will be "3.14159265:-4.162". If you split this string into 2 substrings at the colon and parse each substring using the Dutch standard, you get 314159265 and -4162, because the Dutch standard sees dots as visual aids, not as separators between integer and fractional parts. As a consequence of this standard mismatch, the start position of the sine wave was 314159265 and the end position was -4162. The algorithm that distributed the dots along the wave instantiated thousands, if not millions of dots between those points, which led to the application’s freeze, high memory consumption and eventual crash. In the end, the bug was caused by both code and data.
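A minimal console reproduction of the mismatch (simplified from the actual serialization code):

```csharp
using System;
using System.Globalization;

public static class CultureMismatchDemo
{
    public static void Main()
    {
        var enUS = CultureInfo.GetCultureInfo("en-US");
        var nlNL = CultureInfo.GetCultureInfo("nl-NL");

        // Payload as serialized by the client under en-US rules.
        string payload = "3.14159265:-4.162";
        string[] parts = payload.Split(':');

        // The dashboard parses with the device's culture. Under nl-NL, '.' is a
        // group separator, so the decimal point is silently swallowed.
        float wrongStart = float.Parse(parts[0], nlNL); // hundreds of millions of units
        float wrongEnd   = float.Parse(parts[1], nlNL); // -4162 instead of -4.162

        float rightStart = float.Parse(parts[0], enUS); // ~3.1415927
        float rightEnd   = float.Parse(parts[1], enUS); // -4.162

        Console.WriteLine($"{rightStart} vs {wrongStart}");
        Console.WriteLine($"{rightEnd} vs {wrongEnd}");
    }
}
```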

How do I fix that?

Fortunately, the bug was easily fixed. The C# standard library recognizes that cultural differences play an important role and provides a type called CultureInfo which stores – among other things – how decimal numbers should be represented. The ToString() method has an overload that takes a parameter for this purpose: ToString(IFormatProvider), where CultureInfo implements IFormatProvider. Similar overloads are available for Parse and TryParse. The bug was fixed by replacing the previous calls to ToString and TryParse with their respective culture-sensitive overloads. We used CultureInfo.InvariantCulture as the format provider because it contains invariant culture information that is based on the English language but not associated with any country or region.
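The shape of the fix, sketched here with illustrative method names (not our actual code):

```csharp
using System.Globalization;

public static class WaveDataSerializer
{
    // Both sides pin the invariant culture, so the payload reads the same
    // regardless of the device's system language and region settings.
    public static string Serialize(float startX, float endX) =>
        startX.ToString(CultureInfo.InvariantCulture) + ":" +
        endX.ToString(CultureInfo.InvariantCulture);

    public static (float StartX, float EndX) Deserialize(string payload)
    {
        string[] parts = payload.Split(':');
        return (float.Parse(parts[0], CultureInfo.InvariantCulture),
                float.Parse(parts[1], CultureInfo.InvariantCulture));
    }
}
```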

Method calls without an IFormatProvider use CultureInfo.CurrentCulture (the current thread’s culture info) to obtain culture information. If the application never sets a default culture for its threads, the system’s locale information is used. That is why our application behaved differently on iPads with different system language and region settings. If we had specified the CultureInfo in our calls, the system locale information would not have been used. If your application code consistently uses culture-sensitive method overloads instead of the vanilla ones, you eliminate this entire class of internationalization errors.

In our case, using method overloads that take a format provider is standard practice, but that specific case flew under our radar. It could have been avoided if the programmer who wrote that code had used an IDE like Rider, which warns about calls to ToString, Parse and TryParse overloads that do not pass a format provider.

Conclusion

In this article we saw an example of how poor software internationalization went beyond the UI and led to an application crash due to memory consumption. We also learned how to avoid such errors using the tools available in C#’s standard library.

As usual, please leave a comment if you have something to add to the discussion, to point out an error, or simply to say hello. Thank you for the (long) read and until next time!

Null Check and Equality in Unity

In some programming languages – like C# – it is common practice to use comparison operators and functions to check for null references. However, when programming in Unity, there are some particularities to keep in mind that C# programmers usually do not take into consideration. This article is a guide on how these caveats work and how to properly use C#’s equality tools in Unity.

A quick recap of C#’s equality functions and operators

There are three main ways to check for equality in C#: the ReferenceEquals function, the == operator and the Equals function. If you are an experienced C# developer who knows your way around the language’s equality tools, feel free to skip this section and jump straight to the Unity section.

The ReferenceEquals function

This function is not as famous as the other alternatives, but it is the easiest to understand. It is a static function of the Object class and it takes the two objects to be compared for equality as arguments.

public static bool ReferenceEquals (object objA, object objB);

It returns a bool that tells whether the two arguments have the same reference – that is, the same memory address. It cannot be overridden, which is understandable: it does not inspect the objects’ contents or data, it only takes their references into account.
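A quick illustration, including a boxing pitfall worth knowing about: value type arguments are boxed into fresh objects on each call, so ReferenceEquals always returns false for them.

```csharp
using System;

var a = new object();
var b = new object();
var c = a;

Console.WriteLine(object.ReferenceEquals(a, b));    // False: two distinct instances
Console.WriteLine(object.ReferenceEquals(a, c));    // True: same instance
Console.WriteLine(object.ReferenceEquals(a, null)); // False: a is a valid reference

int x = 42;
Console.WriteLine(object.ReferenceEquals(x, x));    // False: each argument is boxed separately
```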

The == operator

The == operator can be used with both value and reference types. For built-in value types, it returns whether the values are the same. For user-defined value types, it can only be used if the operator has been defined. Here’s an example of a == operator defined for the Coordinates struct. The != operator must also be defined whenever == is, otherwise a “The operator == requires a matching operator ‘!=’ to also be defined” compilation error will be thrown.

public struct Coordinates
{
    private int _x;
    private int _y;

    public static bool operator ==(Coordinates a, Coordinates b)
    {
        return a._x == b._x && a._y == b._y;
    }

    public static bool operator !=(Coordinates a, Coordinates b)
    {
        return !(a == b);
    }
}

The operator’s behaviour differs a bit for user-defined reference types (a.k.a. objects). A custom == operator can be defined for any reference type, but unlike for value types, you don’t have to define the operator before using it. The reason is that the System.Object class (which all other reference types inherit from) implements the == operator. The implementation is really simple and well known: two Objects are considered equal if their references (i.e. their memory addresses) are the same. Its behaviour is the same as the ReferenceEquals function explained above.

Although this might make sense, sometimes we want to implement custom behaviour for this operator, usually when we want 2 different objects (with different references) to be considered equal if some of their data is the same. Consider the following example with the Person class, where two instances are equal (according to the == operator) if they share the same _id.

public class Person
{
    private string _name;
    private int _id;
    
    public static bool operator ==(Person a, Person b)
    {
        if (ReferenceEquals(a, b))
            return true;
        if (ReferenceEquals(a, null) || ReferenceEquals(b, null))
            return false;
        return a._id == b._id;
    }

    public static bool operator !=(Person a, Person b)
    {
        return !(a == b);
    }
}

Note that both arguments are of type Person, so the operator can only be used on objects of that type – and on its subtypes.

The Equals function

This function lives in the Object class but unlike ReferenceEquals, it is virtual and can be overridden by any user-defined type. Its default behaviour for reference types (implemented in the Object class) mimics ReferenceEquals: it checks whether the objects share the same reference. Its default behaviour for value types (defined in the ValueType class) checks whether all fields of both objects are the same. Check its definition below.

public virtual bool Equals (object obj);

Unlike the == operator, it is not static and it takes a single parameter of type object which represents the object to check equality against. Also notice that, unlike with the == operator, the argument is of type object and not of the type we are implementing Equals for. Check the example below, where the function is implemented in the Coordinates class.

public class Coordinates
{
    private int _x;
    private int _y;

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(obj, null))
            return false;
        if (obj is Coordinates c)
            return c._x == _x && c._y == _y;
        return false;
    }

    // Whenever Equals is overridden, GetHashCode should be overridden too
    // (the compiler warns otherwise), keeping both consistent.
    public override int GetHashCode()
    {
        return (_x, _y).GetHashCode();
    }
}

In addition to checking the parameter for a null reference, it is necessary to cast it to Coordinates before actually checking for equality. It is also worth noting that the == operator can check both operands for null, while Equals can only check its single parameter: if the object we are calling Equals on is itself null, a NullReferenceException will be thrown.

If you want to dive deeper into C#’s equality tools, you might want to check this article out.

Equality in Unity

Out of the three main equality tools C# provides (ReferenceEquals, Equals and ==), only the == operator requires special attention – the other two behave exactly like they do in vanilla C#.

Unity provides a custom implementation of the == operator (and naturally of != as well) for types that inherit from the UnityEngine.Object class (e.g. MonoBehaviour and ScriptableObject). For other types – like a custom class that doesn’t inherit from any other class – C#’s standard implementation is used. When comparing a UnityEngine.Object against null, the engine not only checks whether the operand itself is null, but also whether its underlying native entity was destroyed. For example, observe the following sequence of actions:

Assuming we have a MonoBehaviour called ExampleBehaviour, create a new GameObject and attach an instance to it:

var obj = new GameObject("MyGameObject");
var example = obj.AddComponent<ExampleBehaviour>();

Later on the game, we decide to destroy the ExampleBehaviour‘s instance:

Destroy(example);

And later on, we check the ExampleBehaviour instance for equality against null:

Debug.Log(example == null);

The debug statement above will print “true”. At first, that might seem obvious because we just destroyed that instance, but as I explained in my previous article, the instance’s reference is not null and it has not been garbage-collected yet. In fact, it won’t be garbage-collected as long as a reference to it exists. What Unity’s custom == operator does in this scenario is check whether the underlying entity has been destroyed, which in this case is true. This behaviour helps programmers identify objects that have been destroyed but still hold a valid reference.
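Continuing the example above, the difference between Unity’s operator and a plain reference check looks like this (a sketch, not production code):

```csharp
Destroy(example);
// One frame later:
Debug.Log(example == null);                         // true: Unity's == sees the destroyed native entity
Debug.Log(ReferenceEquals(example, null));          // false: the managed wrapper still exists
```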

Other similar operators

A few C# operators have implicit null checks. They are worth investigating here because they behave inconsistently with the == operator.

The null-conditional operators ?. and ?[]

These operators are shortcuts for safe member and element access, respectively. The portion of code following the ?. or ?[] will only be executed if the object they are invoked on is not null. In standard C#, they are the equivalent of the same call wrapped in a null check. For example, the following code, assuming that _dog is not an instance of UnityEngine.Object:

if (_dog != null)
    _dog.Bark();

Can be replaced with:

_dog?.Bark();

Although these two code snippets behave exactly the same in vanilla C#, they behave differently in Unity if _dog is an instance of UnityEngine.Object. Unlike for ==, the engine does not provide custom implementations of these operators. As a consequence, the first code snippet checks for underlying object destruction whereas the second does not. If you use the Rider IDE, the warning “Possible unintended bypass of lifetime check of underlying Unity engine object” will be displayed whenever one of these operators is used on an object of a class that inherits from UnityEngine.Object.
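For instance, assuming _dog is a MonoBehaviour that has already been destroyed, the two supposedly equivalent snippets diverge (illustrative sketch):

```csharp
// _dog was destroyed with Destroy(_dog) on a previous frame.
if (_dog != null)   // Unity's ==: detects the destroyed native entity, so the guard blocks the call
    _dog.Bark();    // never executes

_dog?.Bark();       // plain C# reference check: the managed wrapper is not null, so Bark() still runs
```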

The null-coalescing operators ?? and ??=

The ?? operator checks if its left operand is null. If it is not, it returns its left operand; if it is, it returns its right operand. In the example below, assuming that Animal is a class that does not inherit from UnityEngine.Object, a3 will point to a2 because the left operand of ?? (a1) is null.

Animal a1 = null;
Animal a2 = new Animal();
Animal a3 = a1 ?? a2;

It is equivalent to

if (a1 == null)
    a3 = a2;
else
    a3 = a1;

The ??= is an assignment operator that assigns its right operand to its left operand only if its left operand is null. In the example below, a3 will be assigned to a1 only if a1 is null.

Animal a1 = ...
a1 ??= a3;

It is equivalent to

Animal a1 = ...
if (a1 == null)
    a1 = a3;

Just like with the null-conditional operators, there are no custom implementations of these operators for UnityEngine.Object. As a consequence, if the Animal class from the code snippets above inherited from MonoBehaviour, for example, the implicit null checks would not behave like the null checks using the == operator. Thus, their respective “equivalent” code would not be equivalent anymore. Again, a warning will be displayed in the Rider IDE when using these operators on objects that inherit from UnityEngine.Object.

Wrapping up

Equality operators and functions are basic language constructs present in every C# programmer’s toolset. When developing in standard C#, a programmer should keep in mind how some of these constructs behave differently for value and reference types. When programming in C# for Unity, a developer must also keep in mind how the engine tailored the language’s == and != operators to its ecosystem. In addition, one must keep in mind that some shortcut operators that perform implicit null checks behave inconsistently with the engine’s == operator. With that in mind, a developer should master all these equality tools in order to avoid undesired behaviour. Finally, some IDEs like Rider will warn the programmer about possible pitfalls regarding these operators.

That’s it for today. As always, feel free to leave a comment with questions, corrections, criticism or anything else that you want to add. See you next time!

Source

Passion for Coding: .NET == and .Equals()
Unity Blog: Custom == operator, should we keep it?
Rider: Avoid null comparisons against UnityEngine.Object subclasses
Rider: Possible unintended bypass of lifetime check of underlying Unity engine object

Unity’s Scripting Duality and Object Destruction

The Unity engine provides users with tools and abstractions that ease its usage and hide its complexity. Although we often take these conveniences for granted and completely forget they exist, we sometimes face a situation in which they become apparent, usually due to an unexpected behaviour. In this article, I discuss how an example of such an abstraction tool – Unity’s scripting solution – can accidentally expose the engine’s underlying mechanisms.

The duality: managed vs. native

As we all know, Unity’s programming language of choice is C#, but we need to keep in mind that the engine code itself isn’t written in C#, but in C/C++. Consequently, every piece of code that invokes engine code (e.g. transform, GetComponent, gameObject.SetActive) does not run purely on the C# side, but on the native C++ side as well. An object of a type that inherits from UnityEngine.Object (like a MonoBehaviour) has two counterparts that live in different worlds: a managed object in the C# world and a native object in the engine world. These two entities are linked to each other – the managed entity holds a pointer to the native entity – but are not, in fact, the same thing. Calls to the managed entity (often referred to as a “wrapper”) are forwarded to the native entity whenever necessary: when engine code is invoked. Conversely, wrapper calls that do not invoke engine code are not forwarded to the native entity and execute locally.

This might seem an unnecessarily deep dive into the engine, but it has some unexpected consequences. Take the example below. A Dog is a MonoBehaviour that does two simple things: Bark and Move. The first is a pure C# method that does not invoke engine code, whereas the second moves the dog’s GameObject, invoking native engine code.

public class Dog : MonoBehaviour
{
    public void Bark()
    {
        Debug.Log("Woof!");
    }

    public void Move()
    {
        transform.position += new Vector3(1f,1f,1f);
    }
}

Now let’s test our Dog script with the help of the DogExample script below. This script creates a dog on Awake and can perform three actions based on user input: destroy the dog, make it bark and move. Simple as that.

public class DogExample : MonoBehaviour
{
    private Dog _dog;

    private void Awake()
    {
        var dogGameObject = new GameObject("Dog");
        _dog = dogGameObject.AddComponent<Dog>();
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.D))
            Destroy(_dog);
        if (Input.GetKeyDown(KeyCode.B))
            _dog.Bark();
        if (Input.GetKeyDown(KeyCode.M))
            _dog.Move();    
    }
}

Next, let’s test it by creating a new empty scene with the above script attached to a GameObject. Once we play the game, a new GameObject named “Dog” is created, with a Dog script added to it. If we hit the B key, the dog barks (in Unity’s console). If we hit the M key, it moves (we can see its position in the inspector). Then, we hit the D key to destroy that Dog instance. In the editor hierarchy, we can still see the “Dog” object, but there is no Dog script attached to it anymore. If we now try to make the dog move, a MissingReferenceException is thrown, stating that “The object of type ‘Dog’ has been destroyed but you are still trying to access it”. The exception makes sense because we’ve just destroyed the Dog instance. The same thing happens if we destroy dog.gameObject instead (but then the “Dog” game object would also be removed from the hierarchy).

However, if we press the B key to make the dog bark, no errors are thrown and the “Woof!” message is displayed on the console. What just happened? The Dog script was just destroyed and an error was thrown proving it. How can the dog still bark?

This is where the managed vs. native duality explained in the previous section becomes useful. The two entities are different: a Dog script lives in the managed C# world and its underlying components (i.e. its GameObject and Transform) live in the native C++ engine world. They are connected, but they are not the same thing. When the dog was destroyed, its script instance was removed from the “Dog” game object and its native counterpart was wiped out, but the managed Dog object wasn’t – it will only cease to exist when it gets garbage-collected. As long as we keep a reference to that Dog instance, it will live. When we invoke the dog’s Bark method, we are simply invoking a regular C# method on an entity that lives in the managed world. As long as it doesn’t try to access any of the native world entities that just got destroyed, that method call will execute successfully.

This scenario changes if we try to access any of its underlying native entities, like its transform property, as we do in the Move method. In that case, the engine throws an error to let us know that the object of type Dog we are trying to access was destroyed. Note that the exception is a MissingReferenceException, defined in the UnityEngine namespace – not the built-in C# NullReferenceException. Again, that sounds like too deep a dive into engine details, but this subtle difference underscores the fact that the managed entity (the Dog instance) was not destroyed at all, otherwise a NullReferenceException would have been thrown. In fact, the wrapper object still lives. The call to Destroy only destroys native engine entities. In other words, the lifetime of native entities is determined by calls to Destroy (or a non-additive scene load) whereas the lifetime of managed entities is determined by the garbage collector. Thus, the two entities have different life cycles, and sometimes we need to keep that in mind.

For a deeper look into how the Unity engine works under the hood, check this article out.

The potential problem

We just went through why user-defined MonoBehaviours still live in the managed world even after their corresponding native entity was destroyed. We also discussed how we can still invoke some of their methods successfully, as long as they don’t try to access their native engine components. This fact introduces two important questions: how do we easily identify that these managed entities should not be accessed anymore because their underlying entities were destroyed? And is it a good practice to still access a MonoBehaviour after it has been destroyed?

Let’s start by answering the second question: no, it is not a good idea to access a MonoBehaviour after its native entities were destroyed. One might think it’s safe to keep some of its methods free from accesses to native engine code, but this practice badly hurts code maintainability. It introduces an unspoken (or undocumented) rule that some methods must not access some specific components. Consequently, a developer – unaware of this weird rule – might change the code later in development, potentially introducing errors. Additionally, it goes against the idea of using a MonoBehaviour, which is to attach scripts to GameObjects so they can interact. If you want an object that does not necessarily relate to a GameObject and its life cycle, do not inherit from MonoBehaviour and use a vanilla C# class instead – or maybe you want a ScriptableObject.

The solution

The first question remains open: how do we easily identify that these managed entities should not be accessed anymore? The answer comes from Unity. We can easily check if the underlying entities of a MonoBehaviour were destroyed in two ways:

  • Check for equality against null:

    if (_dog != null)
        _dog.Move();

  • Check it as a boolean expression:

    if (_dog == true)
        _dog.Move();

    or simply

    if (_dog)
        _dog.Move();

Both checks work because the UnityEngine.Object class (which MonoBehaviour inherits from) implements custom equality operators – and an implicit conversion to bool – that check whether the underlying entity was destroyed. This operation is more complex than simply checking if the object reference is null because it invokes native code to check if the underlying entity was destroyed. As a consequence, it is also less performant than a vanilla C# null comparison. That’s why the Rider IDE displays a warning (“Comparison to ‘null’ is expensive”) whenever this null check is performed in a performance critical context. This custom implementation was reconsidered a while ago by Unity developers, but it was kept and still exists. It gives us a way to safely and easily check the lifetime of a MonoBehaviour‘s underlying object, which is exactly what we were looking for. But there is one thing to keep in mind…

One little trap

Even though we can use Unity’s custom equality operators to check whether a MonoBehaviour has been destroyed, we need to be careful about when we perform this check. As it happens, Unity does not destroy an object exactly when Destroy is invoked. Instead, the given object is tagged for destruction, which will only happen after the end of the current Update loop, but before rendering. At that point, all objects that were tagged for destruction are actually destroyed. As a consequence, a check for destruction invoked right after the Destroy call (within the same Update loop) will return false. For example:

     
private void DestructionTest()
{
    Destroy(_dog);
    Debug.Log($"Is the dog null/destroyed? {_dog == null}");
}

The method above, when executed, outputs “Is the dog null/destroyed? False”. If we run the same check one frame later, or even inside LateUpdate, it returns true. That happens because the call to Destroy doesn’t actually destroy the object, it only tags it for later destruction. After the Update loop, the engine gathers all objects tagged for destruction and actually destroys them, one by one. As a consequence, we need to keep this delayed destruction in mind when checking for existing entities.

Not all of C#’s equality operators implement this custom behaviour. For a deeper look into Unity’s equality operators, check this article on the subject.

Conclusion

Unity does a great job at hiding implementation details and at abstracting away the complexity of its native side by providing C# wrappers for developers. Although this abstraction layer can often be ignored, there are some nuances we should keep in mind, like the different lifetimes of managed and native entities. But once we understand what is going on behind the scenes, it becomes clear that some unexpected behaviours are just consequences of Unity’s scripting duality.

In this article, we learned how managed entities that inherit from UnityEngine.Object have a native counterpart and how their lifetimes differ. We also learned how to use Unity’s custom implementation of the equality operators to safely check for object destruction. Finally, we learned how the engine’s strategy for handling destruction can interfere with the lifetime check we just mentioned.

That’s it for today. As always, feel free to leave a comment with questions, corrections, criticism or anything else that you want to add. See you next time!

Source

Unity Blog: Custom == operator, should we keep it?
Rider: Avoid null comparisons against UnityEngine.Object subclasses
Rider: Possible unintended bypass of lifetime check of underlying Unity engine object