Thread safety is a term applied to a language's ability
to safely modify the same memory location concurrently from multiple
threads of execution. A language that does not offer thread safety cannot
guarantee the correctness of concurrent programs. Synchronization is one
of the most common means of providing thread safety.
To understand thread safety, you must first understand
multitasking. A program running multiple tasks (called threads) at
once does not actually run all of the threads at the exact same
time. Instead, each task takes up a small sliver of the CPU's total
processing time, and when it reaches a stopping point, or when an
amount of time has elapsed, it is suspended, set aside, and the
next task gets a chance to run. This happens so quickly that it
gives users the appearance of many things running at once.
However, when two threads are trying to use the same memory
resource, bad things can happen if they are interrupted at the
wrong time. Consider these two threads:
Thread 1: Load x. Add 5 to x. Store this result in x.
Thread 2: Load x. Add 10 to x. Store this result in x.
Now, if thread 1 runs before or after thread 2, there is no
problem. However, if thread 1 is interrupted, you might see a
bug:
x = 0
Thread 1: Load x (0) into reg1 (0).
Thread 1: Add 5 to reg1 (5).
Thread 2: Load x (0) into reg2 (0).
Thread 2: Add 10 to reg2 (10).
Thread 2: Store reg2 (10) into x (0 -> 10).
Thread 1: Store reg1 (5) into x (10 -> 5).
If 5 and 10 are both added to x, the total should be 15.
However, thread 2 preempted thread 1 at a critical point, so
thread 2 never saw thread 1's pending result of 5 and worked from
the stale value 0. Thread 1, in turn, had no idea that x had
changed to 10 while it was suspended, and so it clobbered thread
2's work when it resumed.
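The unlucky interleaving above can be reproduced deterministically by simulating each thread's private register by hand. This is a sketch in Python: the variables reg1 and reg2 stand in for the threads' registers, and the statement order mimics the preemption, not a real scheduler.

```python
# Simulate the race: each "register" holds a thread's private
# copy of x between its load and its store.
x = 0

# Thread 1 loads x (0) and computes 0 + 5 in its register.
reg1 = x
reg1 += 5

# Thread 2 preempts: it also loads x (still 0), computes 0 + 10,
# and stores 10 back into x.
reg2 = x
reg2 += 10
x = reg2           # x is now 10

# Thread 1 resumes and stores its stale result, clobbering
# thread 2's update.
x = reg1           # x is now 5, not 15

print(x)           # -> 5
```

The lost update is exactly the "10 -> 5" store in the trace: thread 1's register still holds a value computed from the old x.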
To fix this, you add a synchronization lock. A lock prevents a
thread from entering a critical section while another thread is
still inside it; the waiting thread may proceed only after the
lock has been released. It would look like this:
Thread 1: Lock x. Load x. Add 5 to x. Store x. Unlock x.
Thread 2: Lock x. Load x. Add 10 to x. Store x. Unlock x.
Now, the logic might look like this:
Thread 1: Lock x.
Thread 1: Load x.
Thread 1: Add 5 to x.
Thread 2: Lock x. (Cannot lock, so wait).
Thread 1: Store x.
Thread 1: Unlock x.
Thread 2: Lock x.
Thread 2: Load x.
Thread 2: Add 10 to x.
Thread 2: Store x.
Thread 2: Unlock x.
In this example, thread 2 was forced to wait for the other
thread. While overall processing speed is reduced, the system
added the numbers without losing either result; x would be 15
after the two threads were done.