3.4.2   Mechanisms for Controlling Concurrency

[1] Parallel-execute is not part of standard Scheme, but it can be implemented in MIT Scheme; in that implementation, the newly created processes run concurrently with the original Scheme process, and the value returned by parallel-execute is a special control object that can be used to halt them. Similarly, the function concurrent_execute used in this section is not part of the JavaScript standard, but the examples here can be implemented in ECMAScript 2020.
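As a rough sketch only (not the book's implementation), concurrent_execute can be approximated in Node.js by giving each argument function its own worker thread. Because each function is passed to the worker as source text, it cannot share lexical state with the parent in this form, so the sketch conveys only the flavor of the mechanism:

    // Sketch: run each thunk in its own Node.js worker thread.
    const { Worker } = require("node:worker_threads");

    function concurrent_execute(...fns) {
        // Each function is serialized to source text and evaluated
        // in a fresh worker, so it runs concurrently with the rest.
        return fns.map(fn => new Worker(`(${fn.toString()})();`,
                                        { eval: true }));
    }

    concurrent_execute(() => console.log("hello from one thread"),
                       () => console.log("hello from another"));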
[2] We have simplified exchange by exploiting the fact that our deposit message accepts negative amounts. (This is a serious bug in our banking system!)
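For reference, exchange as discussed in the section looks roughly like this; when difference is negative, the final call deposits a negative amount, which is to say it quietly withdraws:

    function exchange(account1, account2) {
        const difference = account1("balance") - account2("balance");
        account1("withdraw")(difference);  // negative difference: a deposit
        account2("deposit")(difference);   // negative difference: a withdrawal
    }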
[3] If the account balances start out as $10, $20, and $30, then after any number of concurrent exchanges, the balances should still be $10, $20, and $30 in some order. Serializing the deposits to individual accounts is not sufficient to guarantee this. See exercise 3.43.
[4] Exercise 3.45 investigates why deposits and withdrawals are no longer automatically serialized by the account.
[5] The term mutex is an abbreviation for mutual exclusion. The general problem of arranging a mechanism that permits concurrent threads to safely share resources is called the mutual exclusion problem. Our mutex is a simple variant of the semaphore mechanism (see exercise 3.47), which was introduced in the THE Multiprogramming System developed at the Technological University of Eindhoven and named for the university's initials in Dutch (Dijkstra 1968a). The acquire and release operations were originally called P and V, from the Dutch words passeren (to pass) and vrijgeven (to release), in reference to the semaphores used on railroad systems. Dijkstra's classic exposition (1968b) was one of the first to clearly present the issues of concurrency control, and showed how to use semaphores to handle a variety of concurrency problems.
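Exercise 3.47 asks for semaphore implementations in terms of the book's own mutexes and test_and_set; purely to illustrate the acquire (P) and release (V) semantics, here is a sketch using ECMAScript's Atomics, assuming worker threads sharing an Int32Array whose slot 0 holds the number of free units (all names here are this sketch's assumptions):

    // P: take one unit, sleeping while none are available.
    function P(shared) {
        for (;;) {
            const count = Atomics.load(shared, 0);
            if (count > 0 &&
                Atomics.compareExchange(shared, 0, count, count - 1) === count) {
                return;                  // successfully took a unit
            }
            Atomics.wait(shared, 0, 0);  // block only while the count is 0
        }
    }

    // V: return one unit and wake a sleeping thread, if any.
    function V(shared) {
        Atomics.add(shared, 0, 1);
        Atomics.notify(shared, 0, 1);
    }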
[6] In most time-shared operating systems, threads that are blocked by a mutex do not waste time busy-waiting as above. Instead, the system schedules another thread to run while the first is waiting, and the blocked thread is awakened when the mutex becomes available.
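A sketch of this blocking behavior in JavaScript terms, assuming worker threads sharing an Int32Array whose slot 0 is the mutex state (0 = free, 1 = held); note that Atomics.wait is not permitted on a browser's main thread:

    function acquire(shared) {
        // Atomically try to flip 0 -> 1; on failure, sleep rather
        // than spin, until some thread releases the mutex.
        while (Atomics.compareExchange(shared, 0, 0, 1) !== 0) {
            Atomics.wait(shared, 0, 1);  // blocks only while the value is 1
        }
    }

    function release(shared) {
        Atomics.store(shared, 0, 0);
        Atomics.notify(shared, 0);       // wake sleeping threads, if any
    }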
[7] In MIT Scheme for a single processor, which uses a time-slicing model, test-and-set! can be implemented as follows:

    (define (test-and-set! cell)
      (without-interrupts
       (lambda ()
         (if (car cell)
             true
             (begin (set-car! cell true)
                    false)))))

Without-interrupts disables time-slicing interrupts while its procedure argument is being executed.
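For comparison, an atomic test-and-set can be sketched in ECMAScript 2020 with a compare-and-exchange on shared memory (illustrative, not the book's code):

    // Return true if the flag was already set; otherwise set it and
    // return false -- all in a single atomic step.
    function test_and_set(cell) {  // cell: an Int32Array on a SharedArrayBuffer
        return Atomics.compareExchange(cell, 0, 0, 1) !== 0;
    }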
[8] There are many variants of such instructions—including test-and-set, test-and-clear, swap, compare-and-exchange, load-reserve, and store-conditional—whose design must be carefully matched to the machine's processor–memory interface. One issue that arises here is to determine what happens if two threads attempt to acquire the same resource at exactly the same time by using such an instruction. This requires some mechanism for making a decision about which thread gets control. Such a mechanism is called an arbiter. Arbiters usually boil down to some sort of hardware device. Unfortunately, it is possible to prove that one cannot physically construct a fair arbiter that works 100% of the time unless one allows the arbiter an arbitrarily long time to make its decision. The fundamental phenomenon here was originally observed by the fourteenth-century French philosopher Jean Buridan in his commentary on Aristotle's De caelo. Buridan argued that a perfectly rational dog placed between two equally attractive sources of food will starve to death, because it is incapable of deciding which to go to first.
[9] The general technique for avoiding deadlock by numbering the shared resources and acquiring them in order is due to Havender (1968). Situations where deadlock cannot be avoided require deadlock-recovery methods, which entail having threads back out of the deadlocked state and try again. Deadlock-recovery mechanisms are widely used in database-management systems, a topic that is treated in detail in Gray and Reuter 1993.
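A sketch of the ordering technique, using hypothetical accounts that expose an identifying number and a mutex (these names are this sketch's assumptions, not the book's serializer interface):

    // Always lock the lower-numbered account first, so no two threads
    // can each hold one of the two locks while waiting for the other.
    function with_both_accounts(account1, account2, action) {
        const first  = account1.number < account2.number ? account1 : account2;
        const second = first === account1 ? account2 : account1;
        first.mutex("acquire");
        second.mutex("acquire");
        action();
        second.mutex("release");
        first.mutex("release");
    }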
[10] One such alternative to serialization is called barrier synchronization. The programmer permits concurrent threads to execute as they please, but establishes certain synchronization points (barriers) through which no thread can proceed until all the threads have reached the barrier. Some processors provide machine instructions that permit programmers to establish synchronization points at places where consistency is required. The PowerPC™, for example, includes for this purpose two instructions called SYNC and EIEIO (Enforced In-order Execution of Input/Output).
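A minimal single-use barrier along these lines can be sketched with Atomics, assuming n worker threads sharing an Int32Array (slot 0 counts arrivals, slot 1 is the "open" flag; names are illustrative):

    // Each thread calls barrier_wait; none proceeds past it until
    // all n threads have arrived.
    function barrier_wait(shared, n) {
        if (Atomics.add(shared, 0, 1) + 1 === n) {
            Atomics.store(shared, 1, 1);  // last arrival opens the barrier
            Atomics.notify(shared, 1);    // wake the threads sleeping below
        } else {
            Atomics.wait(shared, 1, 0);   // sleep until slot 1 becomes 1
        }
    }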
[11] This may seem like a strange point of view, but there are systems that work this way. International charges to credit-card accounts, for example, are normally cleared on a per-country basis, and the charges made in different countries are periodically reconciled. Thus the account balance may be different in different countries.
[12] For distributed systems, this perspective was pursued by Lamport (1978), who showed how to use communication to establish global clocks that can then be used to order events in such systems.
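The heart of Lamport's scheme is small enough to sketch here (names are illustrative): each thread keeps a counter that advances on every local event, stamps every outgoing message, and on receipt jumps past the larger of the two clocks, so that causally related events are consistently ordered:

    function make_lamport_clock() {
        let time = 0;
        return {
            tick: () => ++time,           // a local event
            stamp: () => ++time,          // timestamp an outgoing message
            receive: msg_time => {        // merge an incoming timestamp
                time = Math.max(time, msg_time) + 1;
                return time;
            }
        };
    }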