On the surface, time seems straightforward. It is an ordering imposed on events.[1] For any events $A$ and $B$, either $A$ occurs before $B$, $A$ and $B$ are simultaneous, or $A$ occurs after $B$. For instance, returning to the bank account example, suppose that Peter withdraws $10 and Paul withdraws $25 from a joint account that initially contains $100, leaving $65 in the account. Depending on the order of the two withdrawals, the sequence of balances in the account is either $\$100 \rightarrow \$90 \rightarrow\$65$ or $\$100 \rightarrow \$75 \rightarrow\$65$. In a computer implementation of the banking system, this changing sequence of balances could be modeled by successive assignments to a variable balance.
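The changing sequence of balances can be sketched directly as successive assignments (a second variable, balance2, is used here purely to show the alternative order side by side):

```javascript
// Sketch: the two possible orders of the withdrawals, each modeled as
// successive assignments to a balance variable.
let balance = 100;
balance = balance - 10;      // Peter withdraws 10: balance is 90
balance = balance - 25;      // then Paul withdraws 25: balance is 65

let balance2 = 100;          // the other order
balance2 = balance2 - 25;    // Paul withdraws 25: balance is 75
balance2 = balance2 - 10;    // then Peter withdraws 10: balance is 65
```

Either order ends with $65 in the account; only the intermediate balances differ.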
In complex situations, however, such a view can be problematic. Suppose that Peter and Paul, and other people besides, are accessing the same bank account through a network of banking machines distributed all over the world. The actual sequence of balances in the account will depend critically on the detailed timing of the accesses and the details of the communication among the machines.
This indeterminacy in the order of events can pose serious problems in the design of concurrent systems. For instance, suppose that the withdrawals made by Peter and Paul are implemented as two separate threads sharing a common variable balance, each thread specified by the function given in section 3.1.1:
function withdraw(amount) {
    if (balance >= amount) {
        balance = balance - amount;
        return balance;
    } else {
        return "Insufficient funds";
    }
}
Things can be worse still. Consider the statement

balance = balance - amount;

executed as part of each withdrawal. This statement consists of three steps: accessing the value of balance, computing the new balance, and assigning the new value to balance. If Peter's and Paul's withdrawals execute this statement concurrently, the steps of the two withdrawals may be interleaved.
The timing diagram in figure 3.53 depicts an order of events where balance starts at 100, Peter withdraws 10, Paul withdraws 25, and yet the final value of balance is 75. As shown in the diagram, the reason for this anomaly is that Paul's assignment of 75 to balance is made under the assumption that the value of balance to be decremented is 100. That assumption, however, became invalid when Peter changed balance to 90. This is a catastrophic failure for the banking system, because the total amount of money in the system is not conserved. Before the transactions, the total amount of money was $100. Afterwards, Peter has $10, Paul has $25, and the bank has $75.[2]
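The anomaly in the timing diagram can be reproduced by interleaving the read and write steps by hand; the temporaries peterRead and paulRead below are illustrative, standing for each thread's access of balance:

```javascript
// Sketch: an interleaving of the read-compute-write steps of the two
// withdrawals that loses Peter's withdrawal entirely.
let balance = 100;

const peterRead = balance;   // Peter accesses balance: 100
const paulRead = balance;    // Paul accesses balance: still 100
balance = peterRead - 10;    // Peter assigns the new balance: 90
balance = paulRead - 25;     // Paul assigns 75, based on his stale read
// balance is now 75: Peter's $10 withdrawal has vanished from the books.
```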
The general phenomenon illustrated here is that several threads may share a common state variable. What makes this complicated is that more than one thread may be trying to manipulate the shared state at the same time. For the bank account example, during each transaction, each customer should be able to act as if the other customers did not exist. When customers change the balance in a way that depends on the balance, they must be able to assume that, just before the moment of change, the balance is still what they thought it was.
The above example typifies the subtle bugs that can creep into concurrent programs. The root of this complexity lies in the assignments to variables that are shared among the different threads. We already know that we must be careful in writing programs that use assignment, because the results of a computation depend on the order in which the assignments occur.[3] With concurrent threads we must be especially careful about assignments, because we may not be able to control the order of the assignments made by the different threads. If several such changes might be made concurrently (as with two depositors accessing a joint account) we need some way to ensure that our system behaves correctly. For example, in the case of withdrawals from a joint bank account, we must ensure that money is conserved. To make concurrent programs behave correctly, we may have to place some restrictions on concurrent execution.
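As a baseline, when the two withdrawals are forced to run one after the other, money is conserved in either order. A minimal sketch (makeAccount here is an illustrative wrapper that keeps balance local, not a function from the text):

```javascript
// Sketch: sequential withdrawals conserve the total amount of money.
function makeAccount(balance) {
    return function withdraw(amount) {
        if (balance >= amount) {
            balance = balance - amount;
            return balance;
        } else {
            return "Insufficient funds";
        }
    };
}
const withdraw = makeAccount(100);
const afterPeter = withdraw(10);   // balance becomes 90
const afterPaul = withdraw(25);    // balance becomes 65
// The bank holds 65, Peter holds 10, Paul holds 25: the total is still 100.
```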
One possible restriction on concurrency would stipulate that no two operations that change any shared state variables can occur at the same time. This is an extremely stringent requirement. For distributed banking, it would require the system designer to ensure that only one transaction could proceed at a time. This would be both inefficient and overly conservative. Figure 3.54 shows Peter and Paul sharing a bank account, where Paul has a private account as well. The diagram illustrates two withdrawals from the shared account (one by Peter and one by Paul) and a deposit to Paul's private account.[4] The two withdrawals from the shared account must not be concurrent (since both access and update the same account), and Paul's deposit and withdrawal must not be concurrent (since both access and update the amount in Paul's wallet). But there should be no problem permitting Paul's deposit to his private account to proceed concurrently with Peter's withdrawal from the shared account.
A less stringent restriction on concurrency would ensure that a concurrent system produces the same result as if the threads had run sequentially in some order. There are two important aspects to this requirement. First, it does not require the threads to actually run sequentially, but only to produce results that are the same as if they had run sequentially. For the example in figure 3.54, the designer of the bank account system can safely allow Paul's deposit and Peter's withdrawal to happen concurrently, because the net result will be the same as if the two operations had happened sequentially. Second, there may be more than one possible correct result produced by a concurrent program, because we require only that the result be the same as for some sequential order. For example, suppose that Peter and Paul's joint account starts out with $100, and Peter deposits $40 while Paul concurrently withdraws half the money in the account. Then sequential execution could result in the account balance being either $70 or $90 (see exercise 3.38).[5]
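The two possible outcomes can be checked by composing the operations in each sequential order (deposit40 and withdrawHalf are illustrative names, not functions from the text):

```javascript
// Sketch: the two sequential orders of Peter's deposit of 40 and Paul's
// withdrawal of half the money, starting from a balance of 100.
const deposit40 = b => b + 40;
const withdrawHalf = b => b - b / 2;

const depositFirst = withdrawHalf(deposit40(100));    // 100 -> 140 -> 70
const withdrawalFirst = deposit40(withdrawHalf(100)); // 100 -> 50 -> 90
```

Both results are correct under this requirement, since each corresponds to some sequential order.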
There are still weaker requirements for correct execution of concurrent programs. A program for simulating diffusion (say, the flow of heat in an object) might consist of a large number of threads, each one representing a small volume of space, that update their values concurrently. Each thread repeatedly changes its value to the average of its own value and its neighbors' values. This algorithm converges to the right answer independent of the order in which the operations are done; there is no need for any restrictions on concurrent use of the shared values.
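A sketch of this order-independence in one dimension, with the two boundary cells held at fixed temperatures so that the steady state is unique (relax and the particular cell values are assumptions for illustration):

```javascript
// Sketch: one-dimensional heat relaxation. Interior cells are repeatedly
// set to the average of their own and their neighbors' values, while the
// boundary cells stay fixed. Because the steady state is unique, the
// order of updates does not affect the answer.
function relax(cells, order, sweeps) {
    for (let k = 0; k < sweeps; k++) {
        for (const i of order) {   // interior indices only
            cells[i] = (cells[i - 1] + cells[i] + cells[i + 1]) / 3;
        }
    }
    return cells;
}
const a = relax([0, 50, 50, 50, 100], [1, 2, 3], 2000);
const b = relax([0, 50, 50, 50, 100], [3, 1, 2], 2000);
// Both sweep orders approach the same steady state: 0, 25, 50, 75, 100.
```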
[1] Time is a device that was invented to keep everything from happening at once.
cached) among the different processors to increase the speed of memory access.