Concurrency in Programming Languages
Matthew J. Sottile
Timothy G. Mattson
Craig E Rasmussen
© 2009 Matthew J. Sottile, Timothy G. Mattson, and Craig E Rasmussen
Chapter 3 Objectives
• Introduce correctness issues that arise as a result
of concurrency.
• Discuss techniques for controlling concurrency to
prevent these issues from resulting in software
defects.
Outline
• Correctness
• Control mechanisms
Correctness
• Concurrency introduces a family of correctness
problems that can arise due to interactions
between threads of execution that are:
– Executing at the same time.
– Interleaved in an unpredictable manner.
• The relative timing of the sequences of operations is unpredictable, so bugs can be transient, or manifest in ways that are very difficult to track down and reproduce.
Correctness
• Race conditions
• Deadlock and livelock
• Liveness, fairness, and starvation
• Nondeterminism
Race conditions
• Consider two sequences of operations that can
execute concurrently.
Sequence A: A1 → A2 → AX → A3 → A4
Sequence B: B1 → B2 → BX → B3 → B4
• A race condition exists if the result of the
sequences depends on their relative arrival at
some critical point in the sequence.
– Indicated here as AX and BX.
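As a concrete sketch (our example, not from the slides), consider two threads incrementing a shared counter. A plain `count++` is really a read-modify-write, so unsynchronized threads can interleave its steps and lose updates; an atomic increment enforces the required ordering.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads each increment a shared counter. An unsynchronized
// "count++" is three steps (read, add, write) whose interleaving
// decides the result, so updates can be lost. AtomicInteger makes
// the whole update one indivisible operation, removing the race.
public class RaceDemo {
    public static int raceFree(int perThread) {
        AtomicInteger count = new AtomicInteger(0);
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                count.incrementAndGet();   // atomic read-modify-write
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return count.get();                // always 2 * perThread
    }

    public static void main(String[] args) {
        System.out.println(raceFree(100_000));  // prints 200000
    }
}
```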
Race conditions
• If the result of operations after the critical point in
the sequence (AX or BX) is dependent on which
executes first (AX then BX versus BX then AX), a
race is present in the code.
– The result is dependent on which thread of execution
“wins” the race to that point.
• Code without a race means that either:
– The relative ordering doesn’t matter.
– Concurrency control constructs are used to enforce the
required ordering.
Race conditions
• Very difficult to debug in some cases, as races are
dependent on timing.
– OS upgrade may change performance of underlying
system primitives and change relative ordering, and
races may be revealed by code breaking.
Deadlock
• Deadlock results from a cyclical chain of
dependencies.
– A depends on B, B depends on C, and C depends on A.
• Often results from a misuse of concurrency control
constructs.
– A holds lock that B wants, B holds lock that C wants, C
holds lock that A wants.
• Often observable when a program “freezes” and simply ceases to make progress.
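The cyclic chain can be sketched with two locks (our example, not from the slides): if thread 1 takes lock A then lock B while thread 2 takes lock B then lock A, each may end up holding the lock the other wants. The standard fix shown here is a global lock ordering, which makes the cycle impossible.

```java
// Deadlock pattern and the standard fix. If two threads acquire
// lockA and lockB in opposite orders, a cyclic wait can form.
// Acquiring the locks in one agreed-upon global order (here:
// always lockA before lockB) breaks the cycle, so this completes.
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int shared = 0;

    // Fix: every thread takes lockA before lockB; no cycle can form.
    static void orderedIncrement() {
        synchronized (lockA) {
            synchronized (lockB) {
                shared++;
            }
        }
    }

    public static int runOrdered(int perThread) {
        shared = 0;
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) orderedIncrement();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { }
        return shared;   // completes (no deadlock), no lost updates
    }

    public static void main(String[] args) {
        System.out.println(runOrdered(10_000)); // prints 20000
    }
}
```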
Deadlock
• Deadlock is similar to a race, in that a program
may execute correctly sometimes, but under
certain circumstances the relative ordering of
threads of execution will cause the program to
hang.
• Like races, this is due to unenforced (or improperly
enforced) dependencies within the program
between concurrent threads of execution.
Livelock
• Similar to deadlock, but instead of freezing, a
program enters into an endless cycle of operations
that it cannot continue past.
– Example: Acquire lock 1, attempt to acquire lock 2 and
fail, release lock 1, retry.
• Livelock is detectable, but can be less obvious from simple observation than deadlock, because the program counter doesn’t freeze.
– It does enter a repeating cycle, though.
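The acquire-fail-release-retry cycle from the example can be sketched with tryLock (our sketch, not an API from the slides). Two perfectly symmetric threads could cycle forever; a small randomized backoff breaks the symmetry so both eventually get through.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

// The livelock-prone pattern: acquire one lock, *try* the other,
// and on failure release everything and retry. With no delay, two
// symmetric threads can cycle endlessly; a short randomized backoff
// desynchronizes them so each eventually acquires both locks.
public class LivelockDemo {
    static final ReentrantLock lock1 = new ReentrantLock();
    static final ReentrantLock lock2 = new ReentrantLock();
    static int done = 0;

    static void acquireBoth(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        while (true) {
            first.lock();
            if (second.tryLock()) {            // attempt the second lock
                try {
                    synchronized (LivelockDemo.class) { done++; }
                    return;
                } finally { second.unlock(); first.unlock(); }
            }
            first.unlock();                    // failed: release and retry...
            Thread.sleep(ThreadLocalRandom.current().nextInt(1, 5)); // ...after backoff
        }
    }

    public static int run() {
        done = 0;
        Thread a = new Thread(() -> {
            try { acquireBoth(lock1, lock2); } catch (InterruptedException e) { } });
        Thread b = new Thread(() -> {
            try { acquireBoth(lock2, lock1); } catch (InterruptedException e) { } });
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return done;                           // both threads finished
    }

    public static void main(String[] args) {
        System.out.println(run());             // prints 2
    }
}
```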
Liveness
• If multiple threads of execution are started, we
expect that they will all eventually finish.
• Often we will go further, and assume that during
any reasonable interval of time (such as 1
second), all concurrent threads will make some
progress.
• When a system or program ensures that this property holds, we say it ensures liveness.
Liveness
• Often liveness is a property of a scheduler, either
within the OS, thread library, or user application.
• Concurrent programmers who manage their own threads must occasionally consider the liveness of their management algorithms.
• When a system doesn’t ensure liveness, starvation can result.
Starvation
• A thread of execution will starve if it ceases to
make progress in the presence of others.
– Example: a program that is continuously preempted by
others and never allowed to execute.
• Priority and aging schemes can be used to ensure
that starvation does not occur due to a scheduler.
• Queuing disciplines (such as FIFO) can guarantee
starvation will not occur due to synchronization
primitives.
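Java exposes one such FIFO queuing discipline directly (this example is ours): a ReentrantLock constructed in fair mode grants the lock to the longest-waiting thread, so no waiter can be bypassed indefinitely at that lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// A fair ReentrantLock uses FIFO ordering among waiting threads,
// so contention at this lock cannot starve any single thread.
public class FairLockDemo {
    static final ReentrantLock fair = new ReentrantLock(true); // true = FIFO fairness
    static int turns = 0;

    public static int run(int threads, int perThread) {
        turns = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    fair.lock();
                    try { turns++; } finally { fair.unlock(); }
                }
            });
        }
        for (Thread t : ts) t.start();
        try { for (Thread t : ts) t.join(); } catch (InterruptedException e) { }
        return turns;       // every thread made all of its updates
    }

    public static void main(String[] args) {
        System.out.println(run(4, 1000)); // prints 4000
    }
}
```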
Nondeterminism
• Not always a problem – some algorithms can
exploit nondeterminism to their advantage.
– Example: issue concurrent queries to multiple search engines, and use whichever result arrives first.
• Nondeterminism is an inherent property of parallel
systems, and must be kept in mind at all times.
– Both for positive and negative reasons.
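The search-engine example can be sketched with Java's executor framework (the "engines" below are simulated stand-ins, not real network calls): `invokeAny` runs several tasks, returns whichever succeeds first, and cancels the rest, so which worker "wins" is deliberately nondeterministic.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// "First result wins": submit the same query to several workers and
// take whichever answer arrives first. invokeAny blocks until one
// task succeeds and cancels the others; the winner is nondeterministic,
// and here that is exploited rather than avoided.
public class FirstResultDemo {
    public static String firstAnswer() {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            List<Callable<String>> queries = List.of(
                () -> { Thread.sleep(50); return "result"; },  // simulated slow engine
                () -> { Thread.sleep(5);  return "result"; },  // simulated fast engine
                () -> { Thread.sleep(80); return "result"; }
            );
            return pool.invokeAny(queries);   // first successful answer
        } catch (Exception e) {
            return "error";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(firstAnswer());    // prints result
    }
}
```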
Outline
• Correctness
• Control mechanisms
Control
• To prevent correctness problems, we must control
how concurrently executing programs interact.
• Synchronization
– Semaphores, locks, monitors
– Mutual exclusion
• Transactions
– Speculative execution and conflict resolution
Synchronization
• Synchronization allows concurrent threads of
execution to communicate and coordinate
information about their relative state.
• This can be used to coordinate their execution and
prevent correctness problems.
Semaphores
• Semaphores are one of the original
synchronization primitives due to Dijkstra.
• Represented as:
– A single integer.
– A queue of blocked accessors.
• Two operations are used to manipulate
semaphores.
Semaphores
• Consider a semaphore s.
– Initialized to 1 with an empty queue.
• P(s): If s>0 then decrement s. Otherwise, block
the thread that attempted to perform P(s).
• V(s): If another thread is blocked on a call to P(s),
unblock it. Otherwise, increment s.
• Increment/decrement apply to the integer.
• Queue used to manage set of blocked threads.
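A minimal sketch of these semantics, using Java's built-in monitor operations (class and method names are ours): the wait set plays the role of the queue of blocked accessors. One small liberty: here V always increments and wakes a waiter, which then re-checks and decrements; the observable behavior matches the P/V description above.

```java
// Dijkstra's semaphore sketched with synchronized/wait/notify.
// P blocks while s == 0, then decrements; V increments and wakes
// one blocked thread, if any. The monitor's wait set serves as
// the queue of blocked accessors.
public class SimpleSemaphore {
    private int s;                      // the single integer

    public SimpleSemaphore(int initial) { s = initial; }

    public synchronized void P() throws InterruptedException {
        while (s == 0) wait();          // block until s > 0 (loop guards spurious wakeups)
        s--;                            // then decrement
    }

    public synchronized void V() {
        s++;                            // make a unit available...
        notify();                       // ...and unblock one waiter, if any
    }

    // Protect a shared counter with a binary semaphore: no lost updates.
    public static int exercise(int perThread) {
        SimpleSemaphore sem = new SimpleSemaphore(1);
        final int[] count = {0};
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                try { sem.P(); count[0]++; sem.V(); }
                catch (InterruptedException e) { return; }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return count[0];                // always 2 * perThread
    }

    public static void main(String[] args) {
        System.out.println(exercise(10_000));  // prints 20000
    }
}
```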
Semaphores
• Example:
– Before entering critical section, execute P(s).
• If no other thread is in the section, s will have value 1, so it will be decremented and the thread can enter the section.
• Otherwise, the thread blocks.
– When exiting critical section, execute V(s).
• If other threads are blocked on s, unblock one of the threads waiting in the queue.
– That thread can now enter the critical section.
• If no other threads are waiting, s is incremented to take on value
1.
– This will allow another thread to enter without blocking at some
point in the future.
Semaphores
• We have described a binary semaphore so far.
– Values can be 0 or 1.
• We can allow the semaphore to range over other
values.
– E.g.: Initialize to 2. This will allow at most two threads
into a region protected by the semaphore.
– These are called counting semaphores when values other than 0 and 1 are legal.
• Semaphores are a general structure.
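The initialize-to-2 example can be demonstrated with the standard `java.util.concurrent.Semaphore` (acquire/release play the P/V roles; the occupancy bookkeeping is ours, added just to verify the bound):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// A counting semaphore initialized to 2 admits at most two threads
// into the protected region at once. We track the peak number of
// threads inside to confirm the bound.
public class CountingDemo {
    public static int peakOccupancy(int threads) {
        Semaphore sem = new Semaphore(2);          // at most 2 holders
        AtomicInteger inside = new AtomicInteger(0);
        AtomicInteger peak = new AtomicInteger(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    sem.acquire();                 // P: block if 2 already inside
                    int now = inside.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(10);              // hold the region briefly
                    inside.decrementAndGet();
                    sem.release();                 // V: admit a waiter
                } catch (InterruptedException e) { }
            });
        }
        for (Thread t : ts) t.start();
        try { for (Thread t : ts) t.join(); } catch (InterruptedException e) { }
        return peak.get();                         // never exceeds 2
    }

    public static void main(String[] args) {
        System.out.println(peakOccupancy(8) <= 2); // prints true
    }
}
```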
Locks
• A lock is a familiar synchronization construct.
– Consider locks on doors that have only one key.
– Someone obtains key, is able to unlock (“acquire”) the
lock.
– Others must wait for the key to be released before they
can unlock the lock themselves.
• Locks can be implemented using semaphores.
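That implementation is a few lines (our sketch, not a standard API): a binary semaphore initialized to 1 is the single key; acquiring it locks the door, releasing it hands the key back.

```java
import java.util.concurrent.Semaphore;

// A lock built from a binary semaphore: the semaphore's one permit
// is the "key". lock() takes the key (blocking if someone holds it);
// unlock() returns it.
public class SemaphoreLock {
    private final Semaphore key = new Semaphore(1);   // exactly one key

    public void lock() throws InterruptedException { key.acquire(); }
    public void unlock()                           { key.release(); }

    // Mutual exclusion check: two threads share the lock, no lost updates.
    public static int exercise(int perThread) {
        SemaphoreLock lock = new SemaphoreLock();
        final int[] count = {0};
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                try { lock.lock(); count[0]++; lock.unlock(); }
                catch (InterruptedException e) { return; }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return count[0];     // always 2 * perThread
    }

    public static void main(String[] args) {
        System.out.println(exercise(10_000));  // prints 20000
    }
}
```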
Monitors
• Introduced by Per Brinch Hansen.
• An early object-oriented approach to
synchronization.
– Encapsulate synchronization structures along with the
data they are to protect and the routines that will
operate on this data.
• Tames the complexity of synchronization within
complex codes.
– Avoid scattering synchronization logic all over a large
code.
Monitors
• Example: Bank account object
– Data: Balance of account.
– Synchronization structure: Lock or semaphore.
– Operations: Withdraw, deposit.
• Data and synchronization structures are “private”
to the monitor.
– Access must go through routines provided by the
monitor.
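The bank-account monitor can be sketched in Java (our code): the balance is private and every operation is synchronized, so callers cannot bypass the synchronization.

```java
// Bank-account monitor: private data plus synchronized routines.
// All access goes through deposit/withdraw/getBalance, so the
// synchronization cannot be circumvented by callers.
public class Account {
    private long balance;                 // private to the monitor

    public Account(long initial) { balance = initial; }

    public synchronized void deposit(long amount)  { balance += amount; }
    public synchronized void withdraw(long amount) { balance -= amount; }
    public synchronized long getBalance()          { return balance; }

    // Concurrent deposits and withdrawals interleave safely.
    public static long exercise(int perThread) {
        Account acct = new Account(0);
        Thread d = new Thread(() -> {
            for (int i = 0; i < perThread; i++) acct.deposit(2); });
        Thread w = new Thread(() -> {
            for (int i = 0; i < perThread; i++) acct.withdraw(1); });
        d.start(); w.start();
        try { d.join(); w.join(); } catch (InterruptedException e) { }
        return acct.getBalance();         // deterministically perThread
    }

    public static void main(String[] args) {
        System.out.println(exercise(10_000)); // prints 10000
    }
}
```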
Monitors
• An important feature of monitors beyond encapsulation is the introduction of condition variables.
• Condition variables allow threads to signal each
other.
– Beyond simple lock semantics.
Monitors
• Condition variable signaling
– A thread executing within a monitor routine may reach a state where it must block and yield the monitor to other threads, then later pick up where it left off and continue.
– Two operations are provided for condition variables.
• Wait()
• Signal()
Monitors
• A thread may wait on a condition variable.
– Yield the synchronization structures for the monitor to
allow other threads in.
– Block waiting for another thread to signal it.
• Another thread may later signal threads blocked
and waiting on a condition variable.
– Signaled threads will wake up and attempt to reacquire
the synchronization structures of the monitor.
– When they succeed, they pick up immediately after the
point where they wait()-ed.
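This wait/signal discipline can be sketched with a one-slot buffer, using Java's monitor operations (wait()/notifyAll() standing in for Wait()/Signal(); the class is our example): a consumer blocks and yields the monitor until a producer signals that data is ready, and vice versa.

```java
// One-slot producer/consumer buffer built on a monitor with
// condition-variable-style signaling. A thread that cannot proceed
// calls wait(), yielding the monitor; a thread that changes the
// state calls notifyAll() to wake blocked waiters, which reacquire
// the monitor and re-check their condition before continuing.
public class OneSlotBuffer {
    private Integer slot = null;               // null means empty

    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) wait();           // full: yield monitor and block
        slot = value;
        notifyAll();                           // signal waiting consumers
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();           // empty: yield monitor and block
        int value = slot;
        slot = null;
        notifyAll();                           // signal waiting producers
        return value;
    }

    // A producer hands `items` values to a consumer, one at a time.
    public static int exercise(int items) {
        OneSlotBuffer buf = new OneSlotBuffer();
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= items; i++) buf.put(i); }
            catch (InterruptedException e) { } });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < items; i++) sum[0] += buf.take(); }
            catch (InterruptedException e) { } });
        producer.start(); consumer.start();
        try { producer.join(); consumer.join(); } catch (InterruptedException e) { }
        return sum[0];                         // 1 + 2 + ... + items
    }

    public static void main(String[] args) {
        System.out.println(exercise(100));     // prints 5050
    }
}
```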
Monitors and Java
• Java objects provide monitor-like semantics.
• synchronized methods require acquisition of a lock associated with the object instance providing the method.
– Java automatically provides the lock.
• wait() and notify() provide condition variable
operations.
Transactions
• Locks, semaphores, and monitors provide a
pessimistic view of synchronization.
– Assume a conflict will occur, so restrict access to critical
sections.
• Transactions take an optimistic view.
– Assume no conflict will occur, and check after critical
operations to make sure no conflict occurred during
execution.
– If a conflict is detected, provide a mechanism to undo
operations that conflicted.
Transactions
• Transactions are sequences of operations with a
start and an end.
• When the sequence ends, the transaction
software layer checks to make sure no other
thread of execution conflicted with the transaction.
– If no conflict occurred, the transaction is declared
successful and the results of the operation are
“committed”.
– If not, a set of operations is executed to roll back the transaction.
• Go back to a state where we can pretend the transaction
operations never occurred.
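The optimistic pattern can be sketched in miniature with compare-and-set (our sketch of the skeleton that transactional systems elaborate on; this is not an STM API): snapshot the state, compute speculatively, then commit only if nothing changed in between, otherwise discard and retry.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic concurrency in miniature: read a starting state, compute
// the new state speculatively, then "commit" with compareAndSet only
// if no other thread changed the state meanwhile. On conflict, the
// speculative result is discarded (rolled back) and the work retried.
public class OptimisticCounter {
    static final AtomicInteger balance = new AtomicInteger(0);

    public static void add(int amount) {
        while (true) {
            int seen = balance.get();              // start: snapshot the state
            int proposed = seen + amount;          // speculative computation
            if (balance.compareAndSet(seen, proposed)) {
                return;                            // no conflict: committed
            }
            // conflict detected: another thread moved balance; retry
        }
    }

    public static int exercise(int perThread) {
        balance.set(0);
        Runnable work = () -> { for (int i = 0; i < perThread; i++) add(1); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return balance.get();                      // 2 * perThread despite conflicts
    }

    public static void main(String[] args) {
        System.out.println(exercise(100_000));     // prints 200000
    }
}
```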
Transactions: ACID
• Transactions come from the database world, which defines the requirements of a transaction as:
– Atomicity: Transactions represent sequences of
operations intended to be atomic.
– Consistency: A system that starts in a consistent state
will remain in a consistent state after the transaction.
– Isolation: Intermediate states during the execution of a
transaction will not be visible to other threads.
– Durability: Once a transaction completes, the system guarantees that the result will persist.
Transactions
• The ACID requirements do not apply to all
transactional systems.
– Example: software transactional memory
• Durability makes sense for a database that is
backed by tape or other robust storage.
– Makes little sense though for systems that work only in
volatile memory. STM systems do not guarantee
survival through power outages.
– In general, such transactional systems relax this constraint for practical reasons.