Computer Deadlock – What it is, How to Prevent it

In Computer Software by Contributor

This is a paper written by Marko Bekric for South East European University. It was submitted to Electronics Fanboy by the author for posting.

______________________________

TABLE OF CONTENTS

 

Abstract

Introduction

General about deadlocks

Avoidance

Avoidance Algorithms

Resource Trajectories

Safe and unsafe states

The Banker's Algorithm for a single resource

The Banker's Algorithm for multiple resources

Conclusion

References

Abstract

Deadlock is a situation in which each process in a set is waiting for an event that only another process in the set can cause. In other words, two or more competing actions are each waiting for the other to finish, so neither ever does. Deadlock is a common problem in multiprocessing systems.

1.  INTRODUCTION

According to Andrew S. Tanenbaum, there are two kinds of resources: preemptable and nonpreemptable. A preemptable resource can be taken away from the process that owns it without ill effects; memory is an example. A nonpreemptable resource is the opposite: it cannot be taken away from its owner without causing the computation to fail. A DVD drive is an example: if the device is taken away from a process in the middle of an operation and given to another, the result is useless. Deadlocks therefore involve nonpreemptable resources, and to avoid them processes must share resources in an orderly fashion. The sequence of events required to use a resource is:
1. Request the resource.
2. Use the resource.
3. Release the resource.
Tanenbaum notes that if a resource is not available when it is needed, the requesting process is forced to wait: depending on the system, the process may be blocked until the resource is free, or it may sit in a loop, sleeping and then waking up to try the request again.
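As a concrete illustration, here is a minimal sketch of this request/use/release cycle in Python, using a threading.Lock as a stand-in for an arbitrary single-instance resource; the function names and the retry interval are illustrative only, not part of any particular system.

import threading
import time

printer = threading.Lock()  # stands in for any single-instance resource

def use_printer_blocking():
    printer.acquire()            # 1. request the resource (blocks until it is free)
    try:
        print("printing...")     # 2. use the resource
    finally:
        printer.release()        # 3. release the resource

def use_printer_polling():
    # The polling variant described above: sleep, wake up, and try again.
    while not printer.acquire(blocking=False):
        time.sleep(0.1)
    try:
        print("printing...")
    finally:
        printer.release()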

1.2 General about deadlocks

Research published in 1971 by Edward G. Coffman showed that four conditions must all hold for a deadlock to occur:

1. Processes must claim exclusive control of the resources they require. At least one resource must be non-sharable, meaning that if another process requests an occupied resource, the requesting process is made to wait until the resource is free.
2. Processes hold resource units while waiting to acquire others held by other processes.
3. Resources can be released only by the processes holding them, i.e., they cannot be preempted. A resource is released only when the holding process voluntarily gives it up.
4. Circular wait. There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.

All four of these conditions must hold for a deadlock to occur; if even one of them is absent, a deadlock cannot happen. We will discuss how deadlocks can be avoided.

Here is an example of a deadlock:

[Figure 1: Deadlock between tasks T1 and T2 over resources R1 and R2]

  • Task T1 has a lock on resource R1 (indicated by the arrow from R1 to T1) and has requested a lock on resource R2 (indicated by the arrow from T1 to R2).

 

  • Task T2 has a lock on resource R2 (indicated by the arrow from R2 to T2) and has requested a lock on resource R1 (indicated by the arrow from T2 to R1).

 

  • Because neither task can continue until a resource is available and neither resource can be released until a task continues, a deadlock state exists.

 

We will now look at how this kind of situation, called a deadlock, can be avoided.
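The T1/T2 situation is easy to reproduce in code. The following sketch is a hypothetical two-thread program (only the resource names R1 and R2 come from the example above); run as-is it will normally hang forever, because each thread ends up waiting for the lock the other one holds.

import threading
import time

R1 = threading.Lock()
R2 = threading.Lock()

def task_t1():
    with R1:                # T1 locks R1
        time.sleep(0.1)     # give T2 time to lock R2
        with R2:            # T1 now waits for R2, which T2 holds
            pass

def task_t2():
    with R2:                # T2 locks R2
        time.sleep(0.1)
        with R1:            # T2 now waits for R1, which T1 holds: circular wait
            pass

# Starting both threads typically deadlocks the program, so the calls are
# left commented out:
# threading.Thread(target=task_t1).start()
# threading.Thread(target=task_t2).start()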

2. AVOIDANCE

In deadlock detection it is tacitly assumed that when a process asks for resources, it asks for them all at once. In most systems, however, resources are requested one at a time. The system must therefore be able to decide whether granting a particular resource is safe or not, and make the allocation only when it is safe. There is an algorithm that can always avoid deadlock by making the right choice every time, but only if certain information is available in advance.

 

Having seen that deadlock avoidance is essentially impossible, because it requires information about future requests, which is not known, how do real systems avoid deadlock? The answer is to go back to the four conditions stated by Coffman et al. (1971) to see if they can provide a clue. If we can ensure that at least one of these conditions is never satisfied, then deadlocks will be structurally impossible (Havender, 1968).

 

Attacking the Mutual Exclusion Condition

First let us attack the mutual exclusion condition. If no resource were ever assigned exclusively to a single process, we would never have deadlocks. However, it is equally clear that allowing two processes to write on the printer at the same time will lead to chaos. By spooling printer output, several processes can generate output at the same time. In this model, the only process that actually requests the physical printer is the printer daemon. Since the daemon never requests any other resources, we can eliminate deadlock for the printer.

 

Unfortunately, not all devices can be spooled (the process table does not lend itself well to being spooled). Furthermore, competition for disk space for spooling can itself lead to deadlock. What would happen if two processes each filled up half of the available spooling space with output and neither was finished producing output? If the daemon was programmed to begin printing even before all the output was spooled, the printer might lie idle if an output process decided to wait several hours after the first burst of output. For this reason, daemons are normally programmed to print only after the complete output file is available. In this case we have two processes that have each finished part, but not all, of their output, and cannot continue. Neither process will ever finish, so we have a deadlock on the disk.

 

Nevertheless, there is a germ of an idea here that is frequently applicable. Avoid assigning a resource when that is not absolutely necessary, and try to make sure that as few processes as possible may actually claim the resource.

 

 

 

Attacking the Hold and Wait Condition

The second of the conditions stated by Coffman et al. looks slightly more promising. If we can prevent processes that hold resources from waiting for more resources, we can eliminate deadlocks. One way to achieve this goal is to require all processes to request all their resources before starting execution. If everything is available, the process will be allocated whatever it needs and can run to completion. If one or more resources are busy, nothing will be allocated and the process will simply wait.

 

An immediate problem with this approach is that many processes do not know how many resources they will need until they have started running. In fact, if they knew, the banker’s algorithm could be used. Another problem is that resources will not be used optimally with this approach. Take, as an example, a process that reads data from an input tape, analyzes it for an hour, and then writes an output tape as well as plotting the results. If all resources must be requested in advance, the process will tie up the output tape drive and the plotter for an hour.

 

Nevertheless, some mainframe batch systems require the user to list all the resources on the first line of each job. The system then acquires all resources immediately and keeps them until the job finishes. While this method puts a burden on the programmer and wastes resources, it does prevent deadlocks.

 

A slightly different way to break the hold-and-wait condition is to require a process requesting a resource to first temporarily release all the resources it currently holds. Then it tries to get everything it needs all at once.
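A sketch of how this all-at-once style of requesting might look in code is given below; acquire_all_or_none is a hypothetical helper, not a standard library call. The important property is that a process never waits while holding anything.

import threading
import time

def acquire_all_or_none(locks):
    """Try to take every lock; if any is busy, give back whatever was taken."""
    taken = []
    for lock in locks:
        if lock.acquire(blocking=False):
            taken.append(lock)
        else:
            for held in taken:        # back off completely: no hold-and-wait
                held.release()
            return False
    return True

def run_with_all_resources(locks, work):
    while not acquire_all_or_none(locks):
        time.sleep(0.05)              # wait while holding nothing at all
    try:
        work()
    finally:
        for lock in locks:
            lock.release()

# Example: a job that needs both a printer and a plotter before it starts.
printer, plotter = threading.Lock(), threading.Lock()
run_with_all_resources([printer, plotter], lambda: print("printing and plotting"))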

 

Attacking the No Preemption Condition

Attacking the third condition (no preemption) is even less promising than attacking the second one. If a process has been assigned the printer and is in the middle of printing its output, forcibly taking away the printer because a needed plotter is not available is tricky at best and impossible at worst.

 

Attacking the Circular Wait Condition

Only one condition is left. The circular wait can be eliminated in several ways. One way is simply to have a rule saying that a process is entitled only to a single resource at any moment. If it needs a second one, it must release the first one. For a process that needs to copy a huge file from a tape to a printer, this restriction is unacceptable.

 

Another way to avoid the circular wait is to provide a global numbering of all the resources. Now the rule is this: processes can request resources whenever they want to, but all requests must be made in numerical order. A process may request first a printer and then a tape drive, but it may not request first a plotter and then a printer.
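This ordering rule is easy to enforce mechanically: give every resource a fixed number and always acquire locks in ascending numerical order, regardless of the order in which the caller names them. The resource names and numbers in the sketch below are assumptions made for the example.

import threading

# A fixed global numbering of the resources (the particular numbers are arbitrary).
RESOURCE_ORDER = {"printer": 1, "tape_drive": 2, "plotter": 3}
LOCKS = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(*names):
    """Acquire the named resources in ascending global order.

    Because every process requests resources in the same numerical order,
    no circular chain of waiting processes can ever form.
    """
    ordered = sorted(names, key=RESOURCE_ORDER.__getitem__)
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release_all(names):
    for name in names:
        LOCKS[name].release()

# A process that needs the plotter and the printer still locks the printer first:
held = acquire_in_order("plotter", "printer")
release_all(held)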

 

2.1 Avoidance Algorithms

The basic approach to avoiding (dodging) deadlock in a principled manner consists of:

 

1. Constructing the needs-holds graph, with one vertex for each process and one for each type of resource, and a number attached to each resource-type vertex giving how many units of that type exist.
2. Looking for a cycle in that graph.

 

Algorithm I

 

Simply detects the existence of a cycle:

A) Start at any vertex and find all its immediate neighbors.

B) From each of these, find all their immediate neighbors, and so on.

C) Continue until a vertex repeats (there is a cycle) or no new vertices can be reached (there is no cycle).

Algorithm I is conclusive only if there is exactly one resource of each type (unique resources). If there is more than one resource of a given type (equivalent resources), any of which will satisfy a need equally well, then finding a cycle is not sufficient to show deadlock. In that case an approach in which processes that can still proceed, together with their holds and needs, are systematically removed from the graph does work.
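Algorithm I amounts to a cycle search in a directed graph. Here is a minimal sketch with the graph represented as an adjacency list; the example graph encodes the T1/T2 situation from the introduction, and everything else is illustrative.

def has_cycle(graph):
    """Return True if the directed needs-holds graph contains a cycle."""
    on_path, finished = set(), set()

    def visit(vertex):
        if vertex in on_path:        # reached a vertex already on the current path
            return True
        if vertex in finished:
            return False
        on_path.add(vertex)
        for neighbor in graph.get(vertex, []):
            if visit(neighbor):
                return True
        on_path.remove(vertex)
        finished.add(vertex)
        return False

    return any(visit(vertex) for vertex in graph)

# T1 holds R1 and needs R2; T2 holds R2 and needs R1: a cycle, hence deadlock.
graph = {"R1": ["T1"], "T1": ["R2"], "R2": ["T2"], "T2": ["R1"]}
print(has_cycle(graph))  # True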

 

Algorithm II

 

On a copy of the graph:

A) See if any process's needs can all be satisfied.

B) If so, satisfy those needs, then remove that process and all the resources it holds from the graph.

C) If any processes are left, repeat step A. If all processes are eventually removed by this procedure, there is no deadlock in the original graph; if not, there is. Once the cause of a deadlock has been identified, one can recover by taking holds away from a process that cannot continue in any case and giving them to another process (preemption), which may thereby complete and release the resources it holds.
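A sketch of Algorithm II for the equivalent-resources case follows. The holds and needs tables are hypothetical; the function returns True when some processes can never have their needs satisfied, i.e. when the graph cannot be fully reduced.

def is_deadlocked(holds, needs, available):
    """Graph-reduction test: holds/needs map each process to {resource: units}."""
    free = dict(available)
    remaining = set(holds)
    progress = True
    while progress and remaining:
        progress = False
        for proc in list(remaining):
            # A) Can all of this process's needs be satisfied right now?
            if all(free.get(r, 0) >= n for r, n in needs[proc].items()):
                # B) Assume it runs to completion and returns everything it holds.
                for r, n in holds[proc].items():
                    free[r] = free.get(r, 0) + n
                remaining.remove(proc)
                progress = True      # C) repeat step A on the reduced graph
    return bool(remaining)           # leftover processes can never be satisfied

holds = {"P1": {"R": 1}, "P2": {"R": 1}}
needs = {"P1": {"R": 1}, "P2": {}}
print(is_deadlocked(holds, needs, available={"R": 0}))  # False: P2 finishes, then P1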

These algorithms can be extended from simply testing for deadlock to avoiding it. Before satisfying any need or granting any hold, one could use either algorithm above to determine whether that addition would cause deadlock; if so, the request is refused. [2]

2.2 Resource Trajectories

 

The algorithms for doing deadlock avoidance that we mentioned above are based on the concept of safe states.

[Figure 2: Resource trajectories of two processes competing for a printer and a plotter]


Here we see a model for dealing with two processes and two resources, for example, a printer and a plotter. The horizontal axis represents the number of instructions executed by process A. The vertical axis represents the number of instructions executed by process B. At I1 A requests a printer; at I2 it needs a plotter. The printer and plotter are released at I3 and I4, respectively. Process B needs the plotter from I5 to I7 and the printer from I6 to I8.

Every point in the diagram represents a joint state of the two processes. Initially, the state is at p, with neither process having executed any instructions. If the scheduler chooses to run A first, we get to the point q, in which A has executed some number of instructions, but B has executed none. At point q the trajectory becomes vertical, indicating that the scheduler has chosen to run B. With a single processor, all paths must be horizontal or vertical, never diagonal. Furthermore, motion is always to the north or east, never to the south or west (processes cannot run backward).

When A crosses the I1 line on the path from r to s, it requests and is granted the printer. When B reaches point t, it requests the plotter.

You can see that there are regions that are shaded. The region with lines going from southwest to northeast shows both processes having the printer. The mutual exclusion rule makes it impossible to enter this region. Similarly, the region shaded the other way represents both processes having the plotter, and is equally impossible.

If the system ever enters the box bounded by I1 and I2 on the sides and by I5 and I6 on the top and bottom, it will eventually deadlock when it gets to the intersection of I2 and I6. At this point A is requesting the plotter and B is requesting the printer, and both are already assigned. The entire box is unsafe and must not be entered. At point t the only safe thing to do is run process A until it gets to I4. Beyond that, any trajectory to u will do. [3]

The important thing to see here is at point t B is requesting a resource. The system must decide whether to grant it or not. If the grant is made, the system will enter an unsafe region and eventually deadlock. To avoid the deadlock, B should be suspended until A has requested and released the plotter.

2.3 Safe and unsafe states

At any instant of time, there is a current state consisting of E (the existing resources), A (the available resources), C (the current allocations), and R (the outstanding requests). A state is said to be safe if it is not deadlocked and there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately. It is easiest to illustrate this concept by an example using one resource. In the figure below, under (a), we have a state in which A has 3 instances of the resource but may need as many as 9 eventually. B currently has 2 and may need 4 altogether, later. Similarly, C also has 2 but may need an additional 5. A total of 10 instances of the resource exist, so with 7 resources already allocated, there are 3 still free.

[Figure 3: A safe state and the sequence of states (a) through (e) showing how every process can complete]

The illustration under (a) shows us that the state is safe because there exists a sequence of allocations that allows all processes to complete. Namely, the scheduler could simply run B exclusively, until it asked for and got two more instances of the resource, leading to the state of (b). When B completes, we get the state of (c). Then the scheduler can run C leading eventually to (d). When C completes, we get (e). Now A can get the six instances of the resource it needs and also complete. Thus the state of (a) is safe because the system, by careful scheduling, can avoid deadlock. This could be a successful way to avoid deadlocks.
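The reasoning above can be checked mechanically. Below is a minimal single-resource safety check; the numbers for A, B and C are taken from the example in the text, while the function itself is only an illustrative sketch.

def is_safe(has, max_need, free):
    """Single-resource check: can every process still run to completion?"""
    has = dict(has)
    while has:
        # Find some process whose remaining need fits in the free instances.
        runnable = next((p for p in has if max_need[p] - has[p] <= free), None)
        if runnable is None:
            return False           # nobody can finish: the state is unsafe
        free += has.pop(runnable)  # it finishes, so its holding returns to the pool
    return True

# State (a): A has 3 of at most 9, B has 2 of 4, C has 2 of 7, and 3 are free.
print(is_safe({"A": 3, "B": 2, "C": 2}, {"A": 9, "B": 4, "C": 7}, free=3))  # True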

 

To conclude about the safe and unsafe states, we should mention that:

  • A state is safe if it is not deadlocked and there is a scheduling order in which all processes can complete, even if they all ask for all of their resources at once;
  • An unsafe state is not a deadlocked state, but no such scheduling order exists;
  • An unsafe state may still work out, if a process happens to release resources at the right time.

2.4 The Banker’s Algorithm for a single resource

A scheduling algorithm that can avoid deadlocks was devised by Dijkstra (1965); it is known as the banker's algorithm [1] and is an extension of the deadlock detection algorithm. It is modeled on the way a small-town banker might deal with a group of customers to whom he has granted lines of credit. What the algorithm does is check whether granting a request leads to an unsafe state. If it does, the request is denied; if granting the request leads to a safe state, it is carried out. In the figure below, under (a), we see four customers, A, B, C and D, each of whom has been granted a certain number of credit units (e.g., 1 unit is 1K dollars). The banker knows that not all customers will need their maximum credit immediately, so he has reserved only 10 units rather than 22 to service them. (In this analogy, customers are processes, units are, say, tape drives, and the banker is the operating system.) [1][3]

[Figure 4]

This picture illustrates the three resource allocation states:

(a) Safe. (b) Safe. (c) Unsafe.
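The banker's decision rule can be sketched on top of the single-resource safety check from section 2.3 (repeated here so the example is self-contained): tentatively grant the request and keep it only if the resulting state is still safe. The concrete credit lines and holdings below are assumptions, chosen only so that the totals match the 22 units of credit and 10 units of capital mentioned above; the original figure is not reproduced here.

def is_safe(has, max_need, free):
    """Single-resource safe-state check, as in section 2.3."""
    has = dict(has)
    while has:
        runnable = next((p for p in has if max_need[p] - has[p] <= free), None)
        if runnable is None:
            return False
        free += has.pop(runnable)
    return True

def grant_request(has, max_need, free, customer, amount):
    """Banker's rule for one resource: grant only if the new state remains safe."""
    if amount > max_need[customer] - has[customer] or amount > free:
        return False                     # exceeds the credit line or the free units
    trial = dict(has)
    trial[customer] = has[customer] + amount
    if is_safe(trial, max_need, free - amount):
        has[customer] += amount          # safe: make the allocation for real
        return True
    return False                         # unsafe: the customer has to wait

# Four customers with credit lines summing to 22 units; the banker holds 10 units.
max_need = {"A": 6, "B": 5, "C": 4, "D": 7}
has = {"A": 1, "B": 1, "C": 2, "D": 4}   # a safe state with 2 units still free
free = 10 - sum(has.values())
print(grant_request(has, max_need, free, "B", 1))  # False: that would be unsafe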

 

2.5 The Banker's Algorithm for multiple resources

 

The Banker’s algorithm can also handle multiple resources. Here we have a picture that illustrates that situation.

[Figure 5: The banker's algorithm with multiple resources]

In the picture illustrated above, we have two matrices. The one on the left shows how many of each resource are currently assigned to each of the five processes. The matrix on the right shows how many resources each process still needs in order to complete. As in the single-resource case, processes must state their total resource needs before executing, so that the system can compute the right-hand matrix at each instant. The three vectors at the right of the figure show the existing resources, E, the possessed resources, P, and the available resources, A, respectively. From E we see that the system has six tape drives, three plotters, four printers, and two CD-ROM drives. Of these, five tape drives, three plotters, two printers, and two CD-ROM drives are currently assigned; this can be seen by adding up the four resource columns in the left-hand matrix. The available resource vector is simply the difference between what the system has and what is currently in use.

The algorithm for checking whether a state is safe can now be stated:

1. Look for a row, R, whose unmet resource needs are all smaller than or equal to A. If no such row exists, the system will eventually deadlock since no process can run to completion.

2. Assume the process of the row chosen requests all the resources it needs (which is guaranteed to be possible) and finishes. Mark that process as terminated and add all its resources to the A vector.

3. Repeat steps 1 and 2 until either all processes are marked terminated, in which case the initial state was safe, or a deadlock occurs, in which case it was not.

If several processes are eligible to be chosen in step 1, it does not matter which one is selected: the pool of available resources either gets larger or, at worst, stays the same.

The banker's algorithm was first published by Dijkstra in 1965. Since that time, nearly every book on operating systems has described it in detail, and innumerable papers have been written about various aspects of it. Unfortunately, few authors have had the audacity to point out that although in theory the algorithm is wonderful, in practice it is essentially useless, because processes rarely know in advance what their maximum resource needs will be. In addition, the number of processes is not fixed but varies dynamically as new users log in and out. Furthermore, resources that were thought to be available can suddenly vanish (tape drives can break). Thus in practice, few, if any, existing systems use the banker's algorithm to avoid deadlocks [1][3].
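The safety check just described translates almost line for line into code. The matrices in the sketch below are an assumed example, chosen to be consistent with the totals given in the text (E = (6, 3, 4, 2), P = (5, 3, 2, 2), and therefore A = (1, 0, 2, 0) for tape drives, plotters, printers and CD-ROM drives).

def state_is_safe(allocation, still_needs, available):
    """Multi-resource banker's safety check.

    allocation[p] and still_needs[p] are the per-process resource vectors
    (the rows of the two matrices); available is the A vector.
    """
    free = list(available)
    unfinished = set(range(len(allocation)))
    while unfinished:
        # Step 1: look for a row whose unmet needs are all <= A.
        row = next((p for p in unfinished
                    if all(n <= f for n, f in zip(still_needs[p], free))), None)
        if row is None:
            return False        # no such row: the system will eventually deadlock
        # Step 2: assume that process finishes and returns everything it holds.
        free = [f + a for f, a in zip(free, allocation[row])]
        unfinished.remove(row)  # Step 3: repeat with the remaining processes
    return True                 # every process could be marked as terminated

# Columns: tape drives, plotters, printers, CD-ROM drives (example values assumed).
allocation  = [[3, 0, 1, 1], [0, 1, 0, 0], [1, 1, 1, 0], [1, 1, 0, 1], [0, 0, 0, 0]]
still_needs = [[1, 1, 0, 0], [0, 1, 1, 2], [3, 1, 0, 0], [0, 0, 1, 0], [2, 1, 1, 0]]
print(state_is_safe(allocation, still_needs, available=[1, 0, 2, 0]))  # True: safe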

3. CONCLUSION

We have seen how deadlocks can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system checks whether granting it would put the system into an unsafe state, meaning a state that could result in deadlock, and it grants only those requests that lead to safe states. In order to decide whether the next state will be safe or unsafe, the system must know at any time the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the banker's algorithm, which requires each process's maximum resource usage to be known in advance. For many systems, however, it is impossible to know in advance what every process will request, which means that deadlock avoidance is often impossible in practice.

REFERENCES

[1] A. S. Tanenbaum, Modern Operating Systems, 2nd ed.

[2] M. C. Paull, OS Deadlocks, 2005.

[3] B. Fetaji and M. Fetaji, Operativni Sistemi (Operating Systems), Contemporary Sciences and Technologies Faculty.