OS: Notes on Chapter 5
Concurrency Problems with Out-of-Order Execution
Concurrency is nowadays one of the most difficult aspects of system programming. Ad hoc techniques are vulnerable to subtle programming errors whose effects are difficult to reproduce and diagnose. In short, concurrency errors occur with very low probability, which makes them hard to reproduce and debug.
Concurrency
The central themes of OS design are all concerned with the management of processes and threads. Concurrency is not the same as parallelism: concurrency means that multiple tasks are in progress during the same period of time, while parallelism means that they execute at the same instant. In summary, concurrency is a logical concept and parallelism is a physical one.
The basic principle of concurrency is the interleaving and overlapping of processes.
This chapter focuses on uniprocessor concurrency.
On a uniprocessor, the relative speed of execution of processes cannot be predicted: it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.
Example:
```c
#include <stdio.h>

char chin, chout;      /* shared by both processes */

void echo() {
    chin = getchar();  /* read into the shared variable */
    chout = chin;
    putchar(chout);    /* write the shared variable out */
}
```
Suppose two processes run the same code above. Their interleaved execution may produce unexpected results: for example, process 1 may output the character that process 2 read, and so on.
This problem is caused by shared read/write memory access: both processes read and write the same memory locations (chin and chout). Because interrupts arrive at unpredictable moments, the order of accesses to the shared memory is unpredictable; and because the code is so short, in most circumstances no error occurs at all. This is why concurrency bugs are hard to reproduce and debug. The solution to this problem is mutual exclusion.
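To make the hazard concrete, here is a minimal, hypothetical demo (not from the chapter, which discusses processes; threads are used here only because they share memory in a few lines of standard C). The argument passed to each thread stands in for getchar(), and the deliberate lack of any locking means the shared chin/chout can be overwritten between the copy and the output; as noted above, the race manifests only rarely.

```c
#include <stdio.h>
#include <pthread.h>

char chin, chout;                    /* shared and unprotected */

void *echo_thread(void *arg) {
    chin = *(char *)arg;             /* stands in for getchar() */
    /* a context switch here lets the other thread overwrite chin */
    chout = chin;
    putchar(chout);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    char a = 'x', b = 'y';
    pthread_create(&t1, NULL, echo_thread, &a);
    pthread_create(&t2, NULL, echo_thread, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    putchar('\n');                   /* usually "xy" or "yx", but "xx" or "yy" is possible */
    return 0;
}
```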
Concurrency raises many design issues, and the OS must be able to:
- keep track of various processes
- allocate and deallocate resources for each active process, such as processor time, memory, files, and I/O devices
- protect the data and physical resources of each process against interference by other processes
- ensure that the results and outputs of each process are independent of its execution speed relative to other concurrent processes
Process Interaction
A process can be unaware of, indirectly aware of, or directly aware of the existence of other processes, and the interaction between processes can be cooperative or competitive.
| Degree of awareness | Interaction |
|---|---|
| Unaware | Independent, competitive |
| Indirectly aware | Independent, cooperative by sharing data |
| Directly aware | Cooperative by communication |
Mutual Exclusion
Mutual exclusion is implemented with a pair of primitives such as `entercritical()` and `exitcritical()`:

```c
entercritical(Ra);
// critical section
exitcritical(Ra);
```

This is similar to the `cli` and `sti` instructions (disabling and re-enabling interrupts): if a process loops forever inside the critical section, the system hangs unbreakably, and if `cli`/`sti` pairs are scattered throughout the code, execution effectively falls back to a single process at a time.
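As a minimal sketch of that idea (not code from the chapter), the two primitives can be realised on a uniprocessor by disabling and re-enabling interrupts; the inline assembly assumes x86 and kernel privilege, and the function bodies are illustrative only.

```c
/* Sketch only: on a uniprocessor, disabling interrupts prevents any other
   process from being scheduled, so nothing else can enter the critical section. */
static inline void entercritical(int resource) {
    (void)resource;              /* interrupt disabling ignores which resource is locked */
    __asm__ volatile ("cli");    /* x86: clear the interrupt flag */
}

static inline void exitcritical(int resource) {
    (void)resource;
    __asm__ volatile ("sti");    /* x86: set the interrupt flag again */
}
```

Because the resource argument is ignored, every such critical section excludes everything else on the machine, which is exactly why heavy use of `cli`/`sti` degrades the system to effectively one process at a time.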
Semaphore
A semaphore can be initialised to a nonnegative integer value, and it can be incremented or decremented by one. The semaphore primitives are defined as:
```c
struct semaphore {
    int count;
    queueType queue;
};

// In this pseudocode, s is passed by reference and both primitives
// are assumed to execute atomically.
void semWait(semaphore s) {
    --s.count;
    if (s.count < 0) {
        // add this process to s.queue
        // block this process
    }
}

void semSignal(semaphore s) {
    ++s.count;
    if (s.count <= 0) {
        // remove a process P from s.queue
        // wakeup P
    }
}
```
There is also a binary semaphore, which can only take the values 0 and 1; its definition is sketched below.
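A sketch in the same pseudocode style as the counting semaphore above; the names `semWaitB` and `semSignalB` follow the usual textbook convention and are assumptions here.

```c
struct binary_semaphore {
    enum { ZERO, ONE } value;
    queueType queue;
};

void semWaitB(binary_semaphore s) {
    if (s.value == ONE) {
        s.value = ZERO;
    } else {
        // add this process to s.queue
        // block this process
    }
}

void semSignalB(binary_semaphore s) {
    if (/* s.queue is empty */) {
        s.value = ONE;
    } else {
        // remove a process P from s.queue
        // wakeup P
    }
}
```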
We can then use a semaphore to protect shared data. Suppose a semaphore `lock` is initialised to 1. When a process wants to enter the critical section, it calls `semWait(lock)`; `lock.count` becomes 0 and the process enters. Any further process that calls `semWait(lock)` drives the count below zero and is pushed onto the queue and blocked.
Suppose that while the 1st process occupies the critical section, the 2nd and 3rd processes arrive; `lock.count` becomes -1 and then -2, and both are blocked. When the 1st process leaves the critical section and calls `semSignal(lock)`, `lock.count` becomes -1 and the 2nd process is woken up and enters the critical section. When the 2nd process leaves, `lock.count` becomes 0 and the 3rd process is woken up and enters the critical section.
In this example, the 1st process entered the critical section while the semaphore was positive, but the 2nd and 3rd processes entered it while the count was nonpositive; this is fine because they were already waiting in the queue, like holding a booked ticket.
In fact, the operating system's primary objective is to make good use of the CPU, but as more mechanisms like this are introduced, the OS becomes more and more complicated.
Just as the semaphore keeps a queue of blocked processes for concurrency, the OS also keeps blocking queues for I/O, which work in a similar way.
A common pattern for mutual exclusion with a semaphore looks like:
```c
const int n = /* number of processes */;
semaphore s = 1;

void F(int i) {
    while (true) {
        semWait(s);
        /* critical section */
        semSignal(s);
    }
}
```
Buffer, Producer, and Consumer
There are many kinds of buffers in computer science; a buffer temporarily stores data to smooth out differences in access speed. The producer puts data into the buffer and the consumer takes data out and uses it. It is important that while a producer is writing to the buffer, no other party can access it, a.k.a. mutual exclusion. Besides, a full buffer cannot be written to and an empty buffer cannot be read from.
The semaphore introduced above cannot detect whether the buffer is full or empty, so we need additional semaphores, called synchronisation semaphores.
In this scenario, the binary semaphore `s` guards access to the shared buffer, the global variable `n` counts the items currently in the buffer, and the binary semaphore `delay` blocks a consumer that tries to read from an empty buffer.
First, consider the case of an infinite buffer.
The first version is sketched below.
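This sketch follows the standard textbook form for an infinite buffer with binary semaphores; the helper routines `produce()`, `append()`, `take()`, and `consume()` are assumed rather than defined here.

```c
int n = 0;                               /* number of items in the buffer */
binary_semaphore s = 1, delay = 0;

void producer() {
    while (true) {
        produce();
        semWaitB(s);
        append();
        n++;
        if (n == 1) semSignalB(delay);   /* buffer was empty: wake the consumer */
        semSignalB(s);
    }
}

void consumer() {
    semWaitB(delay);                     /* wait until the buffer is nonempty */
    while (true) {
        semWaitB(s);
        take();
        n--;
        semSignalB(s);
        consume();
        if (n == 0) semWaitB(delay);     /* the troublesome check discussed below */
    }
}
```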
In this version, the `if (n == 0)` check in the consumer causes a problem if the consumer is interrupted just before it: the producer can change `n` in the meantime, so the test no longer reflects the state the consumer actually observed inside the critical section. The fix is to copy `n` into a local variable inside the critical section and test the copy instead, as sketched below.
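In this sketch only the consumer changes (the producer stays as in the previous version); the local variable `m` snapshots `n` while the mutual-exclusion semaphore is held, following the standard textbook fix.

```c
void consumer() {
    int m;                               /* local copy of n */
    semWaitB(delay);
    while (true) {
        semWaitB(s);
        take();
        n--;
        m = n;                           /* snapshot n while holding s */
        semSignalB(s);
        consume();
        if (m == 0) semWaitB(delay);     /* test the snapshot, not the live n */
    }
}
```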
Besides, we can give up binary semaphores and use general (counting) semaphores `s` and `n` instead, as sketched below.
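In this sketch, `s` still provides mutual exclusion, while the counting semaphore `n` tracks the number of items directly, so neither the integer counter nor the `delay` semaphore is needed (again following the standard textbook form).

```c
semaphore s = 1;         /* mutual exclusion on the buffer */
semaphore n = 0;         /* number of items in the buffer */

void producer() {
    while (true) {
        produce();
        semWait(s);
        append();
        semSignal(s);
        semSignal(n);    /* one more item available */
    }
}

void consumer() {
    while (true) {
        semWait(n);      /* block if the buffer is empty */
        semWait(s);
        take();
        semSignal(s);
        consume();
    }
}
```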
Then, consider the realistic scenario where the buffer is finite: just introduce a new semaphore that counts the remaining capacity, as sketched below.
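In this sketch the new semaphore `e` is initialised to the buffer size and counts the empty slots, so the producer blocks when the buffer is full; the name `e` and the constant `sizeofbuffer` are conventional choices, not taken from the note.

```c
const int sizeofbuffer = /* buffer size */;
semaphore s = 1;                /* mutual exclusion on the buffer */
semaphore n = 0;                /* items in the buffer */
semaphore e = sizeofbuffer;     /* empty slots remaining */

void producer() {
    while (true) {
        produce();
        semWait(e);             /* block if the buffer is full */
        semWait(s);
        append();
        semSignal(s);
        semSignal(n);
    }
}

void consumer() {
    while (true) {
        semWait(n);             /* block if the buffer is empty */
        semWait(s);
        take();
        semSignal(s);
        semSignal(e);           /* one more empty slot */
        consume();
    }
}
```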