
Semaphores

A semaphore is a variable type that provides a clean abstraction for controlling access to a resource shared by many processes in a multiprogramming environment. It can be understood as a simple variable that keeps count of how many units of a particular resource are available. Semaphores cannot be accessed directly; the kernel itself protects them. The Operating System Concepts textbook by Silberschatz and Galvin explains semaphores beautifully.

Semaphores are of two types:

a. Binary Semaphores: The only possible values are 0 and 1. They provide the same functionality as mutexes.

b. Counting Semaphores: The value can be any integer. They are used to allow controlled access to a pool of resources. If the count is greater than zero, resources are available for processes to use; if the count is zero or below, no resources are free.

More specifically, a semaphore can be considered a data type that holds an integer value as well as a process list. The integer keeps track of the number of resources available, while the list holds the processes that are still waiting for a resource. If a process cannot be allocated the resource it needs, it is placed in the waiting queue maintained by the semaphore that controls access to that resource.

The definition for a semaphore data type is as follows:

struct sem {
    int count;      /* number of available resource units */
    Queue queue;    /* processes waiting on this semaphore */
};

The two operations performed on a semaphore are:
1. wait() (also known as operation P, from the Dutch proberen, "to test")
2. signal() (also known as operation V, from verhogen, "to increment")

The value of a semaphore cannot be changed in any way other than by calling wait() and signal().

The code for the P operation is as follows:

void wait(sem *s) {
    s->count--;
    if (s->count < 0) {
        /* place the process in the waiting list (s->queue) */
        /* block this process */
    }
}

The code for the V operation is as follows:

void signal(sem *s) {
    s->count++;
    if (s->count <= 0) {
        /* remove a process P from s->queue */
        /* place process P in the ready queue */
    }
}

Functioning of a Semaphore:

The P operation sleeps until the resource guarded by the semaphore is available. The V operation increments the number of available resources once a process has finished using one. Suppose the value of a semaphore is initially 1. When a process requests the resource, wait() is performed on the semaphore and its value is decreased by one, so the current value becomes zero. The next time another process requests the resource, it is not available, and that process is put into the list of processes maintained by the semaphore.
Now suppose the first process has finished using the resource and is ready to release it. It calls signal() on the semaphore, which increments the count, and the resource becomes available again for use by other processes. At this point, one of the processes that were waiting for the resource is removed from the list maintained by the semaphore and placed in the ready queue.

Point to be Noted:
Under the classical definition of a semaphore, the count value never becomes negative, because busy waiting was the technique originally used to implement semaphores. After it was observed that busy waiting wastes CPU cycles that other processes could use productively, the definition was extended to include a list of processes waiting for the resource; in that variant the count can go negative, and its magnitude equals the number of waiting processes.


Classical problems of Process Synchronisation in Operating Systems

Synchronization between processes is of great importance as far as data consistency is concerned.

There are a few classical synchronization problems that have been commonly discussed over the years. They closely resemble the problems that real Operating Systems face in making sure that every process gets the resources it requires. Some of these problems are:

  • Producer Consumer Problem
  • Reader Writer Problem
    • giving preference to the readers
    • giving preference to the writers
  • Dining Philosophers’ Problem
  • Sleeping Barber Problem

Various solutions have been discussed for each problem, and each solution can be worked out using different approaches. Some of the approaches are:

  • Using a counter
  • Using Semaphores
  • Using Monitors

Each approach has got its own pros and cons.

The next post will be about the Producer Consumer Problem and how to solve it using a counter and semaphores. Monitors will be explained later.

Some common terms related to Process Synchronisation

Process Synchronization is very important to an Operating System where data consistency is concerned. Some of the common terms related to Process Synchronization are as follows:

Critical Section : This is a part of the code that accesses shared resources, and it is required that the critical section not be executed by more than one process at a time.

Atomic Operation : There may be some set of instructions that have to be executed together. They are indivisible: no one can view the intermediate state of the operation, nor can anyone interrupt it.

Mutual Exclusion : When one process is executing a critical section that modifies global data, no other process should enter a critical section that changes the same shared resource.

Deadlock : A deadlock is a condition where one process is waiting for a resource held by another process, which is itself waiting for a resource held by the first. Unlike many other problems faced by Operating Systems, this condition is permanent: the processes involved cannot make progress without outside intervention.

Race Condition : This is a condition in which multiple threads or processes access and modify data in the critical region concurrently, and the final value of the shared resource depends on the relative order and speed of their execution.

Starvation : A starved process is one that is overlooked by the scheduler; although it is ready to execute, it is not given the resources it requires, for various reasons.

Some of the problems faced during concurrency:

  • Sharing global resources : When two processes access the same global variable, the final value depends on the order in which they access the critical region.
  • Request for resources : When a process requests a particular resource and then enters a wait state while holding it, other processes cannot access that resource. This may even lead to deadlock.