
Types of CPU schedulers


Several types of schedulers may be used in an operating system. There are three fundamental types of schedulers:

  • Long term scheduler
  • Short term scheduler
  • Medium term scheduler

Long term scheduler

The long term scheduler is also known as the job scheduler. It selects jobs or user programs from the job pool on the disk and loads them into main memory. Once a job or user program is loaded into memory, it becomes a process and is added to the ready queue. The long term scheduler controls the number of processes in memory; in other words, it controls the degree of multiprogramming.

Short term scheduler

The short term scheduler is also known as the dispatcher. It retrieves a process from the ready queue and allocates the CPU to it; the process state is changed from ready to running. If an interrupt or time-out occurs, the scheduler places the running process back into the ready queue and marks it as ready. Typically the dispatcher gives a process control of the CPU for a fixed amount of time. A process may execute for only a few milliseconds, then the next process is selected from the ready queue for execution, and so on. The time the dispatcher takes to stop the execution of one process and start the execution of another is known as the dispatch latency.
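The dispatcher's behaviour with a fixed time slice can be sketched in a few lines of Python. This is only an illustrative simulation, not real operating system code; the process names, burst times and quantum below are invented for the example.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin dispatching.

    bursts: dict mapping process name -> remaining CPU burst (ms).
    quantum: fixed time slice given to each process (ms).
    Returns the order in which processes are dispatched.
    """
    ready = deque(bursts.items())          # the ready queue
    order = []
    while ready:
        name, remaining = ready.popleft()  # dispatcher picks the head
        order.append(name)
        if remaining > quantum:            # time-out: back into the ready queue
            ready.append((name, remaining - quantum))
        # otherwise the process finishes and leaves the system
    return order

# Three hypothetical processes with 3, 5 and 2 ms bursts and a 2 ms quantum
print(round_robin({"P1": 3, "P2": 5, "P3": 2}, 2))
# → ['P1', 'P2', 'P3', 'P1', 'P2', 'P2']
```

Each process runs for at most one quantum, is timed out, and rejoins the tail of the ready queue until its CPU burst is used up.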

Medium term scheduler

Medium term scheduling is also known as swapping. In this scheme a swapper is used to exchange processes between main memory and secondary storage (disk). Multiprogramming systems typically use the swapping technique. Sometimes a process in the job queue on disk is too large to fit in memory; the swapper then removes another process from the ready queue to make room for it. Later the removed process is reloaded into memory, added to the ready queue, and its execution continues where it left off. A process that is waiting for the completion of a short I/O operation is not swapped out.

What is CPU Scheduling?

Processes entering the computer system are put into the job queue. Similarly, processes that are in main memory and ready to execute are kept in the ready queue. In multiprogramming and timesharing systems, the operating system must decide which process to run next and for how long. The method of switching the CPU among multiple processes is called CPU scheduling or process scheduling. The part of the operating system that schedules processes is called the CPU scheduler.
In a single-processor system only one process can run at a time; the other processes must wait. In a multiprocessing system several processes may be running at once; in either case the scheduler decides which processes execute on the CPU.
In a multiprogramming system some process should be running at all times to keep the CPU busy. Similarly, a timesharing system must switch from one process to another so frequently that users can interact with their programs while they run. The CPU scheduler therefore plays the main role in multiprogramming and timesharing systems.
CPU scheduling increases CPU utilization. Whenever the CPU becomes idle, the CPU scheduler selects a process from the ready queue and dispatches it to the CPU for execution.
Preemptive and non-preemptive scheduling:
Earlier computer systems used non-preemptive scheduling: a process retained control of the CPU until it blocked or terminated. This approach was used in batch systems. Modern computer systems use preemptive scheduling: the scheduler may preempt a process before it blocks or terminates in order to allocate the CPU to another process.
For example, non-preemptive scheduling was used by the Microsoft Windows 3.x products, while preemptive scheduling was introduced in Windows 95. Today almost all operating systems use preemptive scheduling; non-preemptive scheduling is used only on certain hardware platforms.
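To make the contrast concrete, here is a minimal Python sketch of non-preemptive first-come-first-served scheduling, where each process keeps the CPU until it finishes. The burst times are invented for illustration.

```python
def fcfs_waiting_times(bursts):
    """Non-preemptive FCFS: each process runs to completion in arrival order.

    bursts: list of (name, burst_ms) pairs in arrival order (all arrive at t=0).
    Returns a dict of name -> waiting time (ms).
    """
    waiting, clock = {}, 0
    for name, burst in bursts:
        waiting[name] = clock   # waits for everything dispatched before it
        clock += burst          # keeps the CPU until done: no preemption
    return waiting

print(fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)]))
# → {'P1': 0, 'P2': 24, 'P3': 27}
```

Note how one long process makes every later process wait; under preemptive scheduling the short processes would get the CPU much sooner.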


What is a Thread?

A thread is a sequence of instructions within a process. A thread represents a sequential flow of execution of a task of the process. A process may consist of one or many threads; a thread is therefore sometimes called a lightweight process. Each thread has its own program counter, CPU registers and stack. Threads are typically used to divide a task into sub-tasks. For example, the work of a web browser can be divided into multiple threads, such as:
  • Task to download the images
  • Task to download the text
  • Task to display images and text etc
You may have observed that downloading images takes more time than downloading text. In this case one thread remains busy downloading the images while another thread downloads and displays the text. It must be noted that each process occupies an independent address space in memory, but all threads of a process share the same address space. On the basis of process execution, operating systems can be classified as:
1.      Single-Process single-thread
2.      Single-Process multiple-threads
3.      Multi-Process single-thread
4.      Multi-Process multi-threads
The simplest arrangement is single-process single-thread, where only one task can be performed at a time. MS DOS is an example of a single-process single-thread operating system. In a word processor on such a system, for example, only one activity can run at a time: the user cannot simultaneously type text and run the spell checker within the same process.
Today, modern systems allow a process to have multiple threads. A Java run-time environment is an example of a system with one process and multiple threads.
A multiprogramming system allows the CPU to be shared by multiple processes; for example, traditional UNIX supports multiple user processes, but each process has only one thread.
Some operating systems, such as Windows, Linux and OS/2, allow multiple processes with multiple threads per process.
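The browser example above can be sketched with Python's threading module. The "downloads" are simulated by simply recording the sub-task name; the point is that both threads write into the same list, because all threads of a process share one address space.

```python
import threading

results = []                 # shared: all threads of the process see the same memory
lock = threading.Lock()

def download(part):
    # Stand-in for real work (e.g. fetching images or text in a browser).
    with lock:               # serialize access to the shared list
        results.append(part)

# One thread per sub-task, all inside the same process / address space
threads = [threading.Thread(target=download, args=(p,))
           for p in ("images", "text")]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait for both sub-tasks to finish

print(sorted(results))       # → ['images', 'text']
```

A separate process would not see `results` at all; only threads of the same process can share it directly.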
User level and kernel level threads
Support for threads may be provided either at the user level or at the kernel level. Threads may therefore be divided into two categories:
  1. User-level Threads
  2. Kernel-level Threads
User-level Threads
With user-level threads, all thread-management work is done by the application without kernel support. Any application can be programmed to be multithreaded by using a thread library: a collection of routines for user-level thread management. For example, a thread library contains routines for:
  • Creating and destroying threads
  • Passing data between threads
  • Scheduling threads
  • Saving and restoring states of threads etc.
Once an application is loaded into user space, all of its threads are created and managed within the user space of the process. By default an application begins with a single thread and starts running in that thread; the kernel manages the application as a whole, not its individual threads. At any time the application may create many threads within the same process, and a separate data structure is created for each thread. A scheduling algorithm in the library is used to pass control from one thread to another within the process; when control is switched, the state of the currently executing thread is saved and then restored when control returns to it.
Kernel-level Threads
With kernel-level threads, all thread-management work is done by the kernel of the operating system; there is no thread-management code in the user space of the process. In this approach, when control is switched from one thread to another within the same process, control also has to switch from user mode to kernel mode, which affects the performance of the system. Scheduling by the kernel is done on a per-thread basis. Most modern operating systems, such as Windows XP, Linux and Solaris, support kernel threads.
Multithreading Models
There must be a relationship between user threads and kernel threads. This relationship can be established using one of the following models:
  1. Many to One model
  2. One to One model
  3. Many to Many model
Many to One model
The many to one model maps many user-level threads to one kernel thread. A user space is allocated to a single process, and multiple threads may be created and executed within that process; thread management is done by the thread library in user space. In this model the entire process blocks if any one thread makes a blocking system call. Because only one thread can access the kernel at a time, it is also impossible to run multiple threads in parallel on a multiprocessor system.
One to One model
The one to one model maps each user-level thread to a kernel thread. This model provides more concurrency than the many to one model, because it allows another thread to run when a thread makes a blocking system call. It also allows multiple threads to run in parallel on a multiprocessor system.
Many to Many model
The many to many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
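As a loose user-space analogy to the many to many model, Python's ThreadPoolExecutor multiplexes many tasks onto a smaller, fixed pool of worker threads. A thread pool is a library facility, not a kernel thread mapping, and the pool size of 3 and the toy tasks below are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Eight tasks are multiplexed onto a fixed pool of three worker threads;
# no task gets a dedicated thread of its own.
with ThreadPoolExecutor(max_workers=3) as pool:
    squares = list(pool.map(lambda n: n * n, range(8)))

print(squares)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The pool keeps the number of underlying threads small and constant while the number of tasks submitted to it can be arbitrarily large.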

 

Inter-Process Communication

A mechanism through which data is shared among the processes in a system is referred to as inter-process communication (IPC). Multiple processes communicate with each other to share data and resources, and a set of functions is required for this communication. In multiprogramming systems, some common storage is used where processes can share data; the shared storage may be main memory or a shared file. Files are the most commonly used mechanism for data sharing between processes: one process can write to a file while another process reads data from the same file.
Various techniques can be used to implement inter-process communication. There are two fundamental models that are commonly used:
1. Shared Memory Model
2. Message Passing Model
·        Shared Memory Model
In the shared memory model, the co-operating processes share a region of memory for exchanging information. Some operating systems use a supervisor call to create a shared memory space. Similarly, some operating systems use the file system to create a RAM disk, a virtual disk created in RAM: shared files placed on the RAM disk are actually stored in memory, so processes can share information by writing and reading data at the shared memory location or on the RAM disk.
·        Message Passing Model
In this model, data is shared by sending and receiving messages between co-operating processes. The message passing mechanism is easier to implement than shared memory, but it is useful mainly for exchanging smaller amounts of data. Data is exchanged between processes through the kernel of the operating system using system calls. Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different machines connected by a network. For example, a chat program used on the Internet could be designed so that participants communicate with each other by exchanging messages. It must be noted that the message passing technique is slower than the shared memory technique.
A message typically contains the following information:
  • Header that identifies the sending and receiving processes
  • Block of data
  • Pointer to the block of data
  • Some control information about the process
Typically, inter-process communication is based on ports associated with processes. A port represents a message queue; ports are controlled and managed by the kernel, and processes communicate with each other through the kernel.
In the message passing mechanism, two operations are performed: sending a message and receiving a message. The functions send() and receive() are used to implement these operations. Suppose processes P1 and P2 want to communicate with each other. A communication link must be created between them to send and receive messages, and this link can be created in different ways. The most important methods are:
  1. Direct model
  2. Indirect model
  3. Buffering   
Process Control Block

The data structure that stores information about a process is called a Process Control Block (PCB); it is also referred to as a Task Control Block. In a computer system, each process is represented by a PCB. It is the central store of information that allows the operating system to locate all the key information about a process. When the CPU switches from one process to another, the operating system uses the PCB to save the state of the old process and uses this information when control returns to it. When a process is terminated, its PCB is released from memory. The information stored in the Process Control Block is given below.
Process State
It indicates the state of the process, such as new, ready, running, blocked or terminated.
Process ID
Each process is assigned a unique identification number when it enters the system.
Program Counter
It indicates the address of the next instruction to be executed.
CPU Registers
It holds the contents of the CPU registers. This information must be saved when an interrupt occurs, so that the process can be resumed correctly afterward. The registers hold intermediate results of calculations or addresses pointing to the memory locations of desired data.
CPU Scheduling Information
It indicates the information needed for CPU scheduling such as process priority, pointers to scheduling queues and other scheduling parameters.
Memory Management Information
It indicates the information needed for memory management such as value of the base and limit registers, page tables or segment tables, amount of memory units allocated to the process etc.
Accounting Information
It indicates accounting information such as the process number, the amount of CPU time used by the process, time limits, etc.
I/O Status Information
It indicates the information about the I/O devices allocated to the process, the list of open files, the access rights of the opened files, and so on.
Link to Parent Process
A new process can be created from an existing process; the existing process is called the parent of the newly created process. The address of the parent process's PCB is stored here.
Link to Child Process
The addresses of the PCBs of the child processes in main memory are stored here.
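The fields above can be collected into a toy data structure. This is only an illustrative sketch in Python; real PCBs are kernel structures (for example, `task_struct` in Linux), and the field names chosen here are assumptions for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProcessControlBlock:
    """A toy PCB holding the kinds of fields described above."""
    pid: int                                       # process ID
    state: str = "new"                             # new / ready / running / blocked / terminated
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    priority: int = 0                              # CPU scheduling information
    open_files: list = field(default_factory=list) # I/O status information
    parent_pid: Optional[int] = None               # link to parent process
    children: list = field(default_factory=list)   # links to child processes

pcb = ProcessControlBlock(pid=1, state="ready", priority=5)
print(pcb.pid, pcb.state)  # → 1 ready
```

On a context switch the operating system would fill in `registers` and `program_counter` before handing the CPU to another process, then restore them later.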
What is a Process?

A process is simply an executing program; in other words, a program in execution is called a process. The term process was first used by the designers of the Multics system in the 1960s. Since that time a process has been considered an activity performing a task. A process also includes:
  • Current activity, which is represented by current values of program counter, CPU registers and variables.
  • Process stack, which contains temporary data such as parameters of procedure, return address and local variables.
  • Heap, which is memory that is dynamically allocated during program execution time.
  • Operating system resources allocated to it.
It must be noted that a program is not a process by itself: a program is a static entity stored in a file on disk, while a process is an active entity with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when it is loaded into memory for execution.
There may be multiple processes for the same program. The processes are controlled and scheduled by the operating system. Conceptually, each process has its own virtual CPU, but in reality a single CPU switches from process to process in a cycle. This switching from process to process is called multiprogramming.
Process States:
In a computer system, a process changes state during execution, and various events can cause a process to change state. A process exists in one of five states:
  • New: the process has just been created.
  • Ready: the process is not currently allocated to the processor, but it is ready to run.
  • Running: the process is executing on the processor; if a system has one CPU, only one process can be in the running state at a time.
  • Blocked: the process is waiting for some event to occur before it can continue executing.
  • Terminated: the process has finished its execution on the CPU, but a record of the process is still maintained by the operating system.
Transitions between process states:
Five transitions are possible among these states, as shown below.
  • Running → Blocked: occurs when a running process executes a system call, for example to perform an I/O operation or to communicate with another process; execution of the process must be suspended until the call completes.
  • Running → Ready: occurs when the scheduler decides that the running process has used up its time slice (the time allocated to the process); it moves the process from the running state to the ready state and runs another process.
  • Ready → Running: occurs when the CPU scheduler selects a process from the ready queue and dispatches it to the CPU for execution.
  • Blocked → Ready: occurs when the event a blocked process was waiting for takes place and the process is moved back to the ready state. If no other process is running at that instant, the Ready → Running transition is triggered immediately.
  • Running → Terminated: occurs when a running process terminates, either by performing some illegal action or by completing the execution of its code.
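The five transitions above can be encoded as a small table, with a helper that rejects any move not in the list. The function name move() and the string state names are just illustrative choices.

```python
# Legal transitions between the five process states described above
TRANSITIONS = {
    ("new", "ready"),
    ("ready", "running"),
    ("running", "ready"),      # time slice expired
    ("running", "blocked"),    # waiting for I/O or another event
    ("blocked", "ready"),      # the awaited event occurred
    ("running", "terminated"),
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise ValueError."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk a process through part of its life cycle
state = move("new", "ready")
state = move(state, "running")
state = move(state, "blocked")
print(state)  # → blocked
```

Note that there is no path from blocked straight to running, and no path out of terminated: a blocked process must pass through the ready queue again before it can be dispatched.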