
What are the fork and exec system calls?

In UNIX, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork system call. The new process consists of a copy of the address space of the original process.

This mechanism allows the parent process to communicate easily with its child process. Both processes continue execution at the instruction after the fork system call, with one difference:

The return code for the fork system call is zero for the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent.

Typically, the exec system call is used after a fork system call by one of the two processes to replace the process's memory space with a new program.
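To make this concrete, here is a minimal C sketch of the fork-then-exec pattern described above (the program run by exec, ls here, is just an illustrative choice):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a copy of this process */

        if (pid < 0) {                   /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {           /* child: fork returned zero */
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace memory image */
            perror("execlp");            /* reached only if exec failed */
            exit(1);
        } else {                         /* parent: fork returned the child's pid */
            wait(NULL);                  /* wait for the child to finish */
            printf("child %d completed\n", (int)pid);
        }
        return 0;
    }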

In a multithreaded program, the semantics of the fork and exec system calls change. If one thread in a program calls fork, does the new process duplicate all threads, or is the new process single-threaded?

Some UNIX systems have chosen to have two versions of fork: one that duplicates all threads and another that duplicates only the thread that invoked the fork system call.

If a thread invokes the exec system call, the program specified in the parameter to exec will replace the entire process, including all threads and LWPs.

Which of the two versions of fork to use depends upon the application. If exec is called immediately after forking, then duplicating all threads is unnecessary, as the program specified in the parameters to exec will replace the process.

In this instance, duplicating only the calling thread is appropriate. If, however, the separate process does not call exec after forking, the separate process should duplicate all threads.
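As a small illustration of the second variant (duplicating only the calling thread, which is also what POSIX fork does), the hedged sketch below starts a worker thread and then forks; the child contains only the thread that called fork, and it immediately replaces itself with exec anyway:

    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/wait.h>

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;)
            pause();                     /* idle thread; exists only in the parent */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        pid_t pid = fork();              /* child gets only the calling thread */
        if (pid == 0) {
            /* the worker thread was not duplicated here, and exec now
               replaces the whole (single-threaded) child process */
            execlp("echo", "echo", "hello from the new program", (char *)NULL);
            _exit(1);                    /* reached only if exec failed */
        }
        wait(NULL);
        return 0;
    }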



Multithreaded Process: Benefits & Models

Hello Friends, In this blog post I am going to explain the benefits of multithreaded programming and the types of multithreading models.

Benefits: The benefits of multithreaded programming can be broken into the following categories:

Resource sharing: By default, threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity all within the same address space (a short sketch of this appears after this list).

Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.

Economy: It is costly to allocate memory and resources for process creation. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.

Utilization of multiprocessor architecture: The benefits of multithreading can be increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor.
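As a minimal pthreads sketch of the resource-sharing benefit, the two threads below run in the same address space and update the same global counter (the mutex only makes the shared update safe):

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;             /* shared by all threads of the process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                   /* same variable, same address space */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* prints 200000 */
        return 0;
    }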

Types of models:

Many systems provide support for both user and kernel threads, resulting in different multithreading models. The following types of multithreading implementations exist:

Many-to-one model:

The many-to-one model maps many user-level threads to one kernel thread, as shown in fig 1. Thread management is done in user space, so it is efficient, but the entire process will block if a thread makes a blocking system call. Also, multiple threads are unable to run in parallel on a multiprocessor, because only one thread can access the kernel at a time. For example, Green threads, a thread library available for Solaris 2, uses this model.

Fig 1: The many-to-one multithreading model

One-to-one model:

The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. It also allows multiple threads to run in parallel on multiprocessors. The drawback to this model is that creating a user thread requires creating the corresponding kernel thread.

Because the overhead of creating kernel threads can burden the performance of an application, implementations of this model restrict the number of threads supported by the system. For example, Windows NT, Windows 2000, and OS/2 implement this model.

Many-to-many model:

The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. Whereas the many-to-one model allows the developer to create as many user threads as she wishes, true concurrency is not gained there because the kernel can schedule only one thread at a time.

Fig: The many-to-many multithreading model

The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application. The many-to-many model suffers from neither of these drawbacks.

Developers can create as many user threads as required, and the corresponding kernel threads can run in parallel on a multiprocessor. For example, Solaris 2, IRIX, and Tru64 UNIX implement this model.



Process: The Process States In the Operating System

Hello Friends, In this blog post I am going to let you know about the process and the different types of process states used in the operating system.

A process is a program in execution. It consists of an executable program, the program's data and stack, its program counter, stack pointer, and other registers, and all the other information needed to run the program.

A program by itself is not a process. A program is a passive entity, such as the content of a file stored on disk whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

States of the process:

A process goes through a series of discrete process states. Various events can cause a process to change states. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:

New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur, such as an I/O completion or the reception of a signal.
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.

A state diagram corresponding to these states is shown in fig 1 below.

Fig 1: Process state diagram

Process control block:

Each process is represented in the operating system by a process control block (PCB) or a task control block. A PCB is shown in fig 2.

It is a data structure containing certain important information about the process including the following.

Fig 2: Process control block (PCB)

Process state: The state may be new, ready, running, waiting, halted and so on.

Program counter: The counter specifies the address of the next instruction to be executed for this process.

CPU registers: These include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, so that the process can be continued correctly afterward.

CPU-scheduling information: This includes the process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory Management Information: It includes information such as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
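Putting these fields together, a PCB can be pictured as a C structure roughly like the one below; the field names and sizes are illustrative, not taken from any particular kernel:

    /* Illustrative PCB layout; real kernels differ in detail. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process identifier             */
        enum proc_state state;            /* new, ready, running, ...       */
        unsigned long   program_counter;  /* next instruction to execute    */
        unsigned long   registers[16];    /* saved CPU registers            */
        int             priority;         /* CPU-scheduling information     */
        struct pcb     *next_in_queue;    /* link into a scheduling queue   */
        unsigned long   base, limit;      /* memory-management information  */
        /* ... accounting and I/O status information ... */
    };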

A process may also be suspended by another process and later resumed, which adds the following transitions:

Suspend(processname): ready -> suspended ready
A ready process may be suspended by another process; it then makes this transition.

Resume(processname): suspended ready -> ready
A suspended ready process may be made ready again by another process.

Suspend(processname): blocked -> suspended blocked
A blocked process may be suspended by another process.

Resume(processname): suspended blocked -> blocked
A suspended blocked process may be resumed by another process.



What is a thread in an OS & how does it differ from the traditional process?

Hello Friends, In this blog post I am going to discuss threads in the operating system. We will also compare threads with the traditional process, see how they differ from each other, and understand threads with a suitable example.

In a traditional operating system, each process has an address space and a single thread of control. In fact, that is almost the definition of a process.

Nevertheless, there is frequently a situation in which it is desirable to have multiple threads of control in the same address space running in quasi-parallel as though they were separate processes.

A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating system resources, such as open files and signals.

A heavyweight (or traditional) process has a single thread of control. If the process has multiple threads of control, it can do more than one task at a time. The difference between a traditional single-threaded process and a multithreaded process is shown in fig 1.

Many software packages that run on modern desktop PCs are multithreaded. An application typically is implemented as a separate process with several threads of control. For example, a web browser might have one thread to display images or text while another thread retrieves data from the network.

A word processor may have a thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

Fig 1: A single-threaded process and a multithreaded process

Different threads in a process are not quite as independent as different processes. All threads have exactly the same address space, which means they also share the same global variables. Since every thread can access every memory address within the process address space, one thread can read, write, or even completely wipe out another thread’s stack.
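The sketch below makes this sharing visible: main passes a pointer to a buffer on its own stack, and a second thread writes into it directly, something that would be impossible between two separate processes:

    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>

    static void *scribbler(void *arg)
    {
        /* arg points into main's stack; all threads share one address space */
        strcpy((char *)arg, "written by another thread");
        return NULL;
    }

    int main(void)
    {
        char buf[64] = "original";       /* lives on main's stack */
        pthread_t tid;
        pthread_create(&tid, NULL, scribbler, buf);
        pthread_join(tid, NULL);
        printf("%s\n", buf);             /* prints the other thread's text */
        return 0;
    }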

There is no protection between the threads because (1) it is impossible and (2) it should not be necessary. Unlike different processes, which may be from different users and which may be hostile to one another, a process is always owned by a single user, who has presumably created multiple threads so that they can cooperate, not fight.

In addition to sharing an address space, all the threads share the same set of open files, child processes, alarms, signals, and so on, as shown in fig 2.

The items in the first column are process properties, not thread properties.

Fig 2: Per-process items (first column) and per-thread items (second column)

Like a traditional process, a thread can be in any one of several states: running, blocked, ready, or terminated. A thread that currently has the CPU is called active; a blocked thread is waiting for some event to unblock it.



Deadlock Avoidance In Operating System

Deadlock avoidance allows three of the necessary conditions (mutual exclusion, hold and wait, no preemption) but makes judicious choices to ensure that the deadlock point is never reached. This means that avoidance allows more concurrency than prevention.

The basic idea of deadlock avoidance is to grant only those requests for available resources that cannot possibly result in a state of deadlock. This strategy is implemented by having the resource allocator examine the effects of granting a particular request.

If granting the resource cannot lead to deadlock, the resource is granted to the requester; otherwise, the requesting process is suspended until its pending request can be safely granted.

In order to evaluate the safety of individual states, deadlock avoidance requires all processes to state their maximum requirement of each resource type prior to execution.

A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.

Safe and unsafe state: A state is safe if the system can allocate resources to each process in some order and avoid a deadlock. In other words, a system is in a safe state only if there exists a safe sequence.

A sequence of processes <p1, p2, …, pn> is a safe sequence for the current allocation state if, for each pi, the resources that pi can still request can be satisfied by the currently available resources plus the resources held by all pj with j < i. If no such sequence exists, the system state is said to be unsafe.
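For example (an illustrative allocation, not one discussed above): suppose a system has 12 tape drives and three processes, p1 (maximum need 10, currently holding 5), p2 (maximum 4, holding 2), and p3 (maximum 9, holding 2), leaving 3 drives free. The sequence <p2, p1, p3> is safe: p2 can finish with at most 2 of the 3 free drives and then return 4, p1 can then obtain its remaining 5 and finish, and finally p3 can finish. If p3 were instead granted one more drive (holding 3, with only 2 free), no such sequence would exist and the state would be unsafe.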

A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe state. However, not all unsafe states are deadlocks.



Methods For Deadlock Prevention

Deadlock prevention is an approach used by designers in dealing with the problem of deadlock. The basic philosophy of deadlock prevention is to deny at least one of the four necessary conditions for deadlock.

1. Mutual exclusion:

The mutual exclusion condition must hold for nonsharable resources, such as a printer that cannot be simultaneously shared by several processes. On the other hand, sharable resources do not require mutually exclusive access and thus cannot be involved in a deadlock. For example, read-only files can be accessed simultaneously by several processes.

To attack mutual exclusion, avoid assigning a resource when it is not absolutely necessary, and try to make sure that as few processes as possible may actually claim the resource.

2. Hold and wait:

The hold and wait condition can be eliminated by forcing a process to release all resources held by it whenever it requests a resource that is not available. There are two protocols to implement this strategy:

  • The first protocol requires a process to request all needed resources before it begins execution. We can implement this provision by requiring that system calls requesting resources for a process precede all other system calls.
  • The second protocol allows a process to request resources only when the process has none. A process requests resources and uses them; before it can request any additional resources, however, it must release all the resources that are currently allocated to it.

These protocols have two main disadvantages:

  • Resource utilization may be low, since many of the resources may be allocated but unused for a long period.
  • Starvation is possible.

3. No preemption:

This condition can be prevented using the following protocols. If a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resources.

Alternatively, if a process requests a resource that is currently held by another process, the operating system may preempt the second process and require it to release its resources and allocate them to the requesting process.

This latter scheme prevents deadlock only if no two processes have the same priority.

This protocol is practical only when applied to resources whose state can be saved and restored later, such as CPU registers and memory space.

4. Circular wait:

The circular wait condition can be prevented by defining a linear ordering of resource types. If a process has been allocated resources of type R, then it may subsequently request only resources of types following R in the ordering.

A disadvantage of this approach is that resources must be acquired in the prescribed order as opposed to being requested when actually needed. This may cause some resources to be acquired in advance of their use, thus lowering the degree of concurrency by making unused resources unavailable for allocation to other processes.
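A hedged C sketch of this ordering rule for two pthread mutexes: every thread acquires the locks in one fixed (address-based) order, so a circular wait can never form:

    #include <stdint.h>
    #include <pthread.h>

    /* Resource ordering: always lock the mutex that comes first in the
       ordering (here, the one with the lower address). Since every
       thread follows the same order, no cycle of waiters can arise. */
    static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
    {
        if ((uintptr_t)a > (uintptr_t)b) {   /* sort by position in the ordering */
            pthread_mutex_t *t = a; a = b; b = t;
        }
        pthread_mutex_lock(a);               /* lower-ordered resource first */
        pthread_mutex_lock(b);               /* then the higher-ordered one  */
    }

    static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
    {
        pthread_mutex_unlock(a);
        pthread_mutex_unlock(b);
    }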



Methods For Deadlock Handling

There are three methods for deadlock handling:

  1. Ensuring that a deadlock state will never occur – to ensure this, two techniques are used, which are explained below:

Deadlock prevention:

In this scheme, deadlock is prevented by ensuring that at least one of the four conditions required for deadlock (mutual exclusion, hold and wait, no preemption, circular wait) can never hold.

Deadlock avoidance:

This scheme requires additional information about how resources will be requested by a process in its lifetime. With this information, the system decides whether a resource should be allocated to the process or whether the process should wait. Here, both the currently allocated resources and the resources that will be requested in the future are taken into account.

2. Allowing a deadlock to occur, detecting it, and recovering:

In this scheme we allow the system to enter a deadlock state, detect it, and recover. In this environment the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, and an algorithm to recover from the deadlock.

3. Some systems do not ensure that a deadlock will never occur and also do not provide a mechanism for deadlock detection and recovery. We may then arrive at a situation where the system is in a deadlock state yet has no way of recognizing what has happened.

At first glance this may not seem a viable approach to the deadlock problem. However, in many systems deadlocks occur infrequently, say once per year, so this method is cheaper than the costly deadlock prevention, deadlock avoidance, or deadlock detection and recovery methods that must be used constantly. Also, in some circumstances the system is in a frozen state but not in a deadlock state.



Dijkstra: Banker's Algorithm For Deadlock Avoidance

Even if the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. Perhaps the most famous deadlock avoidance algorithm is Dijkstra's banker's algorithm, called by this interesting name because it involves a banker who makes loans and receives payments from a given source of capital.

Dijkstra's algorithm is used for deadlock avoidance. The assumptions and statement of the banker's algorithm are given below.

The assumptions of (Dijkstra's) banker's algorithm are as follows:

  • Every process states in advance the maximum number of resources of each type it may require.
  • No process asks for more resources than the system has.
  • At termination, every process will release all its resources.

Statement: Resources for a process are allocated in such a way that the transition is always from one safe state to another safe state.

According to Dijkstra's (banker's) algorithm, when a new process enters the system, it must declare the maximum number of instances of each resource type that it may need.

This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process must wait until some other processes release enough resources.
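The heart of the algorithm is the safety check. Below is a compact C sketch for a single resource type; the multi-resource version replaces the scalars with per-type vectors, and max, alloc, and avail are illustrative inputs:

    #include <stdbool.h>

    #define N 5   /* number of processes (illustrative) */

    /* Returns true if every process can finish in some order: repeatedly
       pick a process whose remaining need (max - alloc) fits within what
       is available, let it run to completion, and reclaim its resources. */
    bool is_safe(const int max[N], const int alloc[N], int avail)
    {
        bool finished[N] = { false };
        for (int done = 0; done < N; ) {
            int i;
            for (i = 0; i < N; i++) {
                if (!finished[i] && max[i] - alloc[i] <= avail) {
                    avail += alloc[i];       /* process i finishes, releases all */
                    finished[i] = true;
                    done++;
                    break;
                }
            }
            if (i == N)
                return false;                /* nobody can proceed: unsafe */
        }
        return true;                         /* a safe sequence exists */
    }

A request is then granted only if tentatively adding it to the requester's allocation still leaves is_safe() returning true.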



What is segmentation in the operating system?

Hello Friends, In this blog post, I am going to discuss the segmentation process. Segmentation is a memory management scheme that divides the address space of a single process into blocks that may be placed into noncontiguous areas of memory to reduce the average size of a request.

In segmentation, segments are formed at program translation time by grouping together logically related items. For example, a typical process may have separate code, data, and stack segments.

Data or code shared with other processes may be placed in their own dedicated segments. Being the result of a logical division, individual segments generally have different sizes.

Although different segments may be placed in separate, noncontiguous areas of physical memory, items belonging to a single segment must be placed in contiguous areas of physical memory. Thus, segmentation possesses some properties of both contiguous and noncontiguous schemes for memory management.

A logical address space is a collection of segments. Each segment has a name and a length. The addresses specify both the segment name and the offset within the segment. The user, therefore, specifies each address by two quantities – a segment name and an offset.

For simplicity of implementation, segments are numbered and are referred to by a segment number rather than by a segment name. Thus a logical address consists of a two-tuple:

<segment number, offset>

Since physical memory in segmented systems generally retains its linear array organization, some address translation mechanism is needed to convert a two-dimensional virtual segment address into its unidimensional physical equivalent.

Thus, we must define an implementation to map two-dimensional user-defined addresses into one-dimensional physical addresses. This mapping is effected by a segment table.

Each entry of the segment table has a segment base and segment limit. The segment base contains the starting physical address where the segment resides in memory whereas the segment limit specifies the length of the segment.

Fig 1 shows the use of a segment table. A logical address consists of two parts – a segment number, s, and an offset into that segment, d. The segment number is used as an index into the segment table.

Fig 1: Segmentation hardware

The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system: the logical addressing attempt is beyond the end of the segment.

If this offset is legal, it is added to the segment base to produce the address in the physical memory of the desired byte.

Fig 2 shows the situation with five segments numbered from 0 through 4. The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of that segment (the limit).

Fig 2: Segmentation example

Segment 2 is 400 bytes long and begins at location 4300. Thus a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to byte 1222 of segment 0 will result in a trap to the operating system, because this segment is only 1000 bytes long.
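A small C sketch of the translation just described; entry 2 of the table matches the example (base 4300, limit 400), while the other entries are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    struct seg_entry { unsigned base, limit; };

    /* Illustrative segment table; entry 2 matches the example above,
       and entry 0 is 1000 bytes long as in the text. */
    static const struct seg_entry seg_table[] = {
        { 1400, 1000 },   /* segment 0 */
        { 6300,  400 },   /* segment 1 */
        { 4300,  400 },   /* segment 2 */
    };

    static unsigned translate(unsigned s, unsigned d)
    {
        if (d >= seg_table[s].limit) {       /* offset beyond end of segment */
            fprintf(stderr, "trap: segmentation violation\n");
            exit(1);
        }
        return seg_table[s].base + d;        /* physical address */
    }

    int main(void)
    {
        printf("%u\n", translate(2, 53));    /* prints 4353 */
        return 0;
    }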

The implementation of segmentation differs from paging in an essential way: pages are fixed in size, whereas segments are not.



What is starvation? How is it similar to and different from deadlock?

Hello friends, In this blog post I am going to let you know about the problem of starvation, which is also known as indefinite postponement or indefinite blocking.

In any system that keeps processes waiting while it makes resource-allocation and process-scheduling decisions, it is possible to delay indefinitely the scheduling of a process while other processes receive the system's attention. This situation is called starvation.

Indefinite postponement may occur because of biases in a system's resource-scheduling policies. When resources are scheduled on a priority basis, it is possible for a given process to wait for a resource indefinitely as processes with higher priorities continue arriving. Waiting is an important aspect of what goes on inside the computer system.

The system should be designed to manage waiting processes fairly as well as efficiently. In some systems, indefinite postponement is prevented by allowing a process's priority to increase as it waits for a resource. This is called aging. Eventually, that process's priority will exceed the priorities of all incoming processes, and the waiting process will be serviced.
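A minimal C sketch of aging; the structure and the increment are illustrative:

    #define AGING_STEP 1   /* illustrative priority boost per scheduler pass */

    struct proc { int priority; int waiting; /* ... */ };

    /* Called periodically over all processes: the longer a process waits,
       the higher its priority grows, so it eventually overtakes any stream
       of newly arriving higher-priority processes and gets serviced. */
    void age_waiting(struct proc *procs, int n)
    {
        for (int i = 0; i < n; i++)
            if (procs[i].waiting)
                procs[i].priority += AGING_STEP;
    }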

Difference between deadlock and starvation:

Deadlock is a situation where a process is waiting for an event that will never occur. Starvation is a situation where a process is waiting for an event that does occur, but always in favor of other processes, so the waiting process never gets its turn.

Similarities between deadlock and starvation:

In both deadlock and starvation, a process is waiting for an event to occur.
