
Graph: Types of Graphs in Data Structure (2019)

A graph is a structure G = (V, E) consisting of two sets of objects: V, the set of vertices (or nodes), and E, the set of edges (or arcs). Say V = {v1, v2, v3, v4, v5} and E = {e1, e2, e3, e4, e5}. Each edge ek is identified with an unordered pair (vi, vj) of vertices. The vertices vi and vj associated with edge ek are called the end vertices of ek. You can see the graph below.

Fig1: Graph

An edge which has the same vertex as both of its end vertices is called a self-loop. Edge e1 is the self-loop in Fig1 above. If more than one edge is associated with the same pair of end vertices, those edges are referred to as parallel edges. Edges e4 and e5 in the above figure are parallel edges.

Types of Graph:

On the basis of the ordering of vertices and the way edges connect them, graphs are classified into various categories.

Undirected Graph:

A graph is called an undirected graph if each edge, say ek, is identified with an unordered pair of vertices, so that (vi, vj) and (vj, vi) refer to the same edge.

Fig2: Undirected Graph

Let's understand this with a simpler explanation: whether we go from v1 to v2 or from v2 to v1, in both cases we traverse the same edge e1.

Set of vertices V= {v1, v2, v3, v4}

Set of edges E = {e1, e2, e3, e4}

In this graph, the pairs (v1, v2) and (v2, v1) represent the same edge.
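
As a small illustration (a sketch added here, not code from the original post), an undirected graph is often stored as an adjacency list in which each edge is recorded under both of its end vertices, so looking up v1 to v2 or v2 to v1 finds the same edge. The particular edge-to-vertex assignments below are assumed for the example, since the figure is not reproduced here.

```python
from collections import defaultdict

# Minimal sketch of an undirected graph stored as an adjacency list.
# Vertex and edge names follow the example above (v1..v4, e1..e4);
# which edge joins which vertices is assumed for illustration.
class UndirectedGraph:
    def __init__(self):
        self.adj = defaultdict(list)   # vertex -> list of (neighbour, edge name)

    def add_edge(self, name, u, v):
        # The same edge is recorded under both end vertices,
        # so (u, v) and (v, u) refer to one edge.
        self.adj[u].append((v, name))
        self.adj[v].append((u, name))

g = UndirectedGraph()
g.add_edge("e1", "v1", "v2")
g.add_edge("e2", "v2", "v3")
g.add_edge("e3", "v3", "v4")
g.add_edge("e4", "v4", "v1")

print(g.adj["v1"])  # [('v2', 'e1'), ('v4', 'e4')]
```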

Directed Graph:

In a directed graph, every edge of the graph is identified by an ordered pair of vertices. A directed graph is also known as a digraph, so don't be confused by this other name.

Fig3: Directed Graph

In this graph, there exists a mapping that maps each edge of the graph onto some ordered pair of vertices (vi, vj). The first element of the ordered pair is referred to as the start vertex and the second element is known as the end vertex. Fig3 shows a directed graph.

The vertex pair (vi, vj) is read as vi-vj; the edge is directed from vi to vj.

Complete Graph:

A simple graph in which there exists an edge between every pair of vertices is called a complete graph. A complete graph is also known as a universal graph or a clique. Fig4 shows complete graphs of two, three, four, and five vertices.

Fig4: Complete Graph

Connected Graph:

A graph G is known as a connected graph if there is at least one path between every pair of vertices in G. Otherwise, G is disconnected. See the image below for a connected graph.

Fig5: Connected Graph
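
As a hedged sketch (not from the original post), connectivity can be checked with a breadth-first search from any single vertex: the graph is connected exactly when the search reaches every vertex. The adjacency-dictionary layout and vertex names below are assumed for illustration.

```python
from collections import deque

def is_connected(adj):
    """adj maps each vertex to a list of neighbouring vertices."""
    vertices = list(adj)
    if not vertices:
        return True                     # an empty graph is trivially connected
    seen = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(vertices)   # connected iff BFS reached every vertex

# Hypothetical example: a square v1-v2-v3-v4-v1 is connected.
square = {"v1": ["v2", "v4"], "v2": ["v1", "v3"],
          "v3": ["v2", "v4"], "v4": ["v3", "v1"]}
print(is_connected(square))             # True
```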

Multi Graph:

A graph which contains a pair of nodes joined by more than one edge is called a multigraph and such edges are called parallel edges. See the image below for the multigraph.

Fig6: Multigraph

Conclusion:

In this blog post, you have learned about graphs and the types of graphs in data structure. A graph is a structure built with the help of vertices (or nodes) and edges, and on the basis of their properties graphs are further classified into different types such as the simple graph, undirected graph, directed graph, connected graph, complete graph, and multigraph. With a change in how vertices and edges are connected, the nature of a graph changes and it is known by another name. Graphs are used for solving many mathematical problems and for obtaining optimal solutions to such problems.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about graphs in data structure and the types or classification of graphs.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


AVL Tree in Data Structure

AVL stands for Adel’son-Vel’skii and Landis, two Russian mathematicians whose full names were G.M. Adel’son-Vel’skii and E.M. Landis. In 1962 they described the property of the height-balanced tree, which was named the AVL tree in their honor.

As per their theory, a height-balanced tree is a binary tree in which the difference in heights between the left and right subtrees is not more than one for every node. In honor of these mathematicians, such a binary tree is also known as an AVL tree.

In order to maintain the height-balanced property of an AVL tree, it is not necessary to know the height of each subtree. We can get by with maintaining, at each node, a balance factor that indicates the difference in heights of its left and right subtrees.

Each node in a balanced binary tree has a balance factor of 1, -1, or 0, depending on whether the height of its left subtree is greater than, less than, or equal to the height of its right subtree.

A balance factor of 1 indicates that the left subtree is heavier: there is a path of length n from the node to a leaf in its left subtree, whereas the longest such path in its right subtree has length n-1.

A balance factor of 0 indicates that the longest paths in the two child subtrees are equal, and a balance factor of -1 indicates that the right subtree possesses the longer path.

Fig1: AVL Tree Example

Fig1 shows a balanced binary tree, an AVL tree, with the balance factor of each node. In a balanced tree, each node must be in one of the three states -1, 0, or 1. If there exists a node in the tree for which this is not true, then the tree is said to be unbalanced. Fig2 shows an unbalanced binary tree.
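
The balance factor described above can be computed recursively from subtree heights; a binary tree is height-balanced (an AVL tree) when every node's factor is -1, 0, or 1. The following is a minimal sketch with a hypothetical Node class, not code from the original post.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    # Height counted in nodes; an empty subtree has height 0.
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    # Positive -> left side heavier, negative -> right side heavier.
    return height(node.left) - height(node.right)

def is_avl(node):
    if node is None:
        return True
    return (abs(balance_factor(node)) <= 1
            and is_avl(node.left) and is_avl(node.right))

# Hypothetical example: a three-node tree with balance factor 0 at the root.
root = Node(20, Node(10), Node(30))
print(balance_factor(root), is_avl(root))   # 0 True
```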

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about the AVL tree.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


Tree: What is a tree in data structure? (2019)

The tree is considered one of the most flexible, versatile, and powerful data structures; it is used to represent data items in a hierarchical relationship. It is one of the most important data structures in the field of computer science.

We can also define a tree as a finite set of one or more data items (also called nodes) such that:

  • There is a special node called the root of the tree.
  • The remaining nodes are partitioned into n >= 0 disjoint subsets, each of which is itself a tree; these subsets are called subtrees.

In simpler words, a tree is a non-linear data structure in which items are arranged in a hierarchical order; it is used to represent the hierarchical relationships existing amongst several data items.

Fig1 shows a tree with 18 nodes. The root node N1 has three subtrees T1, T2, T3, and the roots of these subtrees are N2, N3, N4 respectively.

The number of subtrees of a node is called its degree; the root N1 has degree 3. A node with degree zero is called a leaf node. Leaf nodes are also called terminal nodes.

Fig1: Tree

If x is the root node, its subtrees are T1, T2, T3, and the roots of those subtrees are R1, R2, R3, then R1, R2, and R3 are the children of x. In turn, x is called the parent of R1, R2, and R3. Children of the same parent are called siblings.

In Fig1, N1 is the parent of N2, N3, and N4; hence N2, N3, and N4 are siblings. The degree of a tree is the maximum degree of any node in the tree. The tree in Fig1 has degree 5.

The ancestors of a node are all the nodes along the path from the root to that node; the ancestors of N14 are N5, N2, and N1. The descendants of a node are all the nodes along the paths from that node down to the terminal nodes; for example, N2, N5, and N14 are descendants of N1.

Each node of a tree is assigned a level number as follows: the root node of the tree is assigned level number 0, and every other node is assigned a level number which is one more than the level number of its parent (the root of the subtree to which it belongs). Nodes with the same level number are said to be of the same generation.

The height or depth of a tree is the maximum number of nodes in a branch; this turns out to be one more than the largest level number of the tree.
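
To make the definitions of degree and height concrete, here is a small sketch added for illustration (the node names and shape are hypothetical, not the 18-node tree of Fig1); it computes the degree of a tree and its height for a general tree.

```python
class TreeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # degree of a node = len(children)

def tree_degree(node):
    # Degree of the tree = maximum degree over all of its nodes.
    return max([len(node.children)] + [tree_degree(c) for c in node.children])

def tree_height(node):
    # Height = maximum number of nodes on a root-to-leaf branch,
    # i.e. one more than the largest level number.
    if not node.children:
        return 1
    return 1 + max(tree_height(c) for c in node.children)

# Hypothetical tree: N1 has subtrees rooted at N2, N3, N4.
n1 = TreeNode("N1", [TreeNode("N2", [TreeNode("N5")]),
                     TreeNode("N3"),
                     TreeNode("N4")])
print(tree_degree(n1), tree_height(n1))   # 3 3
```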

Conclusion:

A tree is a powerful data structure that is used to store data items and process them as required. There are several tree traversal techniques that are used to process the data. The degree of a node is the total number of subtrees attached to that node, and the degree of the tree is the maximum degree among its nodes.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about trees in data structure.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


Compaction: Dynamic memory management.

In this tutorial, I am going to explain one of the methods of dynamic memory management, which is named compaction. In this process, all marked or used nodes are shifted to one end of memory and all available memory is kept at the other end.

In very simple language, compaction is a process in which we move all the used nodes to one end of the memory and all the available memory to the other end. The algorithm that performs such an operation is called a compaction algorithm.

In this scheme, memory is initially one large block of available storage. When requests for storage arrive, blocks of memory are allocated sequentially, starting from the first location in memory, as shown in Fig1.

A variable, freepoint, contains the address of the first location following the last allocated block; in Fig1, freepoint equals 950. When a block is freed, freepoint remains unchanged and no combining of free spaces takes place.

Fig1: Compaction

When a block of size n is allocated, freepoint is increased by n. This continues until a block of size n is requested and freepoint + n - 1 is larger than the highest address in memory. At that point the request cannot be satisfied without further action being taken.

At that point, user routines come to a halt and the system compaction routine is called. This routine copies all allocated blocks into sequential memory locations starting from the lowest address in memory.

Thus all free blocks that were interspersed with allocated blocks are eliminated, and freepoint is reset to the sum of the sizes of all the allocated blocks. One large free block is therefore created at the upper end of memory, and the user request may be filled if there is sufficient storage available. This process is shown in Fig1 on a memory of 1024 words.

When allocated blocks are copied into a lower portion of memory, special care must be taken so that pointer values remain correct. For example, the contents of memory location 420 in allocated block 2 of Fig1 might contain the address 340. After block 2 is moved to locations 125 through 299, location 420 becomes location 220, and its contents must be changed from 340 to 140.
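
Below is a rough simulation of this scheme, added as a sketch (the block layout, field names, and the 200-word shift are assumptions chosen to echo the example, not the post's actual routine). It moves every allocated block to the low end of memory, adjusts stored pointers by the amount the block they point into has moved, and returns the new freepoint.

```python
def compact(blocks):
    """Sketch of a compaction routine. blocks is a list of dicts
    {'start': int, 'size': int, 'pointers': [addr, ...]} describing the
    allocated blocks; 'pointers' are addresses stored inside the block."""
    ordered = sorted(blocks, key=lambda b: b['start'])

    # Work out how far down each allocated block will move.
    new_start, shifts = 0, []               # shifts: (old_start, old_end, shift)
    for blk in ordered:
        shifts.append((blk['start'], blk['start'] + blk['size'],
                       blk['start'] - new_start))
        new_start += blk['size']

    def fix(addr):
        # A pointer into an allocated block moves by that block's shift.
        for old_start, old_end, shift in shifts:
            if old_start <= addr < old_end:
                return addr - shift
        return addr                          # points into free space: unchanged

    # Move the blocks and adjust every stored pointer.
    for blk, (old_start, _, shift) in zip(ordered, shifts):
        blk['pointers'] = [fix(p) for p in blk['pointers']]
        blk['start'] = old_start - shift

    return sum(b['size'] for b in blocks)    # the new freepoint

# Hypothetical layout loosely following the example above: block 2 starts at
# 325, holds a pointer to address 340, and is moved down by 200 words.
blocks = [{'start': 0,   'size': 125, 'pointers': []},
          {'start': 325, 'size': 175, 'pointers': [340]}]
print(compact(blocks))                       # 300 (new freepoint)
print(blocks[1]['start'], blocks[1]['pointers'])   # 125 [140]
```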

Conclusion:

Memory compaction is the process of shifting all the free memory to one end and all the used or marked nodes to the other end, so that when a memory request is no larger than the total free pool, we can easily allocate a contiguous block to satisfy it.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about memory compaction.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


What is Garbage Collection?

Garbage collection is a method of detecting and reclaiming unused nodes. Nodes which are in use and allocated are left untouched, whereas any node no longer in use is recovered.

When a request is made for additional nodes and there are none available, a system routine called the garbage collector searches through all of the nodes in the system, identifies those that are no longer accessible from an external pointer, and restores the inaccessible nodes to the available pool.

The process of garbage collection takes place in two phases. The first phase, called the marking phase, marks all nodes that are accessible from an external pointer.

The second phase, called the collection phase, involves proceeding sequentially through memory and freeing all nodes that have not been marked. This second phase is trivial when all nodes are of fixed size: it requires only the examination of each node to see whether or not it has been marked.

If there are a total of n nodes then the second phase of garbage collection can be carried out in O(n) steps. In this situation, it is only the first or marking phase that is of any interest in designing the algorithm.

When variable-size nodes are in use, it is desirable to compact memory so that all free nodes form a contiguous block of memory. In this case, the second phase is referred to as memory compaction. Compaction of disk space to reduce average retrieval time is desirable even for fixed-size nodes.
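
Here is a toy mark-and-sweep sketch of the two phases just described (the node layout and names are hypothetical, added for illustration): everything reachable from the external pointer is marked, and the sweep then returns every unmarked node to the free pool.

```python
class GCNode:
    def __init__(self, value):
        self.value = value
        self.refs = []        # references to other nodes
        self.marked = False

def mark(node):
    # Phase 1: mark every node reachable from an external pointer.
    if node is None or node.marked:
        return
    node.marked = True
    for child in node.refs:
        mark(child)

def sweep(all_nodes):
    # Phase 2: walk every node; unmarked ones go back to the free pool.
    free_pool, live = [], []
    for node in all_nodes:
        if node.marked:
            node.marked = False   # clear the mark for the next collection
            live.append(node)
        else:
            free_pool.append(node)
    return live, free_pool

# Hypothetical heap: a and b are reachable from the external pointer; c is garbage.
a, b, c = GCNode("a"), GCNode("b"), GCNode("c")
a.refs.append(b)
external_pointer = a
mark(external_pointer)
live, freed = sweep([a, b, c])
print([n.value for n in freed])   # ['c']
```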

Conclusion:

Garbage collection is simply the phenomenon of releasing the memory held by objects which are no longer used by the program. When we create a variable, a function, or any other object while developing a program, then once the use of these objects is over, the garbage collector detects all such objects and frees their memory blocks by reclaiming the resources from them, so that the memory can be used by another variable or function.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about garbage collection.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


Dynamic Memory Management

Dynamic memory management means that memory is given to a program variable or data structure at the moment it is needed. Here we are explaining this dynamic memory management phenomenon with reference to data structures.

There are many occasions when we do not know how much memory a program or a program variable will need before compiling and running the program. Static memory allocation at compile time cannot handle such situations, so dynamic memory management is the best help when we need a chunk of memory at run time. In most real-life problems, we need dynamic memory management.

In a multiprocessing computer environment, several programs reside in memory at the same time, and different programs have different memory requirements.

For example, one program may require 50K of memory, another may require 100K, and yet another 200K. Whenever such a requirement arises, the operating system should be able to provide a contiguous block of memory of the right size.

When the execution of a program is completed, it releases its memory and frees the block allocated to it, and this freed block may then be allocated to another program which needs it.

In a dynamic processing environment, the required memory is not known before the time of actual execution. Moreover, blocks of memory will, in general, be freed in some order different from that in which they were allocated.

At the start of the computer system, no jobs are in memory, so the whole memory, say of size M words, is available for allocation to programs. Jobs are then submitted to the computer, and requests are made for variable-size blocks of memory.

A large contiguous amount of storage may be required by a particular program, for example one that uses a large array, so it would be impractical to entertain such a large block all at one time; similarly, a program may require storage blocks in a large variety of sizes. In such cases, a dynamic memory management system must be able to process requests for blocks of variable length.

For example, consider a small memory of 1024 words. Suppose requests are made for three blocks of storage of 348, 110, and 212 words, and assume that these blocks are allocated sequentially, as shown in Fig1 below.

Fig1: Dynamic memory management

Now suppose the second block, of size 110, is freed, resulting in the situation shown in Fig2. The 464 words of free space are now available only in noncontiguous blocks, so a request for 400 words cannot be satisfied.

Suppose that block 3 is now freed. Clearly, it is not desirable to retain three separate free blocks of 110, 212, and 354 words. Rather, the blocks should be combined into a single large block of 676 words so that further large requests can be satisfied, as shown in Fig3.
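
The following first-fit free-list sketch (added here as an illustration; the class and method names are assumptions, not from the post) replays the example above: after blocks of 348, 110, and 212 words are allocated from a 1024-word memory and then the second and third blocks are freed, adjacent free pieces are combined into a single 676-word block.

```python
MEMORY = 1024

class FirstFitAllocator:
    def __init__(self, size):
        self.free = [(0, size)]           # list of (start, length) free blocks

    def allocate(self, n):
        for i, (start, length) in enumerate(self.free):
            if length >= n:               # first fit
                if length == n:
                    self.free.pop(i)
                else:
                    self.free[i] = (start + n, length - n)
                return start
        return None                        # no free block is large enough

    def release(self, start, n):
        self.free.append((start, n))
        self.free.sort()
        # Combine contiguous free blocks into one larger block.
        merged = [self.free[0]]
        for s, l in self.free[1:]:
            ps, pl = merged[-1]
            if ps + pl == s:
                merged[-1] = (ps, pl + l)
            else:
                merged.append((s, l))
        self.free = merged

mem = FirstFitAllocator(MEMORY)
b1 = mem.allocate(348)                    # words 0..347
b2 = mem.allocate(110)                    # words 348..457
b3 = mem.allocate(212)                    # words 458..669
mem.release(b2, 110)
print(mem.free)                           # [(348, 110), (670, 354)] -> 464 words, fragmented
mem.release(b3, 212)
print(mem.free)                           # [(348, 676)] -> one 676-word block
```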

Conclusion

Thus this example of dynamic memory management shows us the necessity of keeping track of available space, allocating portions of that space when allocation requests are made, and combining contiguous free spaces when a block is freed. Free memory should be kept in as large a chunk as possible so that it can be used by a program that needs a large block of memory.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about dynamic memory management.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


Josephus problem & Circular linked list?

In this blog post, I am going to discuss the interesting Josephus problem, which you would have studied in the data structure course. This problem involves soldiers surrounded by the enemy and only one horse, which a single soldier can use to escape from the enemy. You will also learn how to solve the Josephus problem using a circular linked list.

What is the Josephus problem?

In the Josephus problem, a few soldiers are surrounded by the enemy and there is only one horse with them. So only one soldier can escape from the enemy area. Now the problem amongst the soldiers is to decide which one of them will escape using this horse.

How the Josephus problem is solved using a circular linked list?

The Josephus problem can easily be solved using a circular linked list. To solve it, the soldiers form a circle, and a random number n is picked from a hat containing many number slips. In the same manner, the name of a soldier is picked from a hat by choosing a random paper slip out of the many inside it.

Counting now starts in the clockwise direction from the soldier whose name was picked, and the count goes on until it reaches the random number n that was also picked. The soldier on whom the count n falls is removed from the circle.

The count then begins again with the next soldier in the clockwise direction, and again a soldier is removed when the count reaches n. This process goes on until a single soldier is left at the end; that soldier gets the horse and escapes from the enemy area.

To understand the Josephus problem better, let us take an example. Suppose n = 3 and assume there are five soldiers named A, B, C, D, and E. We start counting three soldiers from A, as per the value of n, so the count runs (A, B) C and C is removed.

Now we start counting from D and the count reaches (D, E) A, so this time A is eliminated. Next we start counting from B and the count reaches (B, D) E, so E is removed from the circle. We start again at B and the count reaches (B, D) B, so this time B is removed from the list. Thus D is the one who escapes from the enemy area. See the image of the Josephus problem solution below.

Josephus Solution

So the circular linked list is a very natural way to solve the Josephus problem: each node is treated as a soldier, and it is possible to reach any node from any other node by counting around the circle. To represent the removal of a soldier from the circle, a node is deleted from the circular list. Finally, when only one node is left, we have the final result: the soldier who escapes.
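
The sketch below is a simple illustration added here (not code from the original post): it builds a one-way circular linked list of soldiers and deletes every n-th node until one remains. With n = 3 and soldiers A to E it reports D, matching the walkthrough above.

```python
class SoldierNode:
    def __init__(self, name):
        self.name = name
        self.next = None

def josephus_survivor(names, n):
    # Build a circular linked list of soldiers.
    nodes = [SoldierNode(name) for name in names]
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % len(nodes)]

    prev, current = nodes[-1], nodes[0]   # start counting at the first soldier
    while current.next is not current:    # stop when one node is left
        for _ in range(n - 1):            # advance the count to the n-th soldier
            prev, current = current, current.next
        prev.next = current.next          # delete that node from the circle
        current = current.next            # counting restarts at the next soldier
    return current.name

print(josephus_survivor(["A", "B", "C", "D", "E"], 3))   # D
```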

Conclusion:

The Josephus problem is the problem of soldiers who are surrounded by the enemy and have only one horse with them; they need to decide which soldier will escape using this horse. To solve the problem they form a circle, and with the help of a random number and a name chosen as a starting point, they start counting in the clockwise direction and remove a soldier each time the count is reached. They repeat this process until one soldier is left, and that is the solution to the problem.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about the interesting Josephus problem and its solution using a circular linked list.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


What is a circular linked list?

A list with any number of nodes, each containing data and an address pointer, in which the list starts and finishes at the same first node, is known as a circular linked list. Such a list forms a circle, having a common start and end point.

In the case of an ordinary linked list, we have seen that the first node is pointed to by an external pointer and the last node has a null value in its address field. In a circular linked list, by contrast, the address field of the last node points back to the first node.

We should be very careful while traversing a circular linked list, as there is a chance of getting stuck in an infinite loop if the end node cannot be detected. To avoid this situation, we can set an external pointer at the starting node and use it as the stopping point, or end point, of the traversal.

Circular linked list

As an alternative method, we can make a header node our first node and keep a special value in its info field, one that will not occur as ordinary data anywhere else in the list. We then halt the traversal of the circular linked list once this header node is reached.
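
A minimal sketch of the header-node idea (the names and sentinel below are assumptions added for illustration): traversal walks around the circle and halts when it comes back to the header node, whose info field holds a special value that never appears as ordinary data.

```python
class CNode:
    def __init__(self, info):
        self.info = info
        self.next = None

HEADER = object()                  # sentinel value never used as ordinary data

def build_circular(items):
    header = CNode(HEADER)
    header.next = header           # an empty list points back to itself
    tail = header
    for item in items:
        tail.next = CNode(item)
        tail = tail.next
    tail.next = header             # the last node closes the circle at the header
    return header

def traverse(header):
    node = header.next
    while node is not header:      # stop once the header node is reached again
        print(node.info)
        node = node.next

traverse(build_circular([10, 20, 30]))   # prints 10, 20, 30 and halts
```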

Conclusion:

A circular linked list in data structure is a list whose nodes form a circle as we traverse the data elements. This circular list has a common starting and ending point which is, in general, the very first node or header node; the header may also be filled with some special data so that it can be detected as the place where we stop traversing.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about the circular linked list.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


What is a linked list in data structure? (2019)

A linked list is a collection of data elements, also known as nodes, arranged in a linear fashion, where each node is linked to the next one. In this logical ordering, each element is pointed to by its previous element; the very first node has no predecessor, and the last node points to no further node.

A linked list is a serially connected chain of nodes, where each node contains two fields: one for the data element and another for the address, which points to the next node.

In the data field, we can have a character, an integer, a string, or even a large record, whereas in the address field we have the address of the next node, held by an address pointer.

To access the whole linked list we need an external pointer which holds the address of the very first node. In the address field of the last node we have a null value, which lets us know that this is the end of the list and there are no more nodes after it. See the image below to understand it better.

Linked list
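
As a small illustration of the node layout just described (an info field plus an address field, an external pointer to the first node, and a null value in the last node), here is a hedged sketch; the class and function names are assumptions, not from the post.

```python
class ListNode:
    def __init__(self, data):
        self.data = data     # info field
        self.next = None     # address field; None plays the role of the null value

def build(values):
    head = None              # external pointer to the first node
    for value in reversed(values):
        node = ListNode(value)
        node.next = head     # the new node points to the current first node
        head = node
    return head

def traverse(head):
    node = head
    while node is not None:  # the null address field marks the end of the list
        print(node.data)
        node = node.next

traverse(build([1, 2, 3]))   # prints 1, 2, 3
```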

Advantages of a linked list:

A linked list is a dynamic data structure:

As per the heading, a linked list is a dynamic data structure, which means it can grow or shrink in memory during the execution of a program, as the program requires.

Efficient memory utilization:

Memory is allocated to the linked list only when it is needed, and it is deallocated when it is no longer needed.

Ease in insertion and deletion operation:

Using a linked list, insertion and deletion operations are much easier.

Complex Application easy solution:

Many complex applications are easily handled by linked lists.

Disadvantages of a linked list:

More memory space is needed, since each node carries an extra address field in addition to its data.

Accessing an arbitrary data item in a linked list can be slow and time-consuming, since the list must be traversed from the beginning.

Conclusion:

A linked list is a data structure consisting of nodes connected in serial order. Each node has two fields: one for the data and another for the address pointer to the next node. The first node of the linked list is pointed to by an external pointer, and the last node has a null value in its address field, which tells us that this is the end of the linked list.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about the linked list.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!


What is a priority queue?

If we are able to insert and remove elements of a queue at any position based on some specific, predefined priority, then this type of queue is known as a priority queue.

You can understand this with the example given in Fig1 below, of a printer and a few jobs waiting to use it to complete their print jobs on a LAN system.

We have declared a few priorities, say 1, 2, 3, and 4, which are attached to the jobs of a supervisor, a teacher, students, and outside users respectively. Now if a job arrives with priority p, it is immediately inserted at the end of the other jobs with priority p. And for p = 1, 2, 3, 4, jobs are always removed from the front of the queue.

Fig1: Priority Queue

In generalized language, we can conclude that a priority queue is a series of queues where we already know the priority of each element in the queue. Fig 1.1 below shows the representation of a single priority queue as four separate queues, each of them following the FIFO discipline.

Fig 1.1: Queue based on priority

Only when the first queue is empty are elements removed from the second queue. In the same manner, elements of the third queue are removed only when the first and second queues are empty, and so on.

This representation of a single priority queue as a series of queues also suggests an efficient storage scheme for a priority queue: based on its priority, an element is always added at the end of the queue to which it belongs.

If only a single sequential structure is used for the priority queue, then at insertion time a new element may have to be placed in the middle of the queue, which can require the movement of several elements. Thus it is a better option to split the priority queue into several queues, each having its own storage structure.
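
Here is a hedged sketch of the scheme just described (the class and method names are assumptions added for illustration): one FIFO queue per priority level, with 1 as the highest priority, insertion at the rear of the queue for that priority, and removal always from the lowest-numbered non-empty queue.

```python
from collections import deque

class PriorityQueue:
    def __init__(self, levels=(1, 2, 3, 4)):
        # One FIFO queue per priority level; 1 is the highest priority.
        self.queues = {p: deque() for p in levels}

    def insert(self, item, priority):
        # A job with priority p goes to the rear of the queue for p.
        self.queues[priority].append(item)

    def remove(self):
        # Serve from the lowest-numbered (highest-priority) non-empty queue.
        for p in sorted(self.queues):
            if self.queues[p]:
                return self.queues[p].popleft()
        return None                       # all queues are empty

pq = PriorityQueue()
pq.insert("student job", 3)
pq.insert("supervisor job", 1)
pq.insert("teacher job", 2)
print(pq.remove())   # supervisor job
print(pq.remove())   # teacher job
```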

Conclusion:

In general, a queue stores elements in a series: they are inserted at one end and removed from the other end. If insertion and deletion are instead performed on the basis of a priority assigned to each element in advance, then this type of queue is known as a priority queue.

In case of any queries, you can write to us at a5theorys@gmail.com; we will get back to you ASAP.

Hope you enjoyed this post about the priority queue in data structure.

Please feel free to give your valuable feedback in the comment section below.

Have a great time! Sayonara!