Thursday 23 January 2014

Breadth First Search and Depth First Search

Introduction

The objective of this article is to provide a basic introduction to graphs and the two algorithms commonly used to traverse them: Breadth First Search (BFS) and Depth First Search (DFS). BFS and DFS are among the most popular algorithms asked about in programming interviews. I was not able to find a simple, precise explanation of this topic for beginners, so I decided to write one. This article will help any beginner gain a basic understanding of what graphs are, how they are represented, and how to traverse them using BFS and DFS.

What is a Graph?  

Graphs are one of the most interesting data structures in computer science. Graphs and trees are somewhat similar in structure; in fact, a tree can be viewed as a restricted kind of graph. However, there are two important differences between trees and graphs:
  1. Unlike trees, in graphs a node can have many parents. 
  2. The links (edges) between nodes may carry values or weights.
Graphs are good at modeling real-world problems, such as representing cities connected by roads and finding paths between them, or modeling an air traffic control system. These kinds of problems are hard to represent using simple tree structures. The following example shows a very simple graph:  
In the above graph, A, B, C, D, E, F are called nodes and the connecting lines between them are called edges. Edges can be directed, which is shown by arrows; they can also be weighted, in which case a number is assigned to each edge. Hence, a graph can be directed or undirected and weighted or un-weighted. In this article, we will discuss undirected, un-weighted graphs.  

Graph Representation in Programming Language 

Every graph has two components, nodes and edges. Let's see how these two components are implemented in a programming language like Java.  

1. Nodes   

Nodes are implemented as classes, structures, or linked-list nodes. As an example, in Java we can represent a node of the above graph as follows:
//
class Node
{
   char data;
   public Node(char c)
   {
      this.data = c;
   }
}
//  

2. Edges   

Edges represent the connection between nodes. There are two ways to represent edges.

Adjacency Matrix  

It is a two-dimensional array of Boolean flags. As an example, we can represent the edges of the above graph using the following adjacency matrix.
In the given graph, A is connected to the nodes B, C and D, so the adjacency matrix will have 1s in row 'A' for the columns 'B', 'C' and 'D'.
The advantages of representing the edges using an adjacency matrix are:
  1. Simplicity of implementation: you only need a two-dimensional array. 
  2. Creating and removing edges is easy: you only need to flip a Boolean flag. 
The drawbacks of using an adjacency matrix are: 
  1. Increased memory usage: you need to declare an N*N matrix where N is the total number of nodes.
  2. Redundancy of information: to represent an edge between A and B, you have to set two Boolean flags (A to B and B to A). 
In Java, we can represent the adjacency matrix as a two-dimensional array of integers or Booleans.
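For instance, a minimal sketch of an adjacency-matrix representation might look like the following (the class and method names here are illustrative, not part of the article's code):

   // A sketch of an adjacency-matrix graph. Nodes are identified by their index
   // (for the example above, indices 0..5 could stand for A..F).
   class AdjacencyMatrixGraph
   {
      private final boolean[][] adjMatrix;

      public AdjacencyMatrixGraph(int nodeCount)
      {
         adjMatrix = new boolean[nodeCount][nodeCount];
      }

      // For an undirected graph the edge is recorded in both directions.
      public void addEdge(int i, int j)
      {
         adjMatrix[i][j] = true;
         adjMatrix[j][i] = true;
      }

      public boolean isConnected(int i, int j)
      {
         return adjMatrix[i][j];
      }
   }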

Adjacency List  

It is an array of linked lists. In other words, each element of the array is a linked list containing the neighbours of one node. For the given example graph, the edges are represented by the adjacency list below:
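A minimal sketch of an adjacency-list representation, again with illustrative names, could be:

   import java.util.ArrayList;
   import java.util.LinkedList;
   import java.util.List;

   // A sketch of an adjacency-list graph: one linked list of neighbours per node.
   class AdjacencyListGraph
   {
      private final List<List<Integer>> adjList = new ArrayList<List<Integer>>();

      public AdjacencyListGraph(int nodeCount)
      {
         for (int i = 0; i < nodeCount; i++)
            adjList.add(new LinkedList<Integer>());
      }

      // For an undirected graph the edge is stored in both lists.
      public void addEdge(int i, int j)
      {
         adjList.get(i).add(j);
         adjList.get(j).add(i);
      }

      public List<Integer> neighbours(int i)
      {
         return adjList.get(i);
      }
   }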

Graph Traversal  

The breadth first search (BFS) and the depth first search (DFS) are the two algorithms used for traversing and searching a node in a graph. They can also be used to find out whether a node is reachable from a given node or not.  

Depth First Search (DFS) 

The aim of the DFS algorithm is to traverse the graph in such a way that it tries to go as far from the root node as possible. A stack is used in the implementation of depth first search. Let's see how depth first search works with respect to the following graph:  
As stated before, in DFS nodes are visited by going deeper into the graph from the starting node. If we do a depth first traversal of the above graph and print the visited nodes, the output is "A B E F C D". DFS visits the root node and then its child nodes until it reaches a node with no unvisited children (here the nodes E and F), then backtracks to the parent nodes.

Algorithmic Steps   

  1. Step 1: Push the root node onto the stack.
  2. Step 2: Loop until the stack is empty.
  3. Step 3: Peek at the node on top of the stack.
  4. Step 4: If that node has unvisited child nodes, get one unvisited child node, mark it as visited and push it onto the stack.
  5. Step 5: If that node does not have any unvisited child nodes, pop it from the stack.
Based upon the above steps, the following Java code shows the implementation of the DFS algorithm:
    //
    public void dfs()
    {
       // DFS uses the Stack data structure
       Stack<Node> s = new Stack<Node>();
       s.push(this.rootNode);
       rootNode.visited = true;
       printNode(rootNode);
       while (!s.isEmpty())
       {
          Node n = s.peek();
          Node child = getUnvisitedChildNode(n);
          if (child != null)
          {
             child.visited = true;
             printNode(child);
             s.push(child);
          }
          else
          {
             s.pop();
          }
       }
       // Clear the visited property of the nodes
       clearNodes();
    }
    
    //   

Breadth First Search (BFS)  

This is a very different approach to traversing the graph's nodes. The aim of the BFS algorithm is to visit the nodes closest to the root node first. A queue is used in the implementation of breadth first search. Let's see how BFS traversal works with respect to the following graph:
If we do a breadth first traversal of the above graph and print the visited nodes, the output is "A B C D E F". BFS visits the nodes level by level: it starts with level 0, which is the root node, then moves to the next level, which contains B, C and D, and then to the last level, which contains E and F. 

Algorithmic Steps   

  1. Step 1: Add the root node to the queue.
  2. Step 2: Loop until the queue is empty.
  3. Step 3: Remove a node from the queue.
  4. Step 4: If the removed node has unvisited child nodes, mark them as visited and insert them into the queue.
Based upon the above steps, the following Java code shows the implementation of the BFS algorithm:
    //
    public void bfs()
    {
       // BFS uses the Queue data structure
       Queue<Node> q = new LinkedList<Node>();
       q.add(this.rootNode);
       printNode(this.rootNode);
       rootNode.visited = true;
       while (!q.isEmpty())
       {
          Node n = q.remove();
          Node child = null;
          while ((child = getUnvisitedChildNode(n)) != null)
          {
             child.visited = true;
             printNode(child);
             q.add(child);
          }
       }
       // Clear the visited property of the nodes
       clearNodes();
    }
    // 
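The dfs() and bfs() methods above rely on a surrounding Graph class with a rootNode field and the helpers getUnvisitedChildNode(), printNode() and clearNodes(), which the article does not show. One possible sketch of that class, assuming the nodes are kept in a list, the edges in an adjacency matrix, and the Node class from earlier is extended with a boolean visited field:

    import java.util.ArrayList;
    import java.util.List;

    // An assumed Graph skeleton for the dfs()/bfs() methods above; only the
    // helper names come from the article, the internals are illustrative.
    class Graph
    {
       Node rootNode;
       List<Node> nodes = new ArrayList<Node>();
       int[][] adjMatrix;   // edges; assumed to be filled in elsewhere

       // Returns the first adjacent node that has not been visited yet, or null.
       Node getUnvisitedChildNode(Node n)
       {
          int index = nodes.indexOf(n);
          for (int j = 0; j < nodes.size(); j++)
          {
             if (adjMatrix[index][j] == 1 && !nodes.get(j).visited)
                return nodes.get(j);
          }
          return null;
       }

       void printNode(Node n)
       {
          System.out.print(n.data + " ");
       }

       // Resets the visited flags so the graph can be traversed again.
       void clearNodes()
       {
          for (Node n : nodes)
             n.visited = false;
       }
    }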

Binary Trees

Introduction

We extend the concept of linked data structures to structures containing nodes with more than one self-referential field. A binary tree is made of nodes, where each node contains a "left" reference, a "right" reference, and a data element. The topmost node in the tree is called the root. Every node (excluding the root) in a tree is connected by a directed edge from exactly one other node, called its parent. On the other hand, each node can be connected to at most two other nodes, called its children. Nodes with no children are called leaves, or external nodes. Nodes which are not leaves are called internal nodes. Nodes with the same parent are called siblings.
More tree terminology:
  • The depth of a node is the number of edges from the root to the node.
  • The height of a node is the number of edges from the node to the deepest leaf.
  • The height of a tree is the height of its root.
  • A full binary tree is a binary tree in which each node has exactly zero or two children.
  • A complete binary tree is a binary tree, which is completely filled, with the possible exception of the bottom level, which is filled from left to right.
A complete binary tree is a very special tree: it provides the best possible ratio between the number of nodes and the height. The height h of a complete binary tree with n nodes is O(log n). We can easily prove this by counting the nodes on each level, starting with the root and assuming that each level has the maximum number of nodes:

n = 1 + 2 + 4 + ... + 2^(h-1) + 2^h = 2^(h+1) - 1
Solving this with respect to h, we obtain

h = log2(n + 1) - 1 = O(log n)
where the big-O notation hides some superfluous details.

Advantages of trees

Trees are so useful and frequently used because they have some very important advantages:
  • Trees reflect structural relationships in the data
  • Trees are used to represent hierarchies
  • Trees provide efficient insertion and searching
  • Trees are very flexible, allowing subtrees to be moved around with minimum effort

Traversals

A traversal is a process that visits all the nodes in the tree. Since a tree is a nonlinear data structure, there is no unique traversal. We will consider several traversal algorithms, which we group into the following two kinds:
  • depth-first traversal
  • breadth-first traversal
There are three different types of depth-first traversals:
  • PreOrder traversal - visit the parent first and then left and right children;
  • InOrder traversal - visit the left child, then the parent and the right child;
  • PostOrder traversal - visit left child, then the right child and then the parent;
There is only one kind of breadth-first traversal--the level order traversal. This traversal visits nodes by levels from top to bottom and from left to right.
As an example consider the following tree and its four traversals:

PreOrder - 8, 5, 9, 7, 1, 12, 2, 4, 11, 3
InOrder - 9, 5, 1, 7, 2, 12, 8, 4, 3, 11
PostOrder - 9, 1, 2, 12, 7, 5, 3, 11, 4, 8
LevelOrder - 8, 5, 4, 9, 7, 11, 1, 12, 3, 2
   
In the next picture we demonstrate the order of node visitation. Number 1 denotes the first node visited in a particular traversal and 7 denotes the last node.
These common traversals can be represented as a single algorithm by assuming that we visit each node three times. An Euler tour is a walk around the binary tree where each edge is treated as a wall which you cannot cross. In this walk, each node is visited either on the left, from below, or on the right. The Euler tour in which we visit nodes on the left produces a preorder traversal. When we visit nodes from below, we get an inorder traversal. And when we visit nodes on the right, we get a postorder traversal.
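For reference, the three depth-first traversals can be written as short recursive methods. A minimal sketch, assuming a bare-bones node class with an int key and left/right references (not the BST class defined later):

   // Recursive depth-first traversals of a binary tree.
   class TreeNode
   {
      int data;
      TreeNode left, right;
      TreeNode(int data) { this.data = data; }
   }

   class Traversals
   {
      // parent, then left subtree, then right subtree
      static void preOrder(TreeNode node)
      {
         if (node == null) return;
         System.out.print(node.data + " ");
         preOrder(node.left);
         preOrder(node.right);
      }

      // left subtree, then parent, then right subtree
      static void inOrder(TreeNode node)
      {
         if (node == null) return;
         inOrder(node.left);
         System.out.print(node.data + " ");
         inOrder(node.right);
      }

      // left subtree, then right subtree, then parent
      static void postOrder(TreeNode node)
      {
         if (node == null) return;
         postOrder(node.left);
         postOrder(node.right);
         System.out.print(node.data + " ");
      }
   }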

Binary Search Trees

We consider a particular kind of binary tree called a Binary Search Tree (BST). The basic idea behind this data structure is to provide a storage repository that allows efficient sorting, searching and retrieval of data.
    A BST is a binary tree where nodes are ordered in the following way:
  • each node contains one key (also known as data)
  • the keys in the left subtree are less than the key in its parent node; in short, L < P;
  • the keys in the right subtree are greater than the key in its parent node; in short, P < R;
  • duplicate keys are not allowed.
In the following tree all nodes in the left subtree of 10 have keys less than 10, while all nodes in the right subtree have keys greater than 10. Because both the left and right subtrees of a BST are again binary search trees, the above definition applies recursively to all internal nodes:

Implementation

We implement a binary search tree using a private inner class Node. In order to support the binary search tree property, we require that the data stored in each node is Comparable:

public class BST <AnyType extends Comparable<AnyType>>
{
   private Node<AnyType> root;

   private class Node<AnyType>
   {
      private AnyType data;
      private Node<AnyType> left, right;

      public Node(AnyType data)
      {
         left = right = null;
         this.data = data;
      }
   }
   ...

}

Insertion

The insertion procedure is quite similar to searching. We start at the root and recursively go down the tree searching for a location in the BST to insert the new node. If the element to be inserted is already in the tree, we are done (we do not insert duplicates). The new node always replaces a NULL reference.
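A possible recursive insert for the BST class above (a sketch only, not necessarily identical to the article's BST.java): walk down the tree and attach the new node where a null reference is found, skipping duplicates.

   // Insert toInsert into the tree; duplicates are ignored.
   public void insert(AnyType toInsert)
   {
      root = insert(root, toInsert);
   }

   private Node<AnyType> insert(Node<AnyType> node, AnyType toInsert)
   {
      if (node == null)
         return new Node<AnyType>(toInsert);   // the new node replaces a null reference

      int cmp = toInsert.compareTo(node.data);
      if (cmp < 0)
         node.left = insert(node.left, toInsert);
      else if (cmp > 0)
         node.right = insert(node.right, toInsert);
      // cmp == 0: the key is already in the tree, do nothing

      return node;
   }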
Exercise. Given a sequence of numbers:
11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31
Draw a binary search tree by inserting the above numbers from left to right.

Searching

Searching in a BST always starts at the root. We compare the data stored at the root with the key we are searching for (let us call it toSearch). If the node does not contain the key, we proceed either to the left or to the right child depending upon the comparison: if the result of the comparison is negative we go to the left child, otherwise to the right child. The recursive structure of a BST yields a recursive algorithm. Searching in a BST has O(h) worst-case runtime complexity, where h is the height of the tree. Since a binary search tree with n nodes has at least log n levels, it takes at least that many comparisons to find a particular node. Unfortunately, a binary search tree can degenerate into a linked list, increasing the search time to O(n).
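A recursive search for the same class might look like this (a sketch; the method name contains is illustrative):

   // Returns true if toSearch is stored somewhere in the tree.
   public boolean contains(AnyType toSearch)
   {
      return contains(root, toSearch);
   }

   private boolean contains(Node<AnyType> node, AnyType toSearch)
   {
      if (node == null)
         return false;                          // fell off the tree: not found

      int cmp = toSearch.compareTo(node.data);
      if (cmp == 0)
         return true;                           // found the key
      return (cmp < 0) ? contains(node.left, toSearch)
                       : contains(node.right, toSearch);
   }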

Deletion

Deletion is somewhat more tricky than insertion. There are several cases to consider. The node to be deleted (let us call it toDelete)
  • is not in a tree;
  • is a leaf;
  • has only one child;
  • has two children.
If toDelete is not in the tree, there is nothing to delete. If the toDelete node has only one child, the procedure of deletion is identical to deleting a node from a linked list: we just bypass the node being deleted.
Deletion of an internal node with two children is less straightforward. If we simply removed such a node, we would split the tree into two subtrees, and some children of the internal node would no longer be reachable. In the picture below we delete 8:
The deletion strategy is the following: replace the node being deleted with the largest node in its left subtree and then delete that largest node. By symmetry, the node being deleted can instead be swapped with the smallest node in its right subtree.
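A sketch of this strategy for the BST class above (again, not necessarily identical to BST.java): for a node with two children, the largest key of the left subtree is copied into the node and that largest node is then deleted from the left subtree.

   // Remove toDelete from the tree, if present.
   public void delete(AnyType toDelete)
   {
      root = delete(root, toDelete);
   }

   private Node<AnyType> delete(Node<AnyType> node, AnyType toDelete)
   {
      if (node == null)
         return null;                               // key is not in the tree

      int cmp = toDelete.compareTo(node.data);
      if (cmp < 0)
         node.left = delete(node.left, toDelete);
      else if (cmp > 0)
         node.right = delete(node.right, toDelete);
      else
      {
         if (node.left == null)  return node.right; // leaf or single child
         if (node.right == null) return node.left;

         Node<AnyType> largest = node.left;         // two children: find the
         while (largest.right != null)              // largest node in the left
            largest = largest.right;                // subtree,
         node.data = largest.data;                  // copy it into this node,
         node.left = delete(node.left, largest.data); // and delete the original
      }
      return node;
   }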
See BST.java for a complete implementation.
Exercise. Given a sequence of numbers:
11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31
Draw a binary search tree by inserting the above numbers from left to right and then show the two trees that can be the result after the removal of 11.
 

Non-Recursive Traversals

Depth-first traversals can be easily implemented recursively. A non-recursive implementation is a bit more difficult. In this section we implement a pre-order traversal as a tree iterator:

public Iterator<AnyType> iterator()
{
  return new PreOrderIterator();
}
where the PreOrderIterator class is implemented as an inner private class of the BST class

private class PreOrderIterator implements Iterator<AnyType>
{
  ...
}
The main difficulty is with the next() method, which requires the implicit recursive stack to be implemented explicitly. We will be using Java's Stack. The algorithm starts with the root and pushes it onto a stack. When a user calls the next() method, we check whether the top element has a left child. If it has a left child, we push that child onto the stack and return the parent node. If there is no left child, we check for a right child. If it has a right child, we push that child onto the stack and return the parent node. If there is no right child, we move back up the tree (by popping elements from the stack) until we find a node with a right child. Here is the next() implementation:

public AnyType next()
{
   Node cur = stk.peek();
   if(cur.left != null)
   {
      stk.push(cur.left);
   }
   else
   {
      Node tmp = stk.pop();
      while(tmp.right == null)
      {
         if (stk.isEmpty()) return cur.data;
         tmp = stk.pop();
      }
      stk.push(tmp.right);
   }
   return cur.data;
}
The following example shows the output and the state of the stack during each call to next(). Note that the algorithm works on any binary tree, not necessarily a binary search tree.
                   
  Output                  1      2      4        6          5      7        8      3
  Stack (top to bottom)   1      2 1    4 2 1    6 4 2 1    5 1    7 5 1    8 1    3
A non-recursive preorder traversal can be elegantly implemented in just a few lines of code. If you understand the next() implementation above, it should be no problem to grasp this one:

public AnyType next()
{
   if (stk.isEmpty()) throw new java.util.NoSuchElementException();

   Node cur = stk.pop();
   if(cur.right != null) stk.push(cur.right);
   if(cur.left != null) stk.push(cur.left);

   return cur.data;
}
Note, we push the right child before the left child.

Level Order Traversal

Level order traversal processes the nodes level by level: it first processes the root, then its children, then its grandchildren, and so on. Unlike the other traversal methods, it has no natural recursive version.
The traversal algorithm is similar to the non-recursive preorder traversal algorithm; the only difference is that the stack is replaced with a FIFO queue.
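A sketch of this idea as a method of the BST class above (it needs java.util.Queue and java.util.LinkedList to be imported):

   // Print the keys level by level, from left to right within each level.
   public void printLevelOrder()
   {
      if (root == null) return;

      Queue<Node<AnyType>> queue = new LinkedList<Node<AnyType>>();
      queue.add(root);
      while (!queue.isEmpty())
      {
         Node<AnyType> cur = queue.remove();       // oldest node first (FIFO)
         System.out.print(cur.data + " ");
         if (cur.left != null)  queue.add(cur.left);
         if (cur.right != null) queue.add(cur.right);
      }
   }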
See BST.java for a complete implementation.


Wednesday 22 January 2014

Binary Search Tree



A binary search tree is a binary tree in which each internal node x stores an element such that the elements stored in the left subtree of x are less than or equal to x and the elements stored in the right subtree of x are greater than or equal to x. This is called the binary-search-tree property. The basic operations on a binary search tree take time proportional to the height of the tree. For a complete binary tree with n nodes, such operations run in Θ(lg n) worst-case time. If the tree is a linear chain of n nodes, however, the same operations take Θ(n) worst-case time.



The height of the Binary Search Tree equals the number of links from the root node to the deepest node.



Implementation of Binary Search Tree

A binary search tree can be implemented as a linked data structure in which each node is an object with three pointer fields. The three pointer fields left, right and p point to the nodes corresponding to the left child, the right child and the parent, respectively. A NIL in any pointer field signifies that there is no corresponding child or parent. The root node is the only node in the BST structure with NIL in its p field.



 

 

 

Inorder Tree Walk

During this type of walk, we visit the root of a subtree between the left subtree visit and right subtree visit.

INORDER-TREE-WALK (x)
if x ≠ NIL then
    INORDER-TREE-WALK (left[x])
    print key[x]
    INORDER-TREE-WALK (right[x])


It takes Θ(n) time to walk a tree of n nodes. Note that the binary-search-tree property allows us to print all the elements of the binary search tree in sorted order.
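For example, if the keys 5, 3, 8, 1, 4 are inserted into an empty binary search tree in that order, an inorder walk starting at the root prints 1 3 4 5 8, i.e. the keys in increasing order.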

 

 

Preorder Tree Walk

In a preorder tree walk, we visit the root node before the nodes in either subtree.

PREORDER-TREE-WALK (x)
if x ≠ NIL then
    print key[x]
    PREORDER-TREE-WALK (left[x])
    PREORDER-TREE-WALK (right[x])

 


Postorder Tree Walk

In a postorder tree walk, we visit the root node after the nodes in both of its subtrees.

POSTORDER-TREE-WALK (x)
if x ≠ NIL then
    POSTORDER-TREE-WALK (left[x])
    POSTORDER-TREE-WALK (right[x])
    print key[x]


It takes O(n) time to walk (inorder, preorder or postorder) a tree of n nodes.



Binary-Search-Tree Property vs. Heap Property

In a heap, a node's key is greater than or equal to both of its children's keys. In a binary search tree, a node's key is greater than or equal to its left child's key and less than or equal to its right child's key; furthermore, this applies to the entire subtrees in the binary search tree case. It is very important to note that the heap property does not help print the nodes in sorted order, because it does not tell us in which subtree the next item is. If the heap property could be used to print the keys in sorted order in O(n) time (as the BST property can, as shown above), this would contradict the known lower bound on comparison sorting.

The last statement implies that, since sorting n elements takes Ω(n lg n) time in the worst case in the comparison model, any comparison-based algorithm for constructing a binary search tree from an arbitrary list of n elements takes Ω(n lg n) time in the worst case.

We can show the validity of this argument (in case you are thinking of beating the Ω(n lg n) bound) as follows: let c(n) be the worst-case running time for constructing a binary search tree from a set of n elements. Given an n-node BST, an inorder walk of the tree outputs the keys in sorted order (shown above). Since the worst-case running time of any comparison-based sorting algorithm is Ω(n lg n), we have

c(n) + O(n) = Ω(n lg n)
Therefore,                c(n) = Ω(n lg n).

 



Querying a Binary Search Tree

The most common operation performed on a BST is searching for a key stored in the tree. Other operations are MINIMUM, MAXIMUM, SUCCESSOR and PREDECESSOR. These operations run in O(h) time, where h is the height of the tree, i.e., the number of links from the root node to the deepest node.

The TREE-SEARCH (x, k) algorithm searches the tree rooted at x for a node whose key equals k. It returns a pointer to the node if it exists and NIL otherwise.

TREE-SEARCH (x, k)
if x = NIL     .OR.     k = key[x]
    then return x
if k < key[x]
    then return TREE-SEARCH (left[x], k)
    else return TREE-SEARCH (right[x], k)



Clearly, this algorithm runs in O(h) time where h is the height of the tree.

The iterative version of the above algorithm is very easy to implement.

ITERATIVE-TREE-SEARCH (x, k)
  1. while x ≠ NIL     .AND.     k ≠ key[x] do
  2.          if k < key[x]
  3.                  then x ← left[x]
  4.                  else x ← right[x]
  5. return x

 

The TREE-MINIMUM (x) algorithm returns a pointer to the node of the tree rooted at x whose key is the minimum of all keys in the tree. Due to the BST property, the minimum element can always be found by following left child pointers from the root until a NIL is encountered.

TREE-MINIMUM (x)
while left[x] ≠ NIL do
    x ← left[x]
return x

Clearly, it runs in O(h) time, where h is the height of the tree. Again, thanks to the BST property, the element whose key is the maximum can always be found by following right child pointers from the root until a NIL is encountered.

TREE-MAXIMUM (x)
while right[x] ≠ NIL do
    x ← right[x]
return x

Clearly, it runs in O(h) time where h is the height of the tree.

The TREE-SUCCESSOR (x) algorithm returns a pointer to the node in the tree whose key is the next one higher than key[x].

TREE-SUCCESSOR (x)
if right [x] ≠ NIL
    then return TREE-MINIMUM (right[x])
    else y ← p[x]
        while y ≠ NIL     .AND.    x = right[y] do
                x ← y
                y ← p[y]
        return y


Note that the algorithms TREE-MINIMUM, TREE-MAXIMUM, TREE-SUCCESSOR and TREE-PREDECESSOR never look at the keys.

An inorder tree walk of an n-node BST can be implemented in Θ(n) time by finding the minimum element in the tree with the TREE-MINIMUM (x) algorithm and then making n - 1 calls to TREE-SUCCESSOR (x).


Another way of Implementing Inorder walk on Binary Search Tree

Algorithm
  • find the minimum element in the tree with TREE-MINIMUM
  • Make n-1 calls to TREE-SUCCESSOR

Let us show that this algorithm runs in Θ(n) time. For a tree T, let mT be the number of edges that are traversed by the above algorithm. The running time of the algorithm for T is Θ(mT). We make the following claim:
mT is zero if T has at most one node, and 2e - r otherwise, where e is
the number of edges in the tree and
r is the length of the path from the
root to the node holding the maximum key.

Note that e = n - 1 for any tree with at least one node. This allows us to prove the claim by induction on e (and therefore, on n).
Base case   Suppose that e = 0. Then, either the tree is empty or consists only of a single node. So, e = r = 0. Therefore, the claim holds.
Inductive step    Suppose e > 0 and assume that the claim holds for all e' < e. Let T be a binary search tree with e edges. Let x be the root, and let T1 and T2 be the left and right subtrees of x, respectively. Since T has at least one edge, either T1 or T2 is nonempty. For each i = 1, 2, let ei be the number of edges in Ti, pi the node holding the maximum key in Ti, and ri the distance from pi to the root of Ti. Similarly, let e, p and r be the corresponding values for T. First assume that both T1 and T2 are nonempty. Then e = e1 + e2 + 2, p = p2, and r = r2 + 1. The action of the enumeration is as follows:
  • Upon being called, TREE-MINIMUM (x) traverses the left branch of x and enters T1.
  • Once the root of T1 is visited, the edges of T1 are traversed as if T1 were the input tree. This lasts until p1 is visited.
  • When TREE-SUCCESSOR is called from p1, the upward path from p1 to x is traversed and x is discovered to hold the successor.
  • When TREE-SUCCESSOR is called from x, the right branch of x is taken.
  • Once the root of T2 is visited, the edges of T2 are traversed as if T2 were the input tree. This lasts until p2 is reached, whereupon the algorithm halts.
By the above analysis, the number of edges that are traversed by the above algorithm, mT, is
mT = 1 + (2e1 - r1) + (r1 + 1) + 1 + (2e2 - r2)
      = 2(e1 + e2 + 2) - (r2 + 1)
      = 2e - r

Therefore, the claim clearly holds for this case.

 

Next suppose that T2 is empty. Since e > 0, T1 is nonempty. Then e = e1 + 1. Since x does not have a right child, x holds the maximum key. Therefore, p = x and r = 0. The action of the enumeration algorithm consists of the first two steps only. Therefore, the number of edges traversed by the algorithm is
mT = 1 + (2e1 - r1) + ( r1 +1)
      = 2(e1 + 1) - 0
      = 2e - r

Therefore, the claim holds for this case.

Finally, assume that T1 is empty. Then T2 is nonempty. It holds that
e = e2 + 1, p = p2, and r = r2 + 1. This time x holds the minimum key, and the action of the enumeration algorithm consists of the last two steps only. Therefore, the number of edges traversed by the algorithm is
mT = 1 + (2e2 - r2)
      = 2(e2+1) - (r2 + 1)
      = 2e - r

Therefore, the claim holds for this case.

The claim is proven. Since
e = n - 1, we have mT ≤ 2n. On the other hand, at least one edge has to be traversed when going from one node to another, so mT ≥ n - 1. Therefore, the running time of the above algorithm is Θ(n).



Consider any binary search tree T and let y be the parent of a leaf x. Our goal is to show that key[y] is
either   the smallest key in T larger than key[x]
or          the largest key in T smaller than key[x].

Proof        Suppose that x is a left child of y. Since key[y] > key[x], we only have to show that there is no node z with key[y] > key[z] > key[x]. Assume, to the contrary, that there is such a z, and choose the z that holds the smallest key among such nodes. Note that for every node u ≠ z, x, we have key[z] ≤ key[u] if and only if key[x] ≤ key[u]. Hence, if we search for key[z], the search path is identical to the search path for key[x] until it reaches z or x. Since x is a leaf (meaning it has no children), the search for key[z] cannot end at x, so the path reaches z first; therefore, z is an ancestor of x. Since y is the parent of x (it is given, in case you've forgotten!) and z ≠ y, z has to be a proper ancestor of y. But then either y lies in the left subtree of z, which gives key[y] < key[z], or x lies in the right subtree of z, which gives key[x] > key[z]. Either way we contradict the assumption key[y] > key[z] > key[x], so this is clearly impossible. Therefore, there is no such z.

The case when x is a right child of y is easy. Hint: symmetric.



INSERTION

To insert a node into a BST

  1. find a leaf at the appropriate place, and
  2. connect the new node to the parent of the leaf. 

TREE-INSERT (T, z)
y ← NIL
x ← root[T]
while x ≠ NIL do
    y ← x
    if key[z] < key[x]
        then x ← left[x]
        else x ← right[x]
p[z] ← y
if y = NIL
    then root[T] ← z
    else if key[z] < key[y]
        then left[y] ← z
        else right[y] ← z


Like other primitive operations on search trees, this algorithm begins at the root of the tree and traces a path downward. Clearly, it runs in O(h) time on a tree of height h.

 

Sorting

We can sort a given set of n numbers by first building a binary search tree containing these numbers (using the TREE-INSERT procedure repeatedly to insert the numbers one by one) and then printing the numbers by an inorder tree walk.

Analysis

Best-case running time
Printing takes O(n) time and the n insertions cost O(lg n) each (the tree is balanced, so half of the insertions are at depth lg(n) - 1). This gives a best-case running time of O(n lg n).
 
Worst-case running time
Printing still takes O(n) time, and n insertions costing O(n) each (when the tree is a single chain of nodes) give O(n²). The insertions cost 1, 2, 3, ..., n, which is an arithmetic series summing to n(n + 1)/2, i.e. about n²/2.

 

 

Deletion

Removing a node from a BST is a bit more complex, since we do not want to create any "holes" in the tree. If the node has one child, the child is spliced to the parent of the node. If the node has two children, then its successor has no left child, so we copy the successor into the node and delete the successor instead. TREE-DELETE (T, z) removes the node pointed to by z from the tree T. It returns a pointer to the node removed so that the node can be put on a free-node list, etc.

TREE-DELETE (T, z)
  1. if left[z] = NIL    .OR.     right[z] = NIL
  2.     then y ← z
  3.     else y ← TREE-SUCCESSOR (z)
  4. if left[y] ≠ NIL
  5.     then x ← left[y]
  6.     else x ← right[y]
  7. if x ≠ NIL
  8.     then p[x] ← p[y]
  9. if p[y] = NIL
  10.     then root[T] ← x
  11.     else if y = left[p[y]]
  12.         then left[p[y]] ← x
  13.         else right[p[y]] ← x
  14. if y ≠ z
  15.     then key[z] ← key[y]
  16.         if y has other fields, copy them too
  17. return y

The procedure runs in O(h) time on a tree of height h.

Thursday 2 January 2014

JAVA Naming Conventions


Packages:
Names should be in lowercase. With small projects that only have a few packages it's okay to just give them simple (but meaningful!) names:

     package mycalculator;


Classes:
Names should be in CamelCase. Try to use nouns because a class is normally representing something in the real world:

   class Customer
   class Account

Interfaces:
 Names should be in CamelCase. They tend to have a name that describes an operation that a class can do:

   interface Comparable
   interface Enumerable

Note that some programmers like to distinguish interfaces by beginning the name with an "I":

   interface IComparable
   interface IEnumerable

Methods:
Names should be in mixed case. Use verbs to describe what the method does:

   void calculateTax()
   String getSurname()

Variables:
Names should be in mixed case. The names should represent what the value of the variable represents:

   String firstName
   int orderNumber

Constants:
Names should be in uppercase:

   static final int DEFAULT_WIDTH
   static final int MAX_HEIGHT

Private:
Names should be in camelCase, with private variables prefixed with an underscore ("_").
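For example (the variable names here are just illustrative):

   private int _orderCount;
   private String _customerName;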