**Heap Sort**

A **heap** is really nothing more than a binary tree with some additional rules that it has to follow: first, it must always have the heap shape, where every level of the binary tree is completely filled except possibly the last, which is filled from left to right, and second, it must be ordered as either a max heap or a min heap. I’ll use a min heap as the example here.

A **heap sort algorithm** is a sorting technique that leans on binary heap data structures. Because we know that a heap must always follow a specific order, we can leverage that property: the minimum value always sits at the root, so we can sort the elements sequentially by repeatedly selecting the root node of the heap and adding it to the end of the array.

Here is the heap-sort pseudocode:

```
HeapSort(A[1…n]):
    H = buildHeap(A[1…n])
    for i = 1 to n do:
        A[i] = extract-min(H)
```
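This outer loop maps almost directly onto Python's standard `heapq` module, whose `heapify` and `heappop` play the roles of buildHeap and extract-min:

```python
import heapq

def heap_sort(a):
    # Turn a copy of the list into a min heap, then repeatedly extract the minimum.
    h = list(a)
    heapq.heapify(h)  # buildHeap
    # extract-min, n times; each pop returns the current root (smallest element)
    return [heapq.heappop(h) for _ in range(len(h))]
```
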

To start, we have an unsorted array. The first step is to take that array and turn it into a heap; in our case, we’ll want to turn it into a min heap. Usually, this transformation is encapsulated by a single function, which might be named something like **buildHeap**.

Here is the buildHeap pseudocode:

```
buildHeap(A[1…n]):
    initially H is empty
    for i = 1 to n do:
        Add(H, A[i])
```
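A minimal Python sketch of this insertion-based buildHeap, using a list-backed min heap; the helper name `sift_up` is my own, not from the text:

```python
def sift_up(heap, i):
    # Bubble heap[i] up toward the root while it is smaller than its parent.
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def build_heap(a):
    # Mirror the pseudocode: start with an empty heap H and Add each A[i],
    # restoring min heap order after every insertion. O(n log n) overall.
    heap = []
    for value in a:
        heap.append(value)
        sift_up(heap, len(heap) - 1)
    return heap
```
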

Once we have our array data in a min heap format, we can be sure that the smallest value is at the root node of the heap. Remember that, even though the entire heap won’t be sorted, if we have built our min heap correctly and without any mistakes, every single parent node in our heap will be smaller in value than its children. So, we’ll move the smallest value — located at the root node — to the end of the heap by swapping it with the last element.

Now, the smallest item in the heap is located at the last node, which is great. We know that it is in its sorted position, so it can be removed from the heap completely. But, there’s still one more step: making sure that the new root node element is in the correct place! It’s highly unlikely that the item we swapped into the root node position is in the right location, so we’ll move the root node item down to its correct place, using a function that’s usually named something like **heapify**.

Here is the extract-min and heapify pseudocode:

```
extract-min(H):
    min = H[1]
    H[1] = H[H.size()]
    decrease H.size() by 1
    heapify(H, 1)
    return min

heapify(H, i):
    n = H.size()
    while (LeftChild(i) <= n and H[i] > H[LeftChild(i)])
          or (RightChild(i) <= n and H[i] > H[RightChild(i)]) do:
        if RightChild(i) <= n and H[RightChild(i)] < H[LeftChild(i)] then:
            j = RightChild(i)
        else:
            j = LeftChild(i)
        swap entries H[i] and H[j]
        i = j
```

And that’s basically it! The algorithm continues to repeat these steps until the heap is down to just one single node. At that point, it knows that all the elements in the unsorted array are in their sorted positions and that the last node remaining will end up being the first element in the sorted array. The total running time of this algorithm is O(n log n).
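Putting the pieces together, here is a runnable Python sketch of the whole algorithm. It uses 0-indexed arrays rather than the pseudocode's 1-indexed ones, and builds the heap bottom-up, a common alternative to repeated insertion:

```python
def heapify(heap, i, size):
    # Sift heap[i] down (0-indexed children at 2i+1 and 2i+2)
    # until neither child is smaller, restoring min heap order.
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < size and heap[left] < heap[smallest]:
            smallest = left
        if right < size and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def heap_sort(a):
    heap = list(a)
    # Build the min heap bottom-up: heapify every non-leaf node.
    for i in range(len(heap) // 2 - 1, -1, -1):
        heapify(heap, i, len(heap))
    result = []
    size = len(heap)
    while size > 0:
        result.append(heap[0])    # extract-min: the root is the minimum
        heap[0] = heap[size - 1]  # swap the last element into the root
        size -= 1                 # shrink the heap by one
        heapify(heap, 0, size)    # sift the new root down to its place
    return result
```
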

**Merge Sort**

The **merge sort algorithm** is a sorting algorithm that sorts a collection by breaking it into halves. It then sorts those two halves and merges them back together, in order to form one completely sorted collection. Most implementations of merge sort do all of this using recursion.

Merge sort is a divide and conquer algorithm which can be boiled down to 3 steps:

1. **Divide** and break up the problem into the smallest possible “subproblem”, of the exact same type.
2. **Conquer** and tackle the smallest subproblems first. Once you’ve figured out a solution that works, use that exact same technique to solve the larger subproblems; in other words, solve the subproblems recursively.
3. **Combine** the answers and build up the smaller subproblems until you finally end up applying the same solution to the larger, more complicated problem that you started off with!

The crux of divide and conquer algorithms stems from the fact that it’s a lot easier to solve a complicated problem if you can figure out how to split it up into smaller pieces. By breaking down a problem into its individual parts, the problem becomes a whole lot easier to solve. And usually, once you figure out how to solve a “subproblem”, then you can apply that exact solution to the larger problem. This methodology is exactly what makes recursion the tool of choice for divide-and-conquer algorithms, and it’s the very reason that merge sort is a great example of recursive solutions.

A mergeSort function ultimately has two functions inside of it:

- a **merge** function, which actually combines two lists together and sorts them in the correct order.
- a **mergeSort** function, which continues to split the input array again and again, recursively, and also calls merge again and again, recursively.

Here is the merge sort pseudocode:

```
Merge(A[1…m], B[1…n]):
    i = 1; j = 1; k = 1
    a_{m+1} = ∞; b_{n+1} = ∞
    while (k <= m + n) do:
        if (a_i < b_j) then:
            c_k = a_i; i++
        else:
            c_k = b_j; j++
        k++
    return C = {c_1, c_2, …, c_{m+n}}

MergeSort(X, n):
    if (n == 1) return X
    middle = n/2 (round down)
    A = {x_1, x_2, …, x_middle}
    B = {x_{middle+1}, x_{middle+2}, …, x_n}
    As = MergeSort(A, middle)
    Bs = MergeSort(B, n - middle)
    return Merge(As, Bs)
```

Merge sort’s speed comes from this divide and conquer structure: halving the input at every level means there are only about log n levels of recursion, and each level does O(n) work to merge, which gives merge sort its O(n log n) running time.
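The pseudocode translates almost line for line into Python; the `float('inf')` values play the role of the ∞ sentinels that let the while loop drain both lists without extra bounds checks:

```python
def merge(a, b):
    # Combine two sorted lists into one sorted list with a two-pointer walk.
    a = a + [float('inf')]  # sentinel: a_{m+1} = ∞
    b = b + [float('inf')]  # sentinel: b_{n+1} = ∞
    i = j = 0
    result = []
    for _ in range(len(a) + len(b) - 2):  # exactly m + n output elements
        if a[i] < b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    return result

def merge_sort(x):
    # Split in half, sort each half recursively, then merge the results.
    if len(x) <= 1:
        return x
    middle = len(x) // 2
    return merge(merge_sort(x[:middle]), merge_sort(x[middle:]))
```
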

**Convex Hull**

Given a set of points in the plane, the convex hull of the set is the smallest convex polygon that contains all of its points.

**Approach 1 — Gift Wrapping O(n²)**

The idea is simple: we start from the leftmost point (the point with the minimum x coordinate value) and we keep wrapping points in a counterclockwise direction. Given the current point p, the next point q is selected so that, for every other point r, the triple (p, q, r) has counterclockwise orientation. Following is the pseudocode:

```
ConvexHull = empty list
Find the point with the smallest x and let it be u (:= u_original)
Let L be the vertical line going through u
do:
    Let v be the point with the smallest angle between L and the line through u and v
    Add v to ConvexHull
    Let L = the line uv
    u := v
until v = u_original
Output ConvexHull
```
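A minimal Python sketch of gift wrapping, using a cross product as the orientation test instead of computing angles; it assumes at least three distinct points and does not handle collinear degeneracies:

```python
def cross(o, a, b):
    # Cross product of vectors o->a and o->b;
    # > 0 means b is counterclockwise of (to the left of) the ray o->a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrap(points):
    # Start at the leftmost point; repeatedly pick the point v such that
    # every other point lies counterclockwise of the line u->v.
    if len(points) < 3:
        return list(points)
    start = min(points)  # smallest x, ties broken by smallest y
    hull = []
    u = start
    while True:
        hull.append(u)
        v = points[0] if points[0] != u else points[1]
        for p in points:
            # p to the right of u->v means v cannot be on the hull; take p.
            if cross(u, v, p) < 0:
                v = p
        u = v
        if u == start:  # wrapped all the way around
            return hull
```
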

**Approach 2 — Graham Scan O(n log n)**

This algorithm can be divided into 2 phases:

- **Phase 1 (Sort points):** We first find the bottom-most point. The idea is to pre-process the points by sorting them with respect to the bottom-most point. Once the points are sorted, they form a simple closed path. What should the sorting criterion be? Computing actual angles would be inefficient, since trigonometric functions are not simple to evaluate. The idea is to use the orientation test to compare angles without actually computing them.
- **Phase 2 (Accept or Reject Points):** Once we have the closed path, the next step is to traverse the path and remove concave points on this path. The first two points in the sorted array are always part of the convex hull. For the remaining points, we keep track of the three most recent points and find the angle formed by them. Let the three points be prev (p), curr (c), and next (n). If the orientation of these points (considered in that order) is not counterclockwise, we discard c; otherwise, we keep it.

```
ConvexHull = empty list
Let u be the point with the smallest x (if there are several, choose the one with the smallest y)
Sort the remaining points by polar angle in counterclockwise order around u
Add u and point 1 to ConvexHull
for i = 2 to n - 1:
    while the angle formed by the second-to-last point, the last point, and point i > 180 degrees:
        Remove the last point from ConvexHull
    Add point i to ConvexHull
return ConvexHull
```
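A compact Python sketch of both phases. Note one simplification: for brevity it sorts with `math.atan2` rather than the angle-free orientation comparison the text recommends, and the "angle > 180 degrees" test becomes "not a counterclockwise turn" via a cross product:

```python
import math

def cross(o, a, b):
    # > 0: counterclockwise turn o -> a -> b; < 0: clockwise; 0: collinear.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    # Phase 1: sort by polar angle around the bottom-most point
    # (ties broken by distance, so nearer collinear points come first).
    start = min(points, key=lambda p: (p[1], p[0]))  # bottom-most, then leftmost
    rest = sorted((p for p in points if p != start),
                  key=lambda p: (math.atan2(p[1] - start[1], p[0] - start[0]),
                                 (p[0] - start[0]) ** 2 + (p[1] - start[1]) ** 2))
    # Phase 2: walk the path, popping points that do not make a left turn.
    hull = [start]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()  # concave (or collinear) point: reject it
        hull.append(p)
    return hull
```
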