Heap Sort, Merge Sort, and Convex Hull
Heap Sort
A heap is really nothing more than a binary tree with some additional rules that it has to follow: first, it must always have heap structure, meaning every level of the binary tree is filled up from left to right, and only the last level may be partially filled (from the left); and second, it must be ordered as either a max heap or a min heap. I’ll use a min heap as an example.
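For example, a heap is typically stored as a flat array, with the children of the node at index i sitting at indices 2i + 1 and 2i + 2; that array-backed layout is what the sketches later in this section assume:

```typescript
// A small min heap and its array-backed representation: index 0 is the root,
// and the children of the node at index i live at indices 2*i + 1 and 2*i + 2.
//
//           1
//         /   \                   index:   0  1  2  3   4   5
//        2     7
//       / \   /            as an array:  [ 1, 2, 7, 3, 17, 25 ]
//      3  17 25
//
// Every parent (1, 2, 7) is smaller than its children, so the min heap rule holds,
// and every level is filled from left to right, so the structure rule holds too.
const exampleMinHeap: number[] = [1, 2, 7, 3, 17, 25];
```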
A heap sort algorithm is a sorting technique that leans on binary heap data structures. Because we know that heaps must always follow a specific order, we can leverage that property and use that to find the smallest, minimum value element, and sequentially sort elements by selecting the root node of a heap, and adding it to the end of the array.
Here is a sketch of the overall heap sort routine; this is a minimal TypeScript version rather than formal pseudocode, and it leans on the buildHeap and heapify helpers sketched a little further down:
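```typescript
// A sketch of heap sort: build a min heap, then repeatedly swap the root
// (the current minimum) into the last slot of the shrinking heap.
// buildHeap and heapify are sketched after the next few paragraphs.
function heapSort(array: number[]): number[] {
  buildHeap(array);

  // Everything at or beyond lastIndex is already in its final, sorted position.
  for (let lastIndex = array.length - 1; lastIndex > 0; lastIndex--) {
    // Move the minimum (the root) to the end of the current heap.
    [array[0], array[lastIndex]] = [array[lastIndex], array[0]];

    // Restore the min-heap property among the remaining lastIndex elements.
    heapify(array, 0, lastIndex);
  }

  // With a min heap, this leaves the array ordered largest-to-smallest;
  // reversing the result (or using a max heap instead) gives smallest-to-largest.
  return array;
}
```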
To start, we have an unsorted array. The first step is to take that array and turn it into a heap; in our case, we’ll want to turn it into a min heap. So, we have to transform and build a min heap out of our unsorted array data. Usually, this is encapsulated by a single function, which might be named something like buildHeap.
Here is one way buildHeap might look, again as a minimal TypeScript sketch (it calls the heapify helper sketched a little further down):
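```typescript
// Turn an arbitrary array into a min heap, in place, by heapifying every
// parent node, starting from the last parent and working back up to the root.
function buildHeap(array: number[]): void {
  const lastParent = Math.floor(array.length / 2) - 1;
  for (let i = lastParent; i >= 0; i--) {
    heapify(array, i, array.length);
  }
}
```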
Once we have our array data in a min heap format, we can be sure that the smallest value is at the root node of the heap. Remember that, even though the entire heap won’t be sorted, if we have built our min heap correctly and without any mistakes, every single parent node in our heap will be smaller in value than its children. So, we’ll move the smallest value — located at the root node — to the end of the heap by swapping it with the last element.
Now, the smallest item in the heap is located at the last node, which is great. We know that it is in its sorted position, so it can be removed from the heap completely. But, there’s still one more step: making sure that the new root node element is in the correct place! It’s highly unlikely that the item that we swapped into the root node position is in the right location, so we’ll move the root node item down to its correct place, using a function that’s usually named something like heapify.
Here is a sketch of the extract-and-heapify step, again as a minimal TypeScript version (the heapSize parameter is how the already-sorted tail of the array gets excluded from the heap):
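```typescript
// Sift the element at index `root` down until it is smaller than both of its
// children, restoring the min-heap property. Only the first `heapSize` elements
// count as part of the heap; everything past that boundary has already been
// swapped out into its sorted position.
function heapify(array: number[], root: number, heapSize: number): void {
  const left = 2 * root + 1;
  const right = 2 * root + 2;
  let smallest = root;

  if (left < heapSize && array[left] < array[smallest]) {
    smallest = left;
  }
  if (right < heapSize && array[right] < array[smallest]) {
    smallest = right;
  }

  // If one of the children is smaller, swap it up and keep sifting down.
  if (smallest !== root) {
    [array[root], array[smallest]] = [array[smallest], array[root]];
    heapify(array, smallest, heapSize);
  }
}
```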
And that’s basically it! The algorithm continues to repeat these steps until the heap is down to just one single node. At that point, it knows that all the elements in the unsorted array are in their sorted positions and that the last node remaining will end up being the first element in the sorted array. The total running time of this algorithm is O(n log n).
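To see the pieces working together, here is a quick run of the sketches above, with arbitrary input values:

```typescript
const numbers = [17, 3, 25, 1, 2, 7];
heapSort(numbers);
console.log(numbers); // [25, 17, 7, 3, 2, 1]: each extracted minimum ended up at the back
```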
Merge Sort
The merge sort algorithm sorts a collection by breaking it into halves. It then sorts those two halves and merges them back together, in order to form one completely sorted collection. Most implementations of merge sort do all of this using recursion.
Merge sort is a divide and conquer algorithm which can be boiled down to 3 steps:
Divide and break up the problem into the smallest possible “subproblem”, of the exact same type.
Conquer and tackle the smallest subproblems first. Once you’ve figured out a solution that works, use that exact same technique to solve the larger subproblems — in other words, solve the subproblems recursively.
Combine the answers and build up the smaller subproblems until you finally end up applying the same solution to the larger, more complicated problem that you started off with!
The crux of divide and conquer algorithms stems from the fact that it’s a lot easier to solve a complicated problem if you can figure out how to split it up into smaller pieces. By breaking down a problem into its individual parts, the problem becomes a whole lot easier to solve. And usually, once you figure out how to solve a “subproblem”, then you can apply that exact solution to the larger problem. This methodology is exactly what makes recursion the tool of choice for divide-and-conquer algorithms, and it’s the very reason that merge sort is a great example of recursive solutions.
A mergeSort function ultimately has two functions inside of it:
a merge function, which combines two already-sorted lists into a single list, keeping the elements in the correct order.
a mergeSort function, which will continue to split the input array again and again, recursively, and will also call merge again and again, recursively.
Here is a sketch of both functions, as a minimal TypeScript version rather than formal pseudocode:
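```typescript
// Split the array in half, recursively sort each half, then merge the halves.
function mergeSort(array: number[]): number[] {
  // Base case: a list of zero or one elements is already sorted.
  if (array.length <= 1) {
    return array;
  }

  const middle = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, middle));
  const right = mergeSort(array.slice(middle));

  return merge(left, right);
}

// Combine two already-sorted lists into one sorted list by repeatedly
// taking the smaller of the two front elements.
function merge(left: number[], right: number[]): number[] {
  const result: number[] = [];
  let i = 0;
  let j = 0;

  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }

  // One side has run out; append whatever remains of the other.
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```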
Indeed, it is this recursive, divide-and-conquer structure, repeatedly halving the input, that makes merge sort faster than the other algorithms we’ve looked at thus far. Merge sort has O(n log n) running time.
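A quick run of the sketch above, again with arbitrary input values:

```typescript
console.log(mergeSort([5, 1, 7, 3, 2])); // [1, 2, 3, 5, 7]
```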
Convex Hull
Given a set of points in the plane, the convex hull of the set is the smallest convex polygon that contains all of its points.
Approach 1 — Gift Wrapping O(n²)
The idea is simple: we start from the leftmost point (the point with the minimum x coordinate value) and keep wrapping points in a counterclockwise direction. Given the current point p, the next point q is chosen so that the triplet (p, q, r) makes a counterclockwise turn for every other point r; in other words, q beats all other points at the orientation test. Here is a sketch of the approach, as a minimal TypeScript version rather than formal pseudocode (the Point type and orientation helper are naming choices made here, and they are reused by the Graham scan sketch later in this section):
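```typescript
type Point = { x: number; y: number };

// Orientation of the ordered triplet (p, q, r), via the cross product:
// positive means a counterclockwise turn, negative clockwise, zero collinear.
function orientation(p: Point, q: Point, r: Point): number {
  return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

// Gift wrapping (Jarvis march): starting from the leftmost point, repeatedly
// pick the next point q such that the triplet (p, q, r) turns counterclockwise
// for every other point r, then advance to q, until we wrap back around.
function giftWrap(points: Point[]): Point[] {
  if (points.length < 3) return [...points];

  // The leftmost point is guaranteed to be on the hull.
  let leftmost = 0;
  for (let i = 1; i < points.length; i++) {
    if (points[i].x < points[leftmost].x) leftmost = i;
  }

  const hull: Point[] = [];
  let p = leftmost;
  do {
    hull.push(points[p]);

    // Start with any candidate other than p, then look for one that beats it.
    let q = (p + 1) % points.length;
    for (let r = 0; r < points.length; r++) {
      // If (p, q, r) turns clockwise, q does not beat r, so r becomes the candidate.
      if (orientation(points[p], points[q], points[r]) < 0) {
        q = r;
      }
    }
    p = q;
  } while (p !== leftmost); // stop once the wrap returns to the starting point

  return hull;
}
```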
Approach 2 — Graham Scan O(n log n)
This algorithm can be divided into 2 phases:
Phase 1 (Sort points): We first find the bottom-most point. The idea is to pre-process the points by sorting them by polar angle around this bottom-most point. Once the points are sorted, they form a simple closed path. What should the sorting criterion be? Computing actual angles would be inefficient, since trigonometric functions are not simple to evaluate; the idea is to use the orientation test to compare angles without actually computing them.
Phase 2 (Accept or Reject Points): Once we have the closed path, the next step is to traverse the path and remove concave points from it. The first two points in the sorted array are always part of the convex hull. For the remaining points, we keep track of the three most recent points and look at the turn they form. Let the three points be prev (p), curr (c), and next (n). If the orientation of these points (considered in that order) is not counterclockwise, we discard c; otherwise, we keep it. In practice this check is repeated, popping previously accepted points, until the turn becomes counterclockwise.
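Putting the two phases together, here is a minimal TypeScript sketch; it reuses the Point type and orientation helper from the gift wrapping sketch above, and the tie-breaking for collinear points is deliberately simplified:

```typescript
// Graham scan: sort by polar angle around the bottom-most point, then walk the
// sorted points, keeping a stack of accepted hull points and popping any point
// that would create a non-counterclockwise (concave) turn.
function grahamScan(points: Point[]): Point[] {
  if (points.length < 3) return [...points];

  // Phase 1: find the bottom-most point (lowest y, then lowest x as a tie-break)...
  let bottom = 0;
  for (let i = 1; i < points.length; i++) {
    if (
      points[i].y < points[bottom].y ||
      (points[i].y === points[bottom].y && points[i].x < points[bottom].x)
    ) {
      bottom = i;
    }
  }
  const start = points[bottom];

  // ...and sort the remaining points by polar angle around it, comparing angles
  // with the orientation test instead of computing any actual angles.
  const rest = points
    .filter((_, i) => i !== bottom)
    .sort((a, b) => {
      const turn = orientation(start, a, b);
      if (turn !== 0) return -turn; // a counterclockwise turn puts a before b
      // Collinear with the start point: keep the closer point first (simplified).
      const da = (a.x - start.x) ** 2 + (a.y - start.y) ** 2;
      const db = (b.x - start.x) ** 2 + (b.y - start.y) ** 2;
      return da - db;
    });

  // Phase 2: accept or reject points, popping any point that makes a
  // non-counterclockwise turn with the two points below it on the stack.
  const hull: Point[] = [start, rest[0]];
  for (let i = 1; i < rest.length; i++) {
    const next = rest[i];
    while (
      hull.length >= 2 &&
      orientation(hull[hull.length - 2], hull[hull.length - 1], next) <= 0
    ) {
      hull.pop(); // discard the concave (or collinear) point
    }
    hull.push(next);
  }

  return hull;
}
```

For instance, running either giftWrap or grahamScan on the points (0,0), (2,0), (2,2), (0,2), and (1,1) returns the four corners of the square and leaves out the interior point (1,1).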