Lazy binomial heap

Idea:
  • be lazy, procrastinate, and leave the work for later
  • a (lazy) binomial heap is again a list of binomial trees \(B_k\)
  • when melding two heaps, just link them (note that we may get multiple trees \(B_k\) with the same order \(k\))
  • the real work is only done during extract-min, which is followed by a clean-up
  • during the clean-up, we gradually link trees of the same order until we end up with at most one \(B_k\) tree for each \(k\)
Advantages: insert and meld now take only \(O(1)\) time, while extract-min is still \(O(\log n)\) amortized

Laziness

A standard binomial heap may contain at most one binomial tree of each order. This guarantees that a heap with \(n\) elements consists of at most \(\log n\) binomial trees. Every time we add a new node or merge in new trees, we need to do a cleanup and link trees of the same order.

In a lazy binomial heap, we drop this invariant. When merging two heaps, we simply concatenate the two root lists in constant time and leave the cleanup for later. This lazy merging means that if we insert \(n\) new elements, we create a chain of \(n\) roots. We only do cleanups during the extract-min operation: when searching for the new minimum, we need to traverse the whole list of roots anyway, so we might just as well use this time to clean up.
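As a concrete illustration, here is a minimal Python sketch of the lazy representation and the two \(O(1)\) operations. The names (Node, LazyBinomialHeap, roots, min) are illustrative rather than taken from any particular implementation, and a Python list stands in for the linked list of roots that a real implementation would use to make concatenation truly constant-time.

```python
from dataclasses import dataclass, field


@dataclass(eq=False)
class Node:
    key: int
    order: int = 0                          # order k of the binomial tree B_k rooted here
    children: list["Node"] = field(default_factory=list)


class LazyBinomialHeap:
    def __init__(self) -> None:
        self.roots: list[Node] = []         # root list; several trees may share the same order
        self.min: Node | None = None        # pointer to a root holding the smallest key

    def meld(self, other: "LazyBinomialHeap") -> None:
        # Lazy meld: concatenate the root lists and keep the smaller of the two minima.
        # (With a linked list of roots this is O(1); a Python list is used here for brevity.)
        self.roots += other.roots
        if other.min is not None and (self.min is None or other.min.key < self.min.key):
            self.min = other.min

    def insert(self, key: int) -> None:
        # Lazy insert: the new key becomes a one-node tree B_0 appended to the root list.
        node = Node(key)
        self.roots.append(node)
        if self.min is None or key < self.min.key:
            self.min = node
```

Note that meld only compares the two minimum pointers; it never touches the trees themselves.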

This way, merge and insert take only constant time, while extract-min still takes \(O(\log n)\) amortized time.

                        merge            insert           extract-min
binomial heap           \(O(\log n)\)    \(O(\log n)\)    \(O(\log n)\)
lazy binomial heap      \(O(1)\)         \(O(1)\)         \(O(\log n)\) amortized
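Continuing the sketch above, extract-min removes the minimum root, turns its children into new roots, and then performs the cleanup: trees are placed into buckets by order, and whenever two trees of the same order \(k\) meet, they are linked into a single tree of order \(k+1\). The helper names (link, cleanup, extract_min) are again only illustrative.

```python
def link(a: Node, b: Node) -> Node:
    # Link two trees of the same order k: the root with the larger key becomes
    # a child of the other root, producing a single tree of order k+1.
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    a.order += 1
    return a


def extract_min(heap: LazyBinomialHeap) -> int:
    # Assumes the heap is non-empty.
    smallest = heap.min
    # Remove the minimum root and turn its children into new roots.
    heap.roots = [r for r in heap.roots if r is not smallest] + smallest.children
    cleanup(heap)
    return smallest.key


def cleanup(heap: LazyBinomialHeap) -> None:
    # Link trees of equal order until every order appears at most once,
    # then scan the surviving roots to find the new minimum.
    bucket: dict[int, Node] = {}            # order -> the one tree of that order seen so far
    for tree in heap.roots:
        while tree.order in bucket:         # two trees of order k link into one of order k+1
            tree = link(tree, bucket.pop(tree.order))
        bucket[tree.order] = tree
    heap.roots = list(bucket.values())
    heap.min = min(heap.roots, key=lambda r: r.key, default=None)
```

For example, after `h = LazyBinomialHeap()` and inserting 5, 3 and 8, the root list holds three \(B_0\) trees; the first `extract_min(h)` returns 3, and the cleanup links the remaining two \(B_0\) trees into one \(B_1\). After the cleanup there is at most one tree of each order, so only \(O(\log n)\) roots survive - which is exactly what the analysis below charges for.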

Analysis

We will use the accounting method of amortized analysis, so imagine we have to pay for all computation and 1$ can buy us \(O(1)\) time. We prove:

Theorem: If we get 2$ for every insert, 1$ for every merge and \(2\log n+1\)$ for every extract-min operation, we can pay for all the computation.

Invariant: Each root will have 1$ saved for a future cleanup.

Proof:

  • merge takes \(O(1)\) time - just concatenate the two root lists and choose the new minimum; we can pay for it with 1$ (and the invariant is preserved, since no new roots are created).
  • insert: creating and linking a new root node takes \(O(1)\) time - we pay for it with 1$ and save the other 1$ for the future (to preserve the invariant).
  • for extract-min, we need to delete the min root and splice its children back into the root list (1$); each child becomes a new root, so it gets 1$ (\(\leq \log n\)$ in total); then we need to do a cleanup
  • the cleanup of \(t\) trees takes \(O(t)\) time; let \(\ell\) be the number of trees that are linked under some other trees and let \(r\) be the number of roots which remain roots even after cleanup (\(t=\ell+r\))
  • each of the \(\ell\) roots has 1$ saved, and since these roots will not be roots anymore after the cleanup, this money is free to spend: it pays for the \(\ell\) comparisons and linking operations
  • on the other hand, \(r\leq \log n\), so we can pay \(\log n\)$ from the money allocated for the operation (this includes finding the new minimum); the total is tallied below
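Putting the numbers together: the \(\ell\) linking operations are paid for by the dollars saved on the \(\ell\) roots that disappear, and the rest of the work fits into the budget from the theorem,
\[ \underbrace{1}_{\text{deleting the min root}} \;+\; \underbrace{\log n}_{\text{1\$ to each new root}} \;+\; \underbrace{\log n}_{\text{remaining roots and the new minimum}} \;=\; 2\log n+1 . \]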

In the potential method, we would define the potential \(\Phi(H)\) as the number of binomial trees in \(H\), i.e. the number of roots. The extract-min operation takes \(O(t+\log n)=O(\ell+\log n)\) time. On the other hand, the change in potential is at most \(\Delta\Phi \leq \log n - \ell\). The extra \(\leq \log n\) term corresponds to the children of the deleted root becoming new roots, and the \(-\ell\) term cancels out with the \(O(\ell)\) term in the actual time: \[ T_{amort} = T_{actual} + c\cdot \Delta\Phi = O(\ell + \log n) + c\cdot (\log n - \ell) = O(\log n).\]