Algorithm Complexity

The complexity of an algorithm is a measure of the amount of work required to complete the algorithm in the worst-case scenario. For networks, the worst case is generally given by a complete network. Remember that a complete network has every node directly connected to every other node.

The degree of complexity of an algorithm tells you how much extra work would need to be done if the problem were to be increased in size.

Constant complexity, $O(1)$, means that the amount of work is constant, and is independent of the size of the problem.

Linear complexity, $O(n)$, means that if the problem doubles in size, there will be double the work to do.

Quadratic complexity, $O(n^2)$, indicates that there will be four times the amount of work if the size of the problem is doubled.

Exponential, logarithmic and many other degrees of complexity are possible, depending on how an algorithm's runtime grows with the size of the problem.
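These growth rates can be illustrated with a small sketch (the `work` function below is a hypothetical operation-count model, not taken from the text): doubling the problem size multiplies the work by 1, 2 and 4 for constant, linear and quadratic complexity respectively.

```python
# Hypothetical operation-count models for three complexity classes.
def work(n, kind):
    if kind == "constant":
        return 1          # O(1): independent of n
    if kind == "linear":
        return n          # O(n): doubles when n doubles
    if kind == "quadratic":
        return n * n      # O(n^2): quadruples when n doubles
    raise ValueError(kind)

# Doubling n from 100 to 200 scales the work by 1, 2 and 4 respectively.
for kind, expected_ratio in [("constant", 1), ("linear", 2), ("quadratic", 4)]:
    assert work(200, kind) / work(100, kind) == expected_ratio
```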

Kruskal’s Complexity

In Kruskal’s algorithm we need to find the edge with the lowest weight at each step. Taking the worst case of a complete network, there will be $^nC_2 = \frac{1}{2}n(n - 1)$ possible edges. Inspecting each edge to determine the one with the lowest weight is the most time-consuming job, so the number of inspections needed to complete the problem gives an indication of the complexity of the algorithm.
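This edge count can be checked directly by enumerating the edges of small complete networks, as in this sketch using Python's standard library (the function name is illustrative):

```python
from itertools import combinations

def complete_graph_edges(n):
    """All edges of a complete network on nodes 0..n-1."""
    return list(combinations(range(n), 2))

# The number of edges matches nC2 = n(n-1)/2 for each small n tried.
for n in range(2, 8):
    assert len(complete_graph_edges(n)) == n * (n - 1) // 2
```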

In the first round of inspections, all of the edges need to be inspected and compared. There will be $\frac{1}{2}n(n - 1) - 1$ comparisons to make.

In the second round of inspections, all of the edges, except the one already selected, need to be inspected and compared. There will be $\frac{1}{2}n(n - 1) - 2$ comparisons to make.

In a spanning tree for a network with $n$ nodes, there will be $(n - 1)$ edges that will have to be selected. This would require the following number of comparisons to be made in total:
$\left[\frac{1}{2}n(n - 1) - 1\right] + \left[\frac{1}{2}n(n - 1) - 2\right] + \left[\frac{1}{2}n(n - 1) - 3\right] + \dotsc + \left[\frac{1}{2}n(n - 1) - (n - 1)\right]$

Simplifying the algebra shows that this sum can also be written as $\frac{1}{2}n(n - 1)(n - 2) = \frac{1}{2}(n^3 - 3n^2 + 2n)$. It follows that Kruskal’s algorithm has cubic complexity.
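The counting argument can be verified numerically: this sketch sums the comparisons round by round and checks the result against the cubic closed form (function names are illustrative):

```python
def kruskal_comparisons(n):
    """Comparisons counted round by round: round k inspects the edges
    still available, costing E - k comparisons, where E = n(n-1)/2 is
    the edge count of a complete network on n nodes."""
    edges = n * (n - 1) // 2
    return sum(edges - k for k in range(1, n))  # rounds k = 1 .. n-1

def kruskal_closed_form(n):
    """The simplified total: (n^3 - 3n^2 + 2n) / 2."""
    return (n**3 - 3 * n**2 + 2 * n) // 2

# The round-by-round count agrees with the closed form for small n.
for n in range(2, 20):
    assert kruskal_comparisons(n) == kruskal_closed_form(n)
```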

Prim’s Complexity

Prim’s algorithm starts by selecting the edge of least weight incident to the starting node. In a complete network there are $(n - 1)$ edges from each node.

At step 1 this means that there are $(n - 1) - 1$ comparisons to make.

Having made the first comparison and selection, there are $(n - 2)$ unconnected nodes, each joined by an edge to each of the two nodes already selected, giving $2(n - 2)$ candidate edges. This means that there are $2(n - 2) - 1$ comparisons that need to be made.

Carrying on this argument results in the following expression for the number of comparisons that need to be made to complete the minimum spanning tree: $((n - 1) - 1) + (2(n - 2) - 1) + (3(n - 3) - 1) + \dotsc + ((n - 1)(n - (n - 1)) - 1)$
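Simplifying this series gives the closed form $\frac{1}{6}(n - 1)(n - 2)(n + 3)$, which the following sketch checks term by term (the function names and the closed form are my own working, derived from the series above):

```python
def prim_comparisons(n):
    """Step k: k tree nodes, n-k unconnected nodes, so k(n-k) candidate
    edges and k(n-k) - 1 comparisons to pick the lightest."""
    return sum(k * (n - k) - 1 for k in range(1, n))

def prim_closed_form(n):
    # (n-1)(n-2)(n+3)/6 -- a cubic in n
    return (n - 1) * (n - 2) * (n + 3) // 6

# The step-by-step count agrees with the cubic closed form for small n.
for n in range(2, 20):
    assert prim_comparisons(n) == prim_closed_form(n)
```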

Evaluating this sum gives $\frac{1}{6}(n - 1)(n - 2)(n + 3)$, so Prim’s algorithm also has cubic complexity.

Dijkstra’s Complexity

Dijkstra’s algorithm inspects each of the nodes that have not yet been permanently labelled, so the work per step is approximately proportional to the number of remaining nodes. For the entire process this number is:
$(n - 1) + (n - 2) + (n - 3) + \dotsc + 1 = \frac{1}{2}n(n - 1)$, so Dijkstra’s algorithm has quadratic complexity.
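The same count can be verified with a short sketch (the function name is illustrative):

```python
def dijkstra_inspections(n):
    """At step i, n - i nodes are still only temporarily labelled,
    and each of them is inspected."""
    return sum(n - i for i in range(1, n))

# The total matches n(n-1)/2, a quadratic in n.
for n in range(2, 20):
    assert dijkstra_inspections(n) == n * (n - 1) // 2
```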