# The master method for algorithm analysis

The big O, big Ω and big Θ notations give an estimate of the complexity of an algorithm, that is, how its running time scales as the input size of the problem grows. A fast algorithm is one whose worst-case running time grows slowly, which can be seen from the magnitude of the highest-order term once constants and lower-order terms are abstracted away. If we have an input array of size n and we loop through all elements to access them individually, doing (let's say) three simple arithmetic operations with each element, the algorithm performs about 3n operations. If we do two more arithmetic operations outside the loop, we come to 3n + 2. But we can say that this is simply O(n), because the size of n will affect the running time the most, especially with large inputs. This simplification exists for the sake of comparison between equivalent algorithms in search of the most effective one. Still, it's not correct to think that algorithms performing 3n + 2 and 2n + 3 operations will run in exactly the same time, only that the difference between them won't be very significant.

In a similar way, we could use two nested loops to access the elements of a two-dimensional matrix: one loop would go through the rows and the other through the columns. Then we'll have O(n^{2}), which is strictly more complex relative to the running time of typical sorting algorithms (O(n log n)), which in itself is more complex than what we saw before (O(n)). When we compare algorithms, it is best when they produce the same desired output, otherwise we'll be comparing totally unrelated things. Sometimes we may need to combine multiple different algorithms to come to a more specific and effective solution, so an algorithm's effectiveness must be bound to the output of the problem we are trying to solve. The bigger the complexity, the longer we'll need to compute the result and the less the users of our application/website will be willing to wait for it.
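The growth rates above can be sketched by simply counting operations. This is a minimal illustration, not a benchmark; the function names are made up for the example:

```python
def count_linear(n):
    """Roughly 3n + 2 'operations': three per element, two outside the loop."""
    ops = 2                  # two operations outside the loop
    for _ in range(n):
        ops += 3             # three simple operations per element
    return ops

def count_quadratic(n):
    """One operation per cell of an n x n matrix: n^2 in total."""
    ops = 0
    for _ in range(n):       # rows
        for _ in range(n):   # columns
            ops += 1
    return ops

print(count_linear(1000))      # 3002
print(count_quadratic(1000))   # 1000000
```

Doubling n roughly doubles the first count but quadruples the second, which is exactly the difference the notation abstracts.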
A more complex problem can still be solved in a reasonable time, but with increased computing power. Researchers found a way to solve the travelling salesman problem (finding the shortest route that visits each city once and returns to the starting point) for Sweden, but they initially estimated that they would need over 80 processor-years; only the fact that they were able to dedicate a cluster of almost that many processors allowed them to solve the problem in approximately a year. And the area of the country is not among the largest.

Maths can be helpful with algorithms not only as a means to approximate running time. Mathematical induction can help us prove that if a certain formula is valid for a base case (n = 1), and its validity for some n implies its validity for n + 1, then it is valid for input of any size. We can use the derived formula for the end result to reduce the need for future computation (think of the geometric series as an example). This way, we no longer need to spend unnecessary processor cycles to loop through many elements, but could instead implement the short formula to obtain the result much faster. In this sense, proper use of maths can be seen as a useful tool to reduce the complexity of a problem.
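Taking the geometric series as that example, here is a small sketch of the idea: the loop does O(n) additions, while the closed-form result 1 + r + r^2 + … + r^(n-1) = (r^n − 1)/(r − 1) needs only a constant number of arithmetic operations (function names are mine):

```python
def geometric_sum_loop(r, n):
    """Sum the series term by term: O(n) additions."""
    total = 0
    for i in range(n):
        total += r ** i
    return total

def geometric_sum_formula(r, n):
    """Use the closed-form result: a constant number of operations."""
    return (r ** n - 1) // (r - 1)   # exact integer division for integer r > 1

print(geometric_sum_loop(2, 10))     # 1023
print(geometric_sum_formula(2, 10))  # 1023
```

Both return the same value; the second simply skips the loop entirely.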

Let's see another way we could use maths to evaluate algorithms: the master method. If we have a problem that we are trying to recursively reduce to smaller subproblems of equal size that we eventually know how to solve, the running time satisfies a recurrence of the form T(n) = a T(n/b) + O(n^{d}), where a is the number of subproblems created at each step, b is the factor by which the input size shrinks and d is the exponent of the work done outside the recursive calls. We can then determine the running time of the algorithm according to the following:

- case 1: if a = b^{d}, then T(n) = O(n^{d} log n);
- case 2: if a < b^{d}, then T(n) = O(n^{d});
- case 3: if a > b^{d}, then T(n) = O(n^{log_b a}).

Let's say that a = 4, b = 2 and d = 2. Then we would be in case 1, which means that the running time will be O(n^{2}log n). If a = 3, b = 2 and d = 2, we would be in case 2, since 3 < 2^{2}, so we have O(n^{2}). And finally, if a = 3, b = 2 and d = 1, we would be in case 3 since 3 > 2^{1}, so we have O(n^{log_2 3}) ≈ O(n^{1.58}). We can think of a as the rate with which new subproblems appear and of b^{d} as the rate with which the work per subproblem shrinks. Thus, we want, if possible, to be in case 2: to have the work shrink faster than it grows. If a = b^{d}, then the amount of work is equal at every recursion level. This method is interesting because it sets these two forces against each other and, depending on which one is stronger, we get a different running time. Again, this method assumes that every subproblem solved is of equal size; if that's not true, it can't be used.
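The three cases can be condensed into a few lines of code. This is just a sketch of the comparison between a and b^{d} described above, with a function name of my own choosing:

```python
import math

def master_method(a, b, d):
    """Return the asymptotic bound for T(n) = a*T(n/b) + O(n^d) as a string."""
    if a == b ** d:
        return f"O(n^{d} log n)"             # case 1: equal work at every level
    elif a < b ** d:
        return f"O(n^{d})"                   # case 2: work shrinks faster than it grows
    else:
        return f"O(n^{math.log(a, b):.2f})"  # case 3: subproblems dominate, exponent log_b a

print(master_method(4, 2, 2))   # O(n^2 log n)
print(master_method(3, 2, 2))   # O(n^2)
print(master_method(3, 2, 1))   # O(n^1.58)
```

The three calls reproduce exactly the three examples worked through above.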