When we teach big-Oh theory, we consider functions at a very coarse, large-scale resolution. We can make sweeping statements, which in any other context would be ridiculous, such as: 3n² + 5, 15n² + 17, and n² are all the "same," since they grow at the same rate as n grows very large.
This requires recalibrating what we mean by "same." If you were to plot the functions from the previous paragraph, they would trace different curves, but they would grow at the same rate (at least after a "while"). Their sameness lies in how they grow.
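One way to see this sameness concretely is to watch the ratio of each function to n². Here is a small sketch in Python (the code and the sample values are my own illustration, not part of the original post): the ratios settle toward the constants 3 and 15, which is exactly the sense in which the three functions differ only by a constant factor once n is large.

```python
def f1(n):
    return 3 * n**2 + 5

def f2(n):
    return 15 * n**2 + 17

def g(n):
    return n**2

# The ratios f1(n)/n^2 and f2(n)/n^2 approach 3 and 15 as n grows,
# so all three functions grow at the same quadratic rate.
for n in (10, 1000, 100000):
    print(n, f1(n) / g(n), f2(n) / g(n))
```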
The weird thing is that our motivation for studying big-Oh is to use these functions as bounds on the running time of programs. We try to count algorithm operations in a platform-independent manner, so that our estimate of running time is not mired in a particular processor chip, programming language, or operating system. After all that generalization, what's left is how quickly the algorithm's execution time grows with the size of the input.
However, running times that differ by factors such as 15 or 3 matter a great deal in practice. Once we're satisfied with, or not concerned about, how an algorithm scales with input size, we have to worry about the bottlenecks that make the difference between, say, 15n² and 13n² as bounds on running time. At that point we'd get a good profiler and see whether we could shave off a statement here, a loop iteration there, and optimize our program.
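To make the constant-factor point concrete, here is a rough sketch (the functions and sizes are mine, purely illustrative, not from the post): both versions below are quadratic, but one recomputes a value inside the inner loop, so its constant factor is larger. A profiler, or even a simple timer, makes the difference visible even though big-Oh cannot.

```python
import timeit

def count_pairs_slow(items):
    count = 0
    for x in items:
        for y in items:
            # len(items) is recomputed on every inner iteration -- wasted work
            if x + y > len(items):
                count += 1
    return count

def count_pairs_fast(items):
    count = 0
    threshold = len(items)   # hoisted out of the loops; same answer, smaller constant
    for x in items:
        for y in items:
            if x + y > threshold:
                count += 1
    return count

data = list(range(500))
print(timeit.timeit(lambda: count_pairs_slow(data), number=10))
print(timeit.timeit(lambda: count_pairs_fast(data), number=10))
```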
Sunday, March 21, 2010
1 comment:
Ah yes. Here we are with the notion of being different yet the same.
Just a question: if Big-Oh is what we use to show upper-boundedness, then we could use it to say that a program's running time is going to grow maybe up to, but not greater than, the Big-Oh function. But when we look at worst cases, do we only use Big-Omega, or do we use both Big-Omega and Big-Oh?