When learning programming, I wouldn't be surprised if many students have wondered how succinct or how meticulously detailed each line of their code should be. How many while loops and conditionals can, and should, they use to solve a problem? What sort of instructions should I, as the developer, give the computer so that it returns something valuable in a reasonable amount of time?
One of my favorite terms for code not written, uh, particularly cleanly or modularly is
spaghetti code: code that references other code or itself in various, fun ways that 1) make it harder to read and understand and 2) can significantly increase its processing time. For new programmers, it's easy to write code that solves a problem.
It's much more difficult to write code that solves problems gracefully, code that is human readable and requires minimal processing power.
In mathematics (please don't groan yet), we learned about polynomials and how to graph them; refer to the graph below.
Big O notation classifies algorithms by how their run time or resource use grows with the size of the input. The more resources an algorithm requires, the higher its complexity and the more computation it takes to run.
In very simple terms, two different algorithms that solve the exact same problem could both be acceptable. A computer reads lines of code step by step, but from an efficiency standpoint, if one algorithm takes 1000 steps to run, wouldn't you choose the one that takes 200 steps? For programmers building their first application, this might not be a big issue, since the input and the computation needed to produce an output are likely insignificant. However, when it comes to bigger applications requiring tens of thousands of lines of code, this can have a drastic effect overall.
A popular class of problems that newer programmers find themselves solving is searching. Given an array of names in alphabetical order, we could have a program search the array from the beginning to find a particular name. In the best-case scenario, maybe the very first element is satisfactory. Woohoo! It only took one step. In the worst-case scenario, the program might look through the entire array and find that the goal was the very last element. Yikes. Given a very long array (hundreds or thousands of elements), isn't there a better way to deal with this problem? By the way, the worst-case performance for this algorithm is O(n), or linear (see graph below).
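A minimal sketch of that front-to-back search in Python (the names and the function name are just illustrative):

```python
def linear_search(names, target):
    """Scan the array from the beginning; worst case checks every element: O(n)."""
    for i, name in enumerate(names):
        if name == target:
            return i  # found the target at index i
    return -1  # the target is not in the array

names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
```

If the target is `"Alice"`, we get lucky and stop after one comparison; if it is `"Eve"` (or missing entirely), we walk the whole array.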
But what if we could split the array in half and start from the middle? When we grab the middle element, the element we're looking for could be 1) ahead of it, 2) before it, or 3) the middle element itself. Say the target is ahead: we now know the first half of the sorted array does not contain it, so let's ignore that half and take the midpoint of the second half. Is our target ahead of that element, before it, or is it that element? We keep repeating this until we find our target. This takes considerably less computing power, and the worst-case performance is O(log n), or logarithmic. Visually, it's clear from the graph that this takes the cake.
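The halving strategy described above is binary search. Here is a short sketch in Python, assuming the array is already sorted alphabetically (the names are again just illustrative):

```python
def binary_search(names, target):
    """Repeatedly halve the sorted array; worst case is O(log n) comparisons."""
    lo, hi = 0, len(names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if names[mid] == target:
            return mid       # case 3: the middle element is the target
        elif names[mid] < target:
            lo = mid + 1     # case 1: target is ahead; discard the first half
        else:
            hi = mid - 1     # case 2: target is before; discard the second half
    return -1  # the halves shrank to nothing: target is absent

names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
```

Each pass discards half of what remains, so even an array of a thousand elements needs only about ten comparisons in the worst case.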
Writing any code that can solve a problem, in my opinion, is easy. Writing code grounded in computer science fundamentals is much more sought after. All it takes is practice, knowledge, and the desire to think outside the box every now and then.