We talked about different types of programming language statements – like assignments, ifs, and loops – as well as putting statements into functions that perform a computation, like calculating an exponent.
Some algorithms are better than others, even if they produce identical results.
Generally, the fewer steps it takes to compute, the better it is, though sometimes we care about other factors, like how much memory it uses.
The term algorithm comes from the name of Persian polymath Muḥammad ibn Mūsā al-Khwārizmī, one of the fathers of algebra more than a millennium ago.
The crafting of efficient algorithms – a problem that existed long before modern computers – led to a whole science surrounding computation, which evolved into the modern discipline of… you guessed it: Computer Science!
One of the most storied algorithmic problems in all of computer science is sorting… as in sorting names or sorting numbers.
Computers sort all the time.
Looking for the cheapest airfare, arranging your email by most recently sent, or scrolling your contacts by last name -- those all require sorting.
Computer scientists have spent decades inventing algorithms for sorting, with cool names like Bubble Sort and Spaghetti Sort.
One simple way: scan the list for the smallest number and swap it into the first spot, then repeat for the second spot, and so on. It’s called Selection Sort -- and it’s pretty basic.
Here’s the pseudo-code.
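Sketched as runnable Python rather than pseudo-code, a minimal version might look like this (the name selection_sort is just our label for it):

```python
def selection_sort(array):
    """Sort a list in place by repeatedly selecting the smallest
    remaining item and swapping it into position."""
    n = len(array)
    for i in range(n):
        smallest = i  # Assume the item at position i is the smallest...
        for j in range(i + 1, n):  # ...then scan the rest for anything smaller.
            if array[j] < array[smallest]:
                smallest = j
        # Swap the smallest remaining item into position i.
        array[i], array[smallest] = array[smallest], array[i]
    return array
```

For example, selection_sort([48, 13, 25, 8]) gives [8, 13, 25, 48].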
This function can be used to sort 8, 80, or 80 million numbers - and once you've written the function, you can use it over and over again.
With this sort algorithm, we loop through each position in the array, from top to bottom, and then for each of those positions, we loop through the remaining unsorted items to find the smallest number to swap into place.
You can see this in the code, where one FOR loop is nested inside of another FOR loop.
This means, very roughly, that if we want to sort N items, we have to loop N times, inside of which, we loop N times, for a grand total of roughly N times N loops...
Or N squared.
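To see that growth concretely, here's a quick Python sketch that just counts the comparisons selection sort performs:

```python
def count_comparisons(n):
    """Count the comparisons selection sort makes on n items."""
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):  # The inner scan for the smallest item.
            comparisons += 1
    return comparisons

# count_comparisons(10)   ->      45
# count_comparisons(100)  ->   4,950
# count_comparisons(1000) -> 499,500
# The count is n * (n - 1) / 2, which grows on the order of n squared.
```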
This relationship of input size to the number of steps the algorithm takes to run characterizes the complexity of the Selection Sort algorithm.
It gives you an approximation of how fast, or slow, an algorithm is going to be.
Computer scientists write this order of growth in something known as – no joke – “big O notation”.
N squared is not particularly efficient: sorting a million items that way means roughly a trillion trips through the inner loop.
That’s a big problem for a company like Google, which has to sort arrays with millions or billions of entries.
So, you might ask, as a burgeoning computer scientist, is there a more efficient sorting algorithm?
Yes: merge sort! The idea is to split the array in half, then split those halves in half, over and over, until we’re down to arrays of size one, which are sorted by definition. Now we are ready to merge, which is how “merge sort” gets its name: we repeatedly combine pairs of sorted arrays into bigger sorted arrays, by comparing their front items, until the whole array is back together in order.
Splitting in half repeatedly like this has a logarithmic relationship with the number of items: each split halves the problem, so breaking N items down to single elements takes about log base 2 of N rounds. And since each round of merging touches all N items, the total work is roughly N times log N.
For this reason, merge sort is much more efficient than selection sort’s N squared.
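Here's a minimal Python sketch of the idea (our rendering, not any canonical version):

```python
def merge_sort(array):
    """Sort by splitting in half, sorting each half, then merging."""
    if len(array) <= 1:
        return array  # A single item is already sorted.
    mid = len(array) // 2
    left = merge_sort(array[:mid])
    right = merge_sort(array[mid:])
    # Merge: repeatedly take the smaller of the two front items.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # One of these two is empty;
    merged.extend(right[j:])  # the other holds the sorted leftovers.
    return merged
```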
Sorting isn’t the only classic problem, though. Finding the route with the lowest total cost between two points on a map is a job for graph search!
The simplest approach would just be to try every single path exhaustively and calculate the total cost of each.
That’s a brute force approach, and it quickly becomes impractical as the map grows.
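Here's a tiny Python sketch of that brute force idea, assuming a hypothetical graph stored as adjacency lists of (neighbor, cost) pairs:

```python
def cheapest_path_brute_force(graph, start, goal, visited=None):
    """Try every simple path from start to goal and return the minimum
    total cost. graph maps each node to a list of (neighbor, cost) pairs."""
    if start == goal:
        return 0
    visited = (visited or set()) | {start}
    best = float("inf")
    for neighbor, cost in graph[start]:
        if neighbor not in visited:  # Never revisit a node on this path.
            best = min(best,
                       cost + cheapest_path_brute_force(graph, neighbor, goal, visited))
    return best  # float("inf") means no path exists.
```

This works, but the number of simple paths it explores can grow factorially with the number of nodes, so it's hopeless on big maps.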
The classic algorithmic solution to this graph problem was invented by one of the greatest minds in computer science practice and theory, Edsger Dijkstra, so it’s appropriately named Dijkstra's algorithm.
Dijkstra's original algorithm, conceived in 1956, ran in time proportional to the square of the number of nodes in the graph.
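Modern implementations speed this up with a priority queue. Here's a minimal Python sketch of that improved variant, not Dijkstra's original array-based version, with a made-up example map:

```python
import heapq

def dijkstra(graph, start):
    """Return the cheapest known cost from start to every reachable node.
    graph maps each node to a list of (neighbor, cost) pairs."""
    costs = {start: 0}
    queue = [(0, start)]  # Min-heap of (cost so far, node).
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > costs[node]:
            continue  # Stale entry: a cheaper route was already found.
        for neighbor, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return costs

# A made-up little map: three cities and the cost to travel between them.
graph = {
    "Highgarden": [("Kings Landing", 8), ("Riverrun", 6)],
    "Riverrun": [("Kings Landing", 1), ("Highgarden", 6)],
    "Kings Landing": [],
}
print(dijkstra(graph, "Highgarden"))
# {'Highgarden': 0, 'Kings Landing': 7, 'Riverrun': 6}
```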
Every time you use a service like Google Maps to find directions, an algorithm much like Dijkstra's is running on servers to figure out the best route for you.
Algorithms are everywhere and the modern world would not be possible without them.