Sorting is a common task in everyday life: articles of clothing in a drawer, collections of magazines, food in the fridge, and many other things. Depending on how many items there are and how we use them, we have devised all sorts of ways to sort: by color, by volume, by date of acquisition or purchase, and the like.

Just as lists have been adopted into computing, so has sorting, and today it is one of the most common computing tasks. Records in databases are usually sorted for better search efficiency, and items in binary search trees are kept in sorted order so that operations can take advantage of this property.

Consequently, a substantial amount of study has been devoted to the subject, and various algorithms have been devised. Some are obvious adaptations of the natural methods humans use in everyday life, while others are entirely different, created for the purpose of sorting collections that number in the thousands or even millions. It should be noted, though, that problems related to sorting remain unsolved: new algorithms are constantly being developed and old ones improved.

Many useful sorting algorithms devised thus far take advantage of the divide-and-conquer approach to problem solving, though each divides the work in its own way. For example, `mergesort` divides a collection in half, `quicksort` divides a collection into big and small values, and `radix sort` divides the problem by working on one digit of the key at a time.
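As an illustration of the halving strategy, here is a minimal `mergesort` sketch in Python (the function names are mine, for illustration; they do not come from the article or from Shaffer's text). The divide step simply splits the list in half; the conquer step merges the two sorted halves.

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two halves is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def mergesort(items):
    """Recursively split the collection in half, sort each half, then merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    return merge(mergesort(items[:mid]), mergesort(items[mid:]))
```

Because the split is always down the middle, the recursion depth is logarithmic in the input size regardless of the data's initial order.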

Many other useful sorting algorithms have been created, and some even work together for better efficiency and performance. For instance, `shellsort` and `quicksort` both take advantage of the best-case behavior of `insertion sort` to gain processing speed.
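A minimal `shellsort` sketch may make this concrete: each pass is a gapped insertion sort, and the final pass with a gap of 1 is plain insertion sort running close to its best case, because the earlier passes have already left the list nearly sorted. The gap sequence below is illustrative only; practical implementations choose it more carefully.

```python
def shellsort(items, gaps=(8, 4, 2, 1)):
    """Gapped insertion sort. The last gap must be 1, which makes the
    final pass an ordinary insertion sort on an almost-sorted list."""
    a = list(items)
    for gap in gaps:
        # Insertion sort on the interleaved sublists that are `gap` apart.
        for i in range(gap, len(a)):
            value = a[i]
            j = i
            while j >= gap and a[j - gap] > value:
                a[j] = a[j - gap]
                j -= gap
            a[j] = value
    return a
```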

The most direct approach to comparing two sorting algorithms would be to program both and measure their running times. However, this can be misleading: the result may be biased toward one of the two depending on the input values, especially when those values happen to favor that particular algorithm. Each algorithm has its own strengths, and which one performs better depends on the data being processed and on the application.
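To see how the choice of input can bias such a measurement, consider this sketch: a naive quicksort that always picks the first element as the pivot, instrumented with a comparison counter (the names and pivot choice are mine, for illustration). Already-sorted input happens to trigger this variant's worst case, so a benchmark run only on sorted data would unfairly penalize it.

```python
import random

def quicksort_comparisons(items):
    """First-element-pivot quicksort that also counts how many times
    an element is compared against a pivot."""
    count = 0

    def qs(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)  # each remaining element is compared to the pivot once
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return qs(smaller) + [pivot] + qs(larger)

    return qs(list(items)), count

data = list(range(100))
_, on_sorted = quicksort_comparisons(data)    # worst case for this pivot choice
random.seed(0)
random.shuffle(data)
_, on_shuffled = quicksort_comparisons(data)  # typical case
# on_sorted is 4950, i.e. n(n-1)/2; on_shuffled is far lower
```

The same input that is worst-case for this quicksort is best-case for insertion sort, which is exactly the kind of bias the paragraph above warns about.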

A thorough discussion of any one of these sorting algorithms requires an article of its own, which is why the next few articles will each be dedicated to one of them. We start with insertion sort. It takes *Θ(n²)* running time, as do bubble sort and selection sort, but it is easy to understand and implement. Nevertheless, these algorithms are occasionally the best tools for the job, depending on the situation.
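As a preview, here is one possible sketch of insertion sort in Python (the upcoming article presumably presents its own version; this one is only illustrative):

```python
def insertion_sort(items):
    """Insert each element into place within the already-sorted prefix.
    Worst case Θ(n²) comparisons and shifts; best case (already-sorted
    input) only n-1 comparisons."""
    a = list(items)
    for i in range(1, len(a)):
        value = a[i]
        j = i
        # Shift larger elements of the sorted prefix one slot to the right.
        while j > 0 and a[j - 1] > value:
            a[j] = a[j - 1]
            j -= 1
        a[j] = value
    return a
```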

From Clifford A. Shaffer’s *A Practical Introduction to Data Structures and Algorithm Analysis Edition 3.1 (Java Version)*