Sorting algorithm

In computer science and mathematics, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important to optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions:

  1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
  2. The output is a permutation, or reordering, of the input.

Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds.

Classification

Sorting algorithms used in computer science are often classified by:

  • Computational complexity (worst, average and best behaviour) of element comparisons in terms of the size of the list (n). For typical sorting algorithms good behavior is O(n log n) and bad behavior is Ω(n²). (See Big O notation.) Ideal behavior for a sort is O(n). Sort algorithms which only use an abstract key comparison operation always need Ω(n log n) comparisons in the worst case.
  • Computational complexity of swaps (for "in place" algorithms).
  • Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in place", such that only O(1) or O(log n) memory is needed beyond the items being sorted, while others need to create auxiliary locations for data to be temporarily stored.
  • Recursion. Some algorithms are either recursive or non recursive, while others may be both (e.g., merge sort).
  • Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values). See below for more information.
  • Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
  • General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include shaker sort and heapsort.

Stability

Stable sorting algorithms maintain the relative order of records with equal keys (i.e., sort key values). That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.

When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. However, assume that the following pairs of numbers are to be sorted by their first component:

(4, 2)  (3, 7)  (3, 1)  (5, 6)

In this case, two different results are possible, one which maintains the relative order of records with equal keys, and one which does not:

(3, 7)  (3, 1)  (4, 2)  (5, 6)   (order maintained)
(3, 1)  (3, 7)  (4, 2)  (5, 6)   (order changed)

Unstable sorting algorithms may change the relative order of records with equal keys, but stable sorting algorithms never do so. Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original data order as a tie-breaker. Remembering this order, however, often involves an additional space cost.
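
For illustration, a minimal Python sketch of this tie-breaking technique (the names are illustrative, not from the article): each record is decorated with its original index, so that any sort, stable or not, has no genuinely equal keys left to reorder. The per-record index is exactly the additional space cost the paragraph mentions.

    # Extend each key with the record's original position; ties on the first
    # component are then decided by original order, so the result is stable
    # regardless of whether the underlying sort itself is.
    data = [(4, 2), (3, 7), (3, 1), (5, 6)]
    decorated = [(record[0], i, record) for i, record in enumerate(data)]
    decorated.sort()
    result = [record for _, _, record in decorated]
    print(result)  # [(3, 7), (3, 1), (4, 2), (5, 6)] -- order maintained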

Sorting based on a primary, secondary, tertiary, etc. sort key can be done by any sorting method, taking all sort keys into account in comparisons (in other words, using a single composite sort key). If a sorting method is stable, it is also possible to sort multiple times, each time with one sort key. In that case the keys need to be applied in order of increasing priority.

Example: sorting pairs of numbers as above by first, then second component:

(4, 2)  (3, 7)  (3, 1)  (4, 6) (original)
(4, 2)  (3, 1)  (4, 6)  (3, 7) (after sorting by second component)
(3, 1)  (3, 7)  (4, 2)  (4, 6) (after sorting by first component)

On the other hand:

(3, 7)  (3, 1)  (4, 2)  (4, 6) (after sorting by first component)
(3, 1)  (4, 2)  (4, 6)  (3, 7) (after sorting by second component, 
                                order by first component is disrupted)
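
In Python, whose built-in sort is stable, the correct two-pass approach above can be written directly (a small sketch, not from the article):

    data = [(4, 2), (3, 7), (3, 1), (4, 6)]
    data.sort(key=lambda p: p[1])  # first pass: lower-priority key (second component)
    data.sort(key=lambda p: p[0])  # second pass: higher-priority key (first component)
    # Stability preserves the second-component order within equal first components:
    print(data)  # [(3, 1), (3, 7), (4, 2), (4, 6)]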

List of sorting algorithms

In this table, n is the number of records to be sorted. The columns "Average" and "Worst" give the time complexity in each case, under the assumption that the length of each key is constant, and that therefore all comparisons, swaps, and other needed operations can proceed in constant time. "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself, under the same assumption. These are all comparison sorts.

Name                 | Average    | Worst       | Memory   | Stable | Method       | Other notes
Bubble sort          | O(n²)      | O(n²)       | O(1)     | Yes    | Exchanging   |
Cocktail sort        | —          | O(n²)       | O(1)     | Yes    | Exchanging   |
Comb sort            | —          | —           | O(1)     | No     | Exchanging   | Small code size
Gnome sort           | —          | O(n²)       | O(1)     | Yes    | Exchanging   | Tiny code size
Selection sort       | O(n²)      | O(n²)       | O(1)     | No     | Selection    | Can be implemented as a stable sort
Insertion sort       | O(n²)      | O(n²)       | O(1)     | Yes    | Insertion    | Average case is also O(n + d), where d is the number of inversions
Shell sort           | —          | O(n log² n) | O(1)     | No     | Insertion    |
Binary tree sort     | O(n log n) | O(n log n)  | O(n)     | Yes    | Insertion    | When using a self-balancing binary search tree
Library sort         | O(n log n) | O(n²)       | O(n)     | Yes    | Insertion    |
Merge sort           | O(n log n) | O(n log n)  | O(n)     | Yes    | Merging      |
In-place merge sort  | O(n log n) | O(n log n)  | O(1)     | No     | Merging      | Example implementation here: http://citeseer.ist.psu.edu/472101.html
Heapsort             | O(n log n) | O(n log n)  | O(1)     | No     | Selection    |
Smoothsort           | —          | O(n log n)  | O(1)     | No     | Selection    |
Quicksort            | O(n log n) | O(n²)       | O(log n) | No     | Partitioning | Naïve variants use O(n) space; can be O(n log n) worst case if median pivot is used
Introsort            | O(n log n) | O(n log n)  | O(log n) | No     | Hybrid       | Used in most implementations of STL [citation needed]
Patience sorting     | —          | O(n²)       | O(n)     | No     | Insertion    | Finds all the longest increasing subsequences within O(n log n)
Strand sort          | O(n log n) | O(n²)       | O(n)     | Yes    | Selection    |

The following table describes sorting algorithms that are not comparison sorts. As such, they are not limited by the Ω(n log n) lower bound. Complexities below are in terms of n, the number of items to be sorted, k, the size of each key, and s, the chunk size used by the implementation. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n << 2^k, where << means "much less than."

Name            | Average    | Worst                | Memory       | Stable | n << 2^k | Notes
Pigeonhole sort | O(n + 2^k) | O(n + 2^k)           | O(2^k)       | Yes    | Yes      |
Bucket sort     | O(n·k)     | O(n²·k)              | O(n·k)       | Yes    | No       | Assumes uniform distribution of elements from the domain in the array.
Counting sort   | O(n + 2^k) | O(n + 2^k)           | O(n + 2^k)   | Yes    | Yes      |
LSD Radix sort  | O(n·k/s)   | O(n·k/s)             | O(n)         | Yes    | No       |
MSD Radix sort  | O(n·k/s)   | O(n·(k/s)·2^s)       | O((k/s)·2^s) | No     | No       |
Spreadsort      | O(n·k/s)   | O(n·(k - log n)^0.5) | O((k/s)·2^s) | No     | No       | Asymptotics are based on the assumption that n << 2^k, but the algorithm does not require this.

The following table describes some sorting algorithms that are impractical for real-life use due to extremely poor performance or a requirement for specialized hardware.

Name                | Average   | Worst         | Memory     | Stable | Comparison | Other notes
Bogosort            | O(n × n!) | ∞ (unbounded) | O(1)       | No     | Yes        | Average time using Fisher-Yates shuffle
Bozo sort           | O(n × n!) | ∞ (unbounded) | O(1)       | No     | Yes        | Average time is asymptotically half that of bogosort
Stooge sort         | O(n^2.71) | O(n^2.71)     | O(log n)   | No     | Yes        |
Bead sort           | N/A       | N/A           | N/A        | N/A    | No         | Requires specialized hardware
Simple pancake sort | O(n)      | O(n)          | O(log n)   | No     | Yes        | Count is number of flips.
Sorting networks    | O(log n)  | O(log n)      | O(n·log n) | Yes    | No         | Requires a custom circuit of size O(n·log n)

Additionally, theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity with additional constraints, including:

  • Han's algorithm, a deterministic algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.[2]
  • Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.[3]
  • An integer sorting algorithm taking O(n √(log log n)) time and O(n) space.[4]

While theoretically interesting, to date these algorithms have seen little use in practice.

Summaries of popular sorting algorithms

Bubble sort

Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. While simple, this algorithm is highly inefficient and is rarely used except in education: sorting 100 elements, for example, can take on the order of 100² = 10,000 comparisons. A slightly better variant, cocktail sort, works by inverting the ordering criteria and the pass direction on alternating passes. Its average case and worst case are both O(n²).
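
A minimal Python sketch of the pass structure just described (the function name is illustrative):

    def bubble_sort(a):
        """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            n -= 1  # the largest remaining element has bubbled to position n - 1
        return a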

Selection sort

Selection sort is a simple sorting algorithm that improves on the performance of bubble sort. It works by first finding the smallest element using a linear scan and swapping it into the first position in the list, then finding the second smallest element by scanning the remaining elements, and so on. Selection sort is unique compared to almost any other algorithm in that its running time is not affected by the prior ordering of the list: it performs the same number of operations because of its simple structure. Selection sort requires (n - 1) swaps and hence Θ(n) memory writes. However, Selection sort requires (n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1) / 2 = Θ(n²) comparisons. Thus it can be very attractive if writes are the most expensive operation, but otherwise selection sort will usually be outperformed by insertion sort or the more complicated algorithms.
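
A short Python sketch (illustrative). Note the single swap per outer pass, which is why the algorithm performs only Θ(n) writes:

    def selection_sort(a):
        n = len(a)
        for i in range(n - 1):
            # Linear scan for the index of the smallest element in a[i:].
            m = i
            for j in range(i + 1, n):
                if a[j] < a[m]:
                    m = j
            if m != i:
                a[i], a[m] = a[m], a[i]  # at most one swap per pass
        return a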

Insertion sort

Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists. This method is much more efficient than the bubble sort, though it has more constraints.
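
A minimal in-place Python sketch of the shifting variant described above:

    def insertion_sort(a):
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # Shift elements larger than key one slot to the right,
            # then drop key into the gap.
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a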

Shell sort

Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with less than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
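
A small Python sketch using Shell's original gap sequence n/2, n/4, ..., 1 (one of many possible sequences; the choice of sequence affects the running time):

    def shell_sort(a):
        n = len(a)
        gap = n // 2
        while gap > 0:
            # Gapped insertion sort: every slice a[k::gap] becomes sorted,
            # moving out-of-order elements gap positions at a time.
            for i in range(gap, n):
                key = a[i]
                j = i
                while j >= gap and a[j - gap] > key:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = key
            gap //= 2
        return a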

Merge sort

Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n).
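
A minimal Python sketch of a top-down recursive variant of the same idea (splitting in halves rather than starting from pairs; the merging step is the same):

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        # Merge two sorted halves; taking from `left` on ties keeps the sort stable.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out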

Heapsort

Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time.
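
A compact in-place Python sketch using an array-backed max-heap (the helper name sift_down is an assumption of this sketch, not from the article):

    def heapsort(a):
        n = len(a)

        def sift_down(root, end):
            # Move a[root] down until the subtree rooted there is a max-heap.
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end and a[child] < a[child + 1]:
                    child += 1  # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        # Build the max-heap, then repeatedly move the root (maximum) to the end.
        for start in range(n // 2 - 1, -1, -1):
            sift_down(start, n - 1)
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]
            sift_down(0, end - 1)
        return a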

Quicksort

Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower (O(n²)) performance, but if at each step we choose the median as the pivot then it works in O(n log n).
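
A minimal Python sketch of the partition-and-recurse idea. This is the naïve out-of-place variant mentioned in the table, using O(n) extra space; efficient implementations partition in place instead:

    def quicksort(a):
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]  # simple pivot choice; poor choices degrade to O(n²)
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)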

Bucket sort

Bucket sort is a sorting algorithm that works by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. A variation of this method called the single buffered count sort is faster than the quick sort and takes about the same time to run on any set of data.
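
A small Python sketch, assuming (as the table note says) keys uniformly distributed, here over [0, 1); the bucket count of 10 is arbitrary:

    def bucket_sort(a, num_buckets=10):
        buckets = [[] for _ in range(num_buckets)]
        for x in a:
            buckets[int(x * num_buckets)].append(x)  # map [0, 1) to a bucket index
        out = []
        for b in buckets:
            out.extend(sorted(b))  # sort each bucket individually
        return out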

Radix sort

Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small.
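
A minimal Python sketch of LSD radix sort for non-negative integers, processing s bits per pass (here s = 8) and using stable list buckets in place of the counting sort the paragraph mentions:

    def radix_sort(a, key_bits=32, chunk_bits=8):
        """Sort non-negative integers of at most key_bits bits."""
        mask = (1 << chunk_bits) - 1
        for shift in range(0, key_bits, chunk_bits):
            buckets = [[] for _ in range(1 << chunk_bits)]
            for x in a:
                # Appending preserves relative order, so each pass is stable.
                buckets[(x >> shift) & mask].append(x)
            a = [x for b in buckets for x in b]
        return a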

Memory usage patterns and index sorting

When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.

For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.

One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".[5]
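
A minimal Python sketch of the index ("tag") sort idea, with illustrative data:

    records = [("carol", 51), ("alice", 23), ("bob", 7)]  # imagine large rows
    # Sort a small index into the records by key, not the records themselves.
    index = sorted(range(len(records)), key=lambda i: records[i][1])
    print(index)  # [2, 1, 0] -- row order by key, without moving any record
    # Optional single pass to materialize a sorted copy:
    print([records[i] for i in index])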

Another technique for overcoming the memory-size problem is to combine two algorithms in a way that takes advantages of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit easily in RAM (say, a few thousand elements), the chunks sorted using an efficient algorithm (such as quicksort or heapsort), and the results merged as per mergesort. This is less efficient than just doing mergesort in the first place, but it requires less physical RAM (to be practical) than a full quicksort on the whole array.
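
A small in-memory Python sketch of the chunk-sort-then-merge idea (a real external sort would write each sorted chunk to disk and stream the merge; heapq.merge performs the k-way merge lazily):

    import heapq

    def chunked_sort(items, chunk_size=4096):
        # Sort pieces that fit comfortably in RAM, then merge as per mergesort.
        chunks = [sorted(items[i:i + chunk_size])
                  for i in range(0, len(items), chunk_size)]
        return list(heapq.merge(*chunks))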

Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.

Graphical representations

Microsoft's "Quick" programming languages (such as QuickBASIC and QuickPascal) have a file named "sortdemo" (with extension BAS and PAS for QB and QP, respectively) in the examples folder that provides a graphical representation of several of the various sort procedures described here, as well as performance ratings of each.

Notes and references

  1. ^ http://www.cs.duke.edu/~ola/papers/bubble.pdf
  2. ^ Y. Han. Deterministic sorting in O(n log log n) time and linear space. Proceedings of the thirty-fourth annual ACM symposium on Theory of Computing, Montreal, Quebec, Canada, 2002, p. 602-608.
  3. ^ M. Thorup. Randomized Sorting in O(n log log n) Time and Linear Space Using Addition, Shift, and Bit-wise Boolean Operations. Journal of Algorithms, Volume 42, Number 2, February 2002, pp. 205-230.
  4. ^ Y. Han, M. Thorup. Integer Sorting in O(n √(log log n)) Time and Linear Space. Proceedings of the 43rd Symposium on Foundations of Computer Science, 2002, p. 135-144.
  5. ^ tag sort Definition
