Summary of sorting and searching algorithms
One. Binary search
Algorithm steps:
A. Assume the elements in the table are arranged in ascending order.
B. If the middle element equals the search target, the search succeeds; otherwise the middle element splits the table into two ordered sub-tables.
C. If the search target is less than the middle element, continue searching in the front sub-table; otherwise, search in the back sub-table.
D. Repeat the process until the search succeeds, or the sub-table becomes empty and the search fails.
Evaluation:
1). It is more efficient than linear search; in the worst case about log₂N comparisons are needed.
2). The table must already be ordered, so it is especially suitable for a linear table that is built once and then searched frequently.
3). Its structure is recursive, so it can be implemented recursively.

Code implementation:

int BinarySearch(int data[], int left, int right, int key) {
    int mid = (left + right) / 2;
    if (left <= right) {
        if (key < data[mid]) {
            return BinarySearch(data, left, mid - 1, key);
        } else if (key > data[mid]) {
            return BinarySearch(data, mid + 1, right, key);
        } else {
            return mid;
        }
    } else {
        return -1;
    }
}
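A minimal usage sketch (the array contents and the main function are only illustrative): the function expects an ascending array and is called with the inclusive bounds of the range to search.

#include <stdio.h>

int BinarySearch(int data[], int left, int right, int key); /* defined above */

int main(void) {
    int data[] = {1, 3, 5, 7, 9, 11};      /* must already be in ascending order */
    int pos = BinarySearch(data, 0, 5, 7); /* search the whole table, inclusive bounds */
    printf("%d\n", pos);                   /* prints 3; -1 is returned when the key is absent */
    return 0;
}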
Two. Sorting algorithms (ascending order is assumed by default)
Related concepts:
Internal sort: if the whole table stays in main memory during sorting, so that no data is exchanged between main memory and external storage, the sort is called an internal sort.
External sort: if the sorting process needs to exchange data between main memory and external storage, it is called an external sort.
Stable sort: if the sequence to be sorted contains equal elements and their relative order is unchanged after sorting, the sort is called stable; otherwise it is unstable (see the sketch below).
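A minimal sketch of what stability means, assuming illustrative records of the form {key, tag}: a stable insertion sort keeps the two records whose key is 2 in their original order.

#include <stdio.h>

typedef struct { int key; char tag; } Record;

/* Insertion sort on key; the strict < comparison never moves an element past an
   equal key, so equal keys keep their original relative order (the sort is stable) */
void insert_sort_records(Record data[], size_t size) {
    size_t i;
    for (i = 1; i < size; i++) {
        Record inserted = data[i];
        size_t j;
        for (j = i; j > 0 && inserted.key < data[j-1].key; j--)
            data[j] = data[j-1];
        data[j] = inserted;
    }
}

int main(void) {
    Record r[] = {{2, 'a'}, {1, 'x'}, {2, 'b'}};
    insert_sort_records(r, 3);
    for (int i = 0; i < 3; i++)
        printf("%d%c ", r[i].key, r[i].tag);  /* prints 1x 2a 2b : 2a stays before 2b */
    printf("\n");
    return 0;
}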
1. Exchange sorts
Basic idea: compare the keys of the records to be sorted in pairs, and swap any two records found to be in the wrong order, until no records remain out of order.
Bubble sort and quick sort are the common examples.
1). Bubble sort
Algorithm steps:
A. Compare adjacent elements in pairs; if the former is greater than the latter, swap them.
B. Proceed from the first pair to the last pair; the largest element settles at the end.
C. Repeat the above steps on the unsorted part, so that the second largest value settles into place.
D. Each pass scans fewer elements; stop when a pass performs no swaps.
Evaluation:
A. Average time complexity: O(N²)
B. It is a stable sort.
C. It is very sensitive to how ordered the data already is (when the table is close to being ordered, bubble sort may be the fastest of these sorts).
Code implementation:

void bubble_sort(int data[], size_t size) {
    int i, j;
    for (i = 0; i < size - 1; i++) {
        int ordered = 1;                /* assume this pass performs no swaps */
        for (j = 0; j < size - 1 - i; j++) {
            if (data[j+1] < data[j]) {
                int swap = data[j];
                data[j] = data[j+1];
                data[j+1] = swap;
                ordered = 0;
            }
        }
        if (ordered)                    /* no swaps in this pass: the table is sorted */
            break;
    }
}
2). Quick sort
Algorithm steps:
A. Select any element of the sequence as the pivot.
B. Place all elements smaller than the pivot before it and all elements larger than the pivot after it; elements equal to the pivot may go on either side. This step is called partitioning.
C. Recursively partition the parts before and after the pivot in the same way, until every part contains no more than one element.
Evaluation:
A. Average time complexity: O(N log N)
B. It is not a stable sort.
C. It is not sensitive to the initial order of the data.
Code implementation:
void QuickSort(int data[], size_t left, size_t right) {
    size_t mid = (left + right) / 2;
    int tmp = data[mid];               /* the middle element is used as the pivot */
    size_t i = left, j = right;
    while (i < j) {
        /* scan from the left for a value greater than the pivot */
        for (; i < mid && data[i] <= tmp; i++);
        if (i < mid) {
            data[mid] = data[i];
            mid = i;
        }
        /* scan from the right for a value smaller than the pivot */
        for (; j > mid && data[j] >= tmp; j--);
        if (j > mid) {
            data[mid] = data[j];
            mid = j;
        }
    }
    data[mid] = tmp;
    if (mid - left > 1)
        QuickSort(data, left, mid - 1);
    if (right - mid > 1)
        QuickSort(data, mid + 1, right);
}
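A minimal usage sketch (the array values are illustrative): the right parameter is the inclusive index of the last element, so a table of n elements is sorted with QuickSort(data, 0, n - 1); since the bounds are of type size_t, the call assumes n >= 1.

#include <stdio.h>
#include <stddef.h>

void QuickSort(int data[], size_t left, size_t right); /* defined above */

int main(void) {
    int data[] = {4, 2, 9, 1, 5};
    QuickSort(data, 0, 4);             /* sort data[0..4], inclusive bounds */
    for (int i = 0; i < 5; i++)
        printf("%d ", data[i]);        /* prints 1 2 4 5 9 */
    printf("\n");
    return 0;
}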
2. Insertion sorts
Basic idea: insert one unsorted record at a time into its proper position in the previously sorted sub-table, until all records have been inserted.
1). Direct insertion sort
Algorithm steps:
A. At the start the sequence is divided into two partitions: the first partition contains only the first element (and is therefore ordered), and the second partition contains the remaining elements.
B. Take the next element and scan the sorted partition from back to front.
C. Move each element that is larger than the taken element one position back.
D. When an element less than or equal to the taken element is found, insert the taken element after it.
E. Repeat from step B until all elements have been processed.
Evaluation:
A. Average time complexity: O(N²)
B. It is a stable sort.
C. It is very sensitive to the initial order of the data.
D. It moves elements rather than swapping them, so it is slightly more efficient than bubble sort.
Code implementation:
void InsertSort(int data[], size_t size) {
    size_t i;
    for (i = 1; i < size; i++) {
        int inserted = data[i];
        size_t j;
        for (j = i; j > 0 && inserted < data[j-1]; j--) {
            data[j] = data[j-1];
        }
        if (j != i)
            data[j] = inserted;
    }
}
2). Shell sort
Algorithm steps:
A. Choose a positive integer d1 smaller than the sequence length n as the increment, and divide the sequence into d1 groups.
B. Apply direct insertion sort within each group so that every group becomes ordered.
C. Choose a positive integer d2 smaller than d1 and divide the sequence into d2 groups.
D. Repeat steps B and C until the increment reaches 1.
Evaluation:
A. Average time complexity: roughly O(N^1.3) (the performance of Shell sort depends on the chosen increment sequence, and there is no definitive conclusion about the best increments, so the average time complexity is generally quoted as about O(N^1.3)).
B. It is not a stable sort.
C. Its efficiency is usually higher than that of direct insertion sort.
D. It is an in-place sort.
Code implementation:
void ShellSort(int data[], size_t size) {
    int i, j, gap, tmp;
    gap = size / 2;
    while (gap > 0) {
        /* direct insertion sort within each gap-separated group */
        for (i = gap; i < size; i++) {
            tmp = data[i];
            j = i - gap;
            while (j >= 0 && tmp < data[j]) {
                data[j + gap] = data[j];
                j = j - gap;
            }
            data[j + gap] = tmp;
        }
        gap /= 2;
    }
}
3. Selection sorts
Basic idea: repeatedly select the record with the smallest key from the records that are still unsorted and place it at the end of the ordered sub-table, until all records have been sorted.
The common examples are direct selection sort and heap sort.
1). Direct selection sort
Algorithm steps:
A. Find the smallest element in the whole sequence and swap it with the first element.
B. Find the smallest element in the remaining sequence and swap it with the second element.
C. Continue in this way until only one element remains in the unsorted part.
Evaluation:
A. Average time complexity: O(N²)
B. It is not a stable sort.
C. It is not sensitive to the initial order of the data.
D. It performs fewer swaps than bubble sort, so it usually does better.
Code implementation:

void select_sort(int data[], size_t size) {
    size_t i;
    for (i = 0; i < size - 1; i++) {
        size_t min = i, j;
        for (j = i + 1; j < size; j++)
            if (data[j] <= data[min])
                min = j;
        if (min != i) {
            int swap = data[i];
            data[i] = data[min];
            data[min] = swap;
        }
    }
}
4. Merge sort
Basic idea: repeatedly merge two or more ordered tables into a new ordered table. The simplest form merges two ordered sub-tables into a single ordered table, which is called two-way merging.
Algorithm steps:
A. Treat each element of the sequence as an ordered sub-table of length 1 (a single element is trivially ordered).
B. Repeatedly merge adjacent sub-tables in pairs, doubling the sub-table length after each pass.
C. If the number of elements is not an exact multiple of the sub-table length, the last sub-table is shorter than the others; adjust its upper bound and merge it as well.
Evaluation:
A. Merge sort is a stable sort.
B. Merge sort is easy to implement on linked lists. For a table of length n, about log₂n merge passes are needed, and each pass takes O(n) time, so the time complexity is O(n·log₂n) in both the best and the worst case.
C. Each merge needs an auxiliary array to hold the result of combining the two ordered sub-tables; this space is released once sorting finishes, so the total auxiliary space complexity is O(n). Merge sort is therefore not an in-place sort.
Code implementation:

#include <stdlib.h>   /* malloc, free */

/* One merge: combines the two ordered sub-tables data[low..mid] and data[mid+1..high] */
void Merge(int data[], int low, int mid, int high) {
    int* R1 = (int*)malloc((high - low + 1) * sizeof(int)); /* allocate auxiliary storage */
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high)
        if (data[i] <= data[j]) {
            R1[k] = data[i]; k++; i++;
        } else {
            R1[k] = data[j]; k++; j++;
        }
    /* When the two sub-tables have different lengths, copy the rest of the longer one into R1 */
    while (i <= mid)  { R1[k] = data[i]; k++; i++; }
    while (j <= high) { R1[k] = data[j]; k++; j++; }
    /* Copy the whole of R1 back into data, completing one merge */
    for (k = 0, i = low; i <= high; k++, i++)
        data[i] = R1[k];
    free(R1);
}

void MergePass(int data[], int length, int n) {
    int i;
    for (i = 0; i + 2 * length - 1 < n; i += 2 * length)
        Merge(data, i, i + length - 1, i + 2 * length - 1);
    /* When fewer than two full sub-tables remain, merge the leftover elements */
    if (i + length - 1 < n)
        Merge(data, i, i + length - 1, n - 1);
}

void MergeSort(int data[], int n) {
    int length;
    for (length = 1; length < n; length *= 2)
        MergePass(data, length, n);
}
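A minimal usage sketch (the array values are illustrative): MergeSort takes the table and its length, so the whole table is sorted with a single call.

#include <stdio.h>

void MergeSort(int data[], int n); /* defined above */

int main(void) {
    int data[] = {8, 3, 5, 1, 9, 2, 7};
    MergeSort(data, 7);
    for (int i = 0; i < 7; i++)
        printf("%d ", data[i]);      /* prints 1 2 3 5 7 8 9 */
    printf("\n");
    return 0;
}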
Each sorting method has its own limitations, and the most suitable algorithm should be chosen for the situation at hand.
Summary:

1. When the number of records to be sorted n is small (n <= 50), direct insertion sort or direct selection sort can be used. When the records themselves are small, direct insertion sort is preferable; otherwise, since direct selection sort moves fewer records than direct insertion sort, direct selection sort should be chosen.
2. If the sequence is already basically ordered, direct insertion sort, bubble sort, or a randomized quick sort is appropriate.

3. When n is large, a method with time complexity O(n·log₂n) should be used, such as quick sort or merge sort. Quick sort is considered the better choice among comparison-based internal sorting methods; when the keys to be sorted are randomly distributed, quick sort has the shortest average time. However, quick sort is not stable, so if a stable sort is needed, merge sort should be used.
4. If two ordered tables need to be combined into a new ordered table, merging (as in merge sort) is the best method.
