Q: What are the worst-case and average-case complexities of the linear search algorithm, with explanation?

Best Answer

For a list with n elements, the worst-case cost of a linear search is O(n): every element may have to be examined. On average a successful search examines about n/2 elements, which is still O(n) once constant factors are dropped. However, if the list is ordered by access probability and those probabilities fall off geometrically, the expected cost becomes constant, O(1). Compare this with a binary search on a sorted list, which costs O(log n).
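As a minimal sketch of the algorithm being analysed, assuming a plain C array of ints (the function name and signature are illustrative, not part of the original answer):

    #include <stddef.h>

    /* Returns the index of the first element equal to key, or -1 if absent.
       Worst case: n comparisons (key missing or in the last slot) -> O(n).
       Average case for a successful search: about n/2 comparisons, still O(n). */
    int linear_search(const int *a, size_t n, int key)
    {
        for (size_t i = 0; i < n; ++i) {
            if (a[i] == key)
                return (int)i;
        }
        return -1;
    }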

Wiki User (11y ago)

More answers

Wiki User (9y ago)

Best case is O(1), worst case is O(n), and the average case is about n/2 comparisons, which is still O(n). With a linear search you start at the beginning of the sequence and test each element in turn until you find the one containing the value you seek. The assumption is that the elements are not ordered, so there is no way to narrow the search down with a non-linear method such as binary search. The worst cases occur when the value does not exist or is held by the last element, because then you must check every element, giving O(n). The best case, O(1), occurs only when the first element happens to be the one you seek. The average case is roughly n/2 comparisons because, over many searches, the value is equally likely to be anywhere in the sequence, so on average you examine half of it.

When the elements are ordered, you can instead start in the middle and, if the middle element is not the value you are looking for, eliminate the half that cannot contain it. By repeatedly checking the middle element of the remaining portion you narrow the search down in far fewer steps, with a worst case of O(log n), which is much better than even the average case of a linear search.
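For contrast, here is a sketch of the halving strategy described above, assuming a sorted C array of ints (names are illustrative):

    #include <stddef.h>

    /* Binary search over a sorted array: each comparison halves the remaining
       range, so at most about log2(n) + 1 comparisons are needed -> O(log n).
       Returns the index of key, or -1 if it is not present. */
    int binary_search(const int *a, size_t n, int key)
    {
        size_t lo = 0, hi = n;          /* search the half-open range [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return (int)mid;
            else if (a[mid] < key)
                lo = mid + 1;           /* discard the lower half */
            else
                hi = mid;               /* discard the upper half */
        }
        return -1;
    }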

Wiki User (8y ago)

Best case is O(1), which occurs when the value being sought is at the start of the sequence. Worst case is O(n) for a sequence of n elements, which occurs when the value being sought is at the end of the sequence or not present at all. The average case is about n/2 comparisons, which is still O(n).

Wiki User (13y ago)

Which linear search algorithm? You do not specify.

Wiki User (11y ago)

Worst case: O(n)

Average case: Θ(n)

Wiki User (11y ago)

For n elements, about n/2 comparisons on average, which is O(n).

Wiki User (12y ago)

O(n)

Continue Learning about Engineering

The time complexity of the sequential search algorithm is?

O(N), where N is the number of elements in the array you are searching, so it has linear complexity.


Which algorithm is slowest and why?

An algorithm that takes infinite time is the slowest. If an algorithm's running time is unbounded it may never produce a result at all; such algorithms are of no practical use whatsoever, other than to demonstrate how not to write an algorithm. As an example, consider an algorithm that sorts elements by shuffling them randomly and then testing whether they happen to be in the correct order (sometimes called bogosort). If you imagine a deck of 52 playing cards, you will appreciate how unlikely it is that a random shuffle leaves them all in the correct order.

An algorithm does not need unbounded running time to be useless in practice. Bubble sort is a typical example. It has an average time complexity of O(n^2), where n is the number of elements, which is tolerable for a small set of data but impractical for large sets. It is not only the time complexity that is the problem, it is also the sheer number of swap operations required on each pass. Insertion sort has the same O(n^2) time complexity but is significantly faster on average because it shifts elements rather than performing full swaps. Even so, like bubble sort, it is only suitable for relatively small sets. For practical sorting purposes you need an algorithm with a time complexity of O(n log n) and, in this respect, hybrid sorts like introsort win hands down.
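As a rough illustration of the swap cost mentioned above, here is a plain bubble sort in C (an illustrative sketch, not taken from the original answer); the nested loops give O(n^2) comparisons, and in the worst case nearly every comparison also triggers a three-assignment swap:

    #include <stddef.h>

    /* Bubble sort: repeatedly steps through the array, swapping adjacent
       elements that are out of order. O(n^2) comparisons, and up to
       O(n^2) swaps (three assignments each) in the worst case. */
    void bubble_sort(int *a, size_t n)
    {
        for (size_t pass = 0; pass + 1 < n; ++pass) {
            for (size_t i = 0; i + 1 < n - pass; ++i) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];     /* each swap costs three assignments */
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                }
            }
        }
    }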


What is the algorithm used in the Codebook Excited Linear Predictor voice CELP compression?

Code-Excited Linear Prediction (CELP) is a speech-coding algorithm used for digital voice compression. It delivers high speech quality at relatively low bit rates.


What are the advantages and disadvantages of different algorithms?

Different algorithms do different things, so it makes little sense to compare them directly. For example, the accumulate algorithm performs the same operation upon every element of a container, whereas a sorting algorithm sorts the elements of a container. Each algorithm requires a different set of concepts: an accumulate algorithm needs a data sequence with at least forward iteration and elements which support the operation to be performed, whereas a sorting algorithm generally requires random-access iterators and elements that support a given comparison operation (such as the less-than operator).

Even if two algorithms have exactly the same time and space complexities, it does not follow that both will complete the task in the same time. For instance, the accumulate algorithm is a linear algorithm with a time complexity of O(n) regardless of which operation is being performed, but the cost of the operation itself can greatly affect the actual time taken, even when the operations have the same time complexity. If we use accumulate in its default form (to sum all the elements in a sequence), the per-element operation has constant-time complexity, O(1). If we instead scale each element and sum the products, the algorithm takes noticeably longer (possibly twice as long) even though that operation is also O(1).

Consider the time complexity of adding one value to another: a += b. This has to be a constant-time operation because the actual values of a and b have no effect upon the time taken to produce the result in a; 0 += 0 takes the same number of CPU cycles as 42 += 1000000. Now consider the operation to scale and sum: a += b * 42, where 42 is the scalar. This is also a constant-time operation, but it physically takes longer than the previous one because roughly twice as many individual operations are being performed.

The only meaningful comparison is between algorithms that achieve exactly the same goal in different ways; only then does comparing their time complexities make sense. Even so, time complexity is merely an indication of performance: two sorting algorithms with the same time complexity can have very different runtime performance, depending on the number and type of operations performed on each iteration. Only real-world performance testing can determine which algorithm gives the best performance on average. With sorting algorithms we often find one algorithm ideally suited to sorting small sequences (such as insertion sort) and others ideally suited to larger sets (such as merge sort or quicksort); combining the two into a hybrid algorithm gives us the best of both worlds.
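A small C sketch of the point being made, assuming plain int arrays (the function names are illustrative): both loops are O(n), yet the second does roughly twice the per-element work even though each individual operation is O(1).

    #include <stddef.h>

    /* Sum the elements: one O(1) addition per element, O(n) overall. */
    long sum(const int *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; ++i)
            total += a[i];                  /* a += b */
        return total;
    }

    /* Scale each element and sum the products: still O(n) overall, but each
       step does an extra multiplication, so it runs measurably slower. */
    long scaled_sum(const int *a, size_t n, int scalar)
    {
        long total = 0;
        for (size_t i = 0; i < n; ++i)
            total += (long)a[i] * scalar;   /* a += b * 42 */
        return total;
    }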


What is big-O notation for describing the time complexity of an algorithm?

Big O notation lets you specify the complexity of an algorithm with a simple formula by dismissing lower-order terms and constant factors. For example, one might say that a sorting algorithm has O(n * lg(n)) complexity, where n is the number of items to sort. Big O notation is used in computer science to describe the performance or complexity of an algorithm; it typically describes the worst-case scenario and can refer to the execution time required or the space used (e.g. in memory or on disk) by an algorithm.
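A small illustration of how constant factors and lower-order terms are dismissed, using hypothetical C functions (the step counts in the comments are approximate):

    #include <stddef.h>

    /* Roughly n additions plus a couple of constant steps -> O(n):
       the constants are dismissed. */
    long sum_all(const int *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; ++i)
            total += a[i];
        return total;
    }

    /* Roughly n*(n-1)/2 comparisons, about n^2/2 -> O(n^2): the factor
       of 1/2 and the lower-order terms are dismissed. */
    size_t count_duplicate_pairs(const int *a, size_t n)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; ++i)
            for (size_t j = i + 1; j < n; ++j)
                if (a[i] == a[j])
                    ++count;
        return count;
    }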

Related questions

What is a search algorithm?

The linear search algorithm is a special case of the brute force search.


What has the author Charles Blair written?

Charles Blair has written: 'The iterative step in the linear programming algorithm of N. Karmarkar', 'The computational complexity of multi-level linear programs', 'Representation for multiple right-hand sides', 'Random linear programs with many variables and few constraints', 'Ascent Ray Theorems and some applications' -- subject(s): Linear programming


What is linear searching?

The linear search algorithm is a special case of the brute force search.


Complexity of linear search?

The complexity of linear search in the worst case is f(n) = n + 1, which is O(n).


Will the SELECT algorithm work in linear time if the elements are divided into groups of 7?

Yes. The median-of-medians SELECT algorithm still runs in linear time when the input is divided into groups of 7; any constant group size of five or more preserves the O(n) bound.


How can you convert a simple algorithm to a recursive algorithm?

LinearSearch(a, item)
    n = length(a)
    for i = 1 to n do
        if a[i] == item then return i
    end for
    return -1
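The answer above only gives the iterative form. A sketch of the recursive conversion in C, assuming the sequence is an array of ints (the names are illustrative):

    #include <stddef.h>

    /* Recursive linear search: examine a[i]; if it is not the item, recurse
       on the rest of the range. Returns the index of item, or -1 if absent. */
    static int linear_search_from(const int *a, size_t i, size_t n, int item)
    {
        if (i >= n)
            return -1;                               /* base case: range exhausted */
        if (a[i] == item)
            return (int)i;                           /* base case: found it */
        return linear_search_from(a, i + 1, n, item);   /* recursive step */
    }

    int linear_search_recursive(const int *a, size_t n, int item)
    {
        return linear_search_from(a, 0, n, item);
    }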


How would you find a linear relationship from a verbal explanation?

Look for a constant rate of change: if the description says that one quantity changes by the same fixed amount for every unit change in the other, the relationship is linear and can be written in the form y = mx + b.


Definition of linear regression algorithm in Machine learning?

The linear regression algorithm models a linear relationship between an independent variable and a dependent variable and uses it to predict future outcomes. It is a statistical method widely used in machine learning and data-science forecasting.
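A minimal sketch of simple one-variable least-squares regression in C, assuming plain double arrays (the function and parameter names are illustrative): it computes the slope m and intercept b of the best-fit line y = m*x + b.

    #include <stddef.h>

    /* Fit y = m*x + b to n points by ordinary least squares.
       Writes the slope to *m and the intercept to *b.
       Returns 0 on success, -1 if the fit is degenerate (n < 2 or constant x). */
    int fit_line(const double *x, const double *y, size_t n, double *m, double *b)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (size_t i = 0; i < n; ++i) {
            sx  += x[i];
            sy  += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double denom = (double)n * sxx - sx * sx;    /* n*sum(x^2) - (sum(x))^2 */
        if (n < 2 || denom == 0.0)
            return -1;
        *m = ((double)n * sxy - sx * sy) / denom;    /* slope */
        *b = (sy - *m * sx) / (double)n;             /* intercept */
        return 0;
    }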

