Q: Why does algorithm efficiency matter when the number of inputs becomes large?

Best Answer

The complexity of an algorithm can be expressed in terms of the amount of memory consumed and time taken relative to the number of inputs. In other words, the algorithm's time and space complexities can both be expressed as a function of n, the number of inputs.

Typically, we express algorithm complexities using Big-O notation, which describes how the cost grows with n; O(1) denotes constant time or constant space, depending on which is being measured.

Constant-time algorithms that operate upon data sequences of n elements do exist. For example, an algorithm that returns the first element of a sequence of n elements runs in constant time, provided we also have constant-time access to the first element of the sequence (which is the case for most sequence containers).
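
For illustration, a minimal Python sketch (the helper name `first` is just for this example):

```python
def first(seq):
    """Return the first element of a sequence in O(1) time.

    Works for any container with constant-time access to index 0,
    such as a Python list; the cost does not depend on len(seq).
    """
    return seq[0]

print(first([42, 7, 19]))  # 42, same cost for 3 or 3 million elements
```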

More commonly, an algorithm operating upon a data sequence needs to perform one or more passes over that sequence, either in whole or in part. As such, a single pass over a sequence of n inputs would take O(n) time, also known as linear time.
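
As a small illustrative sketch, finding the largest value is a typical single-pass, O(n) algorithm:

```python
def maximum(seq):
    """Find the largest value with one pass over the sequence: O(n) time."""
    largest = seq[0]
    for value in seq[1:]:        # n - 1 comparisons in total
        if value > largest:
            largest = value
    return largest

print(maximum([3, 9, 1, 7]))  # 9
```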

The algorithm to search an unsorted sequence for a given value has a best case of O(1) and a worst case of O(n). The best case occurs when the first element happens to be the value we're looking for, while the worst case occurs when it is the last element or the value does not exist. If the value does exist, the average case examines about n/2 elements, because on average the value lies near the middle of the sequence; this is still O(n), since Big-O ignores constant factors.
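
A sketch of that linear search (function and variable names are purely illustrative) makes the best and worst cases visible:

```python
def linear_search(seq, target):
    """Return the index of target, or -1 if absent.

    Best case O(1): target is seq[0].
    Worst case O(n): target is last, or not present at all.
    Average case: about n/2 comparisons when present, still O(n).
    """
    for i, value in enumerate(seq):
        if value == target:
            return i
    return -1

print(linear_search([5, 3, 8, 1], 8))   # 2
print(linear_search([5, 3, 8, 1], 42))  # -1 (worst case: whole pass)
```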

Simple sorting algorithms such as selection sort, insertion sort and bubble sort have a worst case of O(n*n). That is, one complete pass of the sequence places one element in its correct place, thus we must make n passes to place all n elements in their correct places. O(n*n) is known as quadratic time. For short sequences, O(n*n) is a perfectly acceptable time complexity, but for larger sequences it is not: a sequence of 10 elements would take just 100 units of time, but a sequence of 100 elements would take 10,000 units of time on the same machine. In other words, it would take 100 times longer to sort a sequence that is just 10 times larger. Similarly, a sequence of 1,000 elements (100 times larger) would take 1,000,000 units of time (10,000 times longer).
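
Selection sort is one such simple algorithm; this illustrative sketch shows the n passes, each of which places one element in its final position:

```python
def selection_sort(seq):
    """Simple O(n*n) sort: each outer pass places one element correctly."""
    n = len(seq)
    for i in range(n):                        # n passes ...
        smallest = i
        for j in range(i + 1, n):             # ... each scanning the remainder
            if seq[j] < seq[smallest]:
                smallest = j
        seq[i], seq[smallest] = seq[smallest], seq[i]
    return seq

print(selection_sort([4, 2, 7, 1, 3]))  # [1, 2, 3, 4, 7]
```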

As n increases, the time taken to sort n elements inevitably increases; however, we can slow down the increase in time relative to an increase in n by using a more efficient algorithm. The quicksort algorithm has an average case of O(n log n), where log n is the binary logarithm of n. This is known as linearithmic (or log-linear) time. Quicksort is well suited to sorting large sequences because the time grows far more slowly than quadratic time: doubling the number of elements only slightly more than doubles the time taken, rather than quadrupling it.
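
The difference is easy to see by tabulating the two growth rates in the same abstract "units of time" used above (a small illustrative script):

```python
import math

# Rough "units of time" for quadratic vs linearithmic growth.
for n in (10, 100, 1000, 10000):
    print(f"n={n:>6}  n*n={n * n:>12,}  n*log2(n)={n * math.log2(n):>12,.0f}")
```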

Quicksort works by selecting a pivot value from the unsorted set and dividing the set into two parts around the pivot, such that all values in the first part are less than the pivot, while values greater than or equal to the pivot are placed in the second part. The pivot is placed between the two parts and is therefore in its correct place. We then repeat the process with each of the two parts. When a part has zero or one elements, it is completely sorted. When all parts are sorted, the whole set is sorted.
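
Here is a hedged sketch of that scheme; for simplicity it uses the last element as the pivot, with pivot selection discussed next:

```python
def quicksort(seq, lo=0, hi=None):
    """In-place quicksort: partition around a pivot, then recurse on each part."""
    if hi is None:
        hi = len(seq) - 1
    if lo >= hi:                       # zero or one element: already sorted
        return seq
    pivot = seq[hi]                    # naive choice; see median-of-three below
    i = lo
    for j in range(lo, hi):
        if seq[j] < pivot:             # smaller values go to the first part
            seq[i], seq[j] = seq[j], seq[i]
            i += 1
    seq[i], seq[hi] = seq[hi], seq[i]  # pivot lands in its final place
    quicksort(seq, lo, i - 1)
    quicksort(seq, i + 1, hi)
    return seq

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```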

Clearly, pivot selection is fundamental to the algorithm's efficiency. Ideally, we want to select the median of all elements (the middle value) such that the two partitions are of equal size (differing by no more than 1 element). However, the time-complexity of calculating the median of an unsorted set is O(n), and that's far too much overhead. Instead, we select the median of the first, last and middle elements (median-of-three) because this can be achieved in constant-time (depending on the container type, hence quicksort is ideally suited to arrays).
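
A sketch of median-of-three selection, assuming constant-time indexing (i.e., an array-like container); the helper name is illustrative:

```python
def median_of_three(seq, lo, hi):
    """Pick the median of the first, middle and last elements in O(1) time."""
    mid = (lo + hi) // 2
    a, b, c = seq[lo], seq[mid], seq[hi]
    if a <= b <= c or c <= b <= a:
        return mid
    if b <= a <= c or c <= a <= b:
        return lo
    return hi

print(median_of_three([9, 3, 7, 1, 5], 0, 4))  # 2 (value 7, the median of 9, 7 and 5)
```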

Quicksort can be improved further with a three-way partition, where all elements in the middle partition are equal to the pivot value and therefore don't need to be sorted (they are already in the correct place relative to the other two partitions). The added complexity of creating the third partition is offset by the fact that the two remaining partitions shrink more quickly than they otherwise would, which is a significant gain when the sequence contains many duplicate values.
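
A sketch of a three-way (Dutch national flag) partition around a given pivot value; the middle band of equal elements is already in its final position:

```python
def partition3(seq, pivot):
    """Partition seq in place into < pivot, == pivot, > pivot.

    Returns the bounds of the middle (equal) band, which never
    needs to be visited again.
    """
    lo, i, hi = 0, 0, len(seq) - 1
    while i <= hi:
        if seq[i] < pivot:
            seq[lo], seq[i] = seq[i], seq[lo]
            lo += 1
            i += 1
        elif seq[i] > pivot:
            seq[i], seq[hi] = seq[hi], seq[i]
            hi -= 1
        else:
            i += 1
    return lo, hi

data = [4, 9, 4, 2, 4, 7, 1]
print(partition3(data, 4), data)  # (2, 4) [1, 2, 4, 4, 4, 7, 9]
```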

Of course, complexity is merely an indication of performance. Two sorting algorithms with exactly the same complexity may perform very differently indeed. The type of the sequence container can often be a major factor too. For example, arrays will generally outperform linked lists because linked lists do not have constant-time random access; arrays also consume less memory. However, a list stored in main memory will typically outperform an array stored in a disk file. Complexity does not take these factors into account. Moreover, time complexity is not an expression of actual time taken, because runtime performance depends on the hardware. While a single "unit of time" is not defined, the time complexity tells us roughly how many units of time we can expect the algorithm to consume, and, all else being equal, that number should be the same regardless of the hardware.

As well as time complexity, we also have to consider space complexity, as we often have to sacrifice one for the other. Space complexity relates to memory consumption and is usually of more importance when working with memory-limited hardware. Space complexity does not include the space occupied by the inputs, but rather the amount of additional space required by the algorithm relative to the number of inputs. For example, quicksort has a space complexity of O(log n) due to its recursive nature, with a worst case of O(n). By contrast, insertion sort (which is non-recursive) has a space complexity of O(1): the amount of additional memory is the same regardless of the number of inputs. However, its time complexity is O(n*n), so it is ideally suited to smaller sets.
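
A sketch of insertion sort, showing why its extra space is constant: it shifts elements within the input itself, using only a couple of working variables:

```python
def insertion_sort(seq):
    """O(n*n) time, O(1) extra space: elements are shifted within seq itself."""
    for i in range(1, len(seq)):
        value = seq[i]
        j = i - 1
        while j >= 0 and seq[j] > value:   # shift larger elements right
            seq[j + 1] = seq[j]
            j -= 1
        seq[j + 1] = value
    return seq

print(insertion_sort([6, 3, 8, 2]))  # [2, 3, 6, 8]
```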

With sorting algorithms we must also consider stability. A stable sort keeps equal elements in the same relative order they were input, while an unstable sort does not guarantee this. Stability is important when working with elements that can be sorted by more than one field (secondary fields). For example, when sorting files by name and then subsequently sorting by type, we expect files of the same type to remain in name order because that was the order of input. With a stable sort we don't have to keep track of secondary keys, but with an unstable sort we do. Unstable sorting algorithms are often quicker than stable ones, so if stability is required we may have to sacrifice some performance to get it.
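
A minimal illustration using Python's built-in sorted, which is documented to be stable: having first arranged files by name, a subsequent sort by type keeps files of the same type in name order:

```python
files = [("a.txt", "txt"), ("b.png", "png"), ("c.txt", "txt"), ("d.png", "png")]
# Already in name order; a stable sort by type keeps that order within each type.
by_type = sorted(files, key=lambda f: f[1])
print(by_type)  # [('b.png', 'png'), ('d.png', 'png'), ('a.txt', 'txt'), ('c.txt', 'txt')]
```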


Wiki User

6y ago