Q: Why is insertion sort not suitable for large volumes of data?
Best Answer

Mainly because of speed. Insertion sort shifts existing elements every time a value is inserted, so its execution time grows roughly quadratically (O(n²)) with the volume of data. There are much better sorting algorithms for large amounts of data, for example quicksort, which moves data far fewer times before the list is fully sorted. A sketch of the shifting cost is shown below.
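A minimal Java sketch of insertion sort (class and method names are illustrative, not from the original answer), showing the element shifting that dominates its cost:

```java
public class InsertionSortDemo {
    // Sorts the array in place. For each element, shift larger
    // elements one slot right until the insertion point is found --
    // up to i shifts for element i, hence O(n^2) in the worst case.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // one data movement
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}
```

Inserting element i can shift up to i earlier elements, so sorting n elements can cost about n²/2 moves in the worst case.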

Wiki User ∙ 15y ago

Related questions

Which is better, merge sort or insertion sort?

Merge sort is better for large data sets, since its worst-case running time is O(n log n), while insertion sort's is O(n²). Insertion sort is good for small or nearly sorted data sets because of its low overhead. In practice the two are often combined, as sketched below.
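A hedged Java sketch of that combination: a merge sort that hands small subarrays to insertion sort. The cutoff value and all names are illustrative, not from the original answer:

```java
import java.util.Arrays;

public class HybridSortDemo {
    static final int CUTOFF = 16; // illustrative threshold, not tuned

    // Merge sort that delegates small subarrays [lo, hi) to insertion sort.
    static void sort(int[] a, int lo, int hi, int[] buf) {
        if (hi - lo <= CUTOFF) {
            insertionSort(a, lo, hi);
            return;
        }
        int mid = (lo + hi) >>> 1;
        sort(a, lo, mid, buf);
        sort(a, mid, hi, buf);
        merge(a, lo, mid, hi, buf);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    // Standard merge of the sorted halves [lo, mid) and [mid, hi).
    static void merge(int[] a, int lo, int mid, int hi, int[] buf) {
        System.arraycopy(a, lo, buf, lo, hi - lo);
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++) {
            if (i >= mid) a[k] = buf[j++];
            else if (j >= hi) a[k] = buf[i++];
            else a[k] = (buf[j] < buf[i]) ? buf[j++] : buf[i++];
        }
    }

    public static void main(String[] args) {
        int[] a = new java.util.Random(42).ints(100, 0, 1000).toArray();
        sort(a, 0, a.length, new int[a.length]);
        System.out.println(Arrays.toString(a));
    }
}
```

Library sorts such as Timsort use the same idea: an O(n log n) divide-and-conquer skeleton with insertion sort at the small-array leaves.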


What method is most suitable for transferring large amounts of data?

Networking.


What volume of data is processed by minicomputers?

Minicomputers are able to process large amounts of data.


Why is an AVL tree considered ideal for searching but not as suitable for insertion and deletion of data nodes?

Insertion and deletion operations have a runtime cost due to the need to maintain balance: after each change, node heights must be updated and rotations may be needed along the path back to the root. The more nodes you insert or delete at a time, the more significant that cost becomes. Searching, by contrast, benefits from the strict balance, which keeps lookups at O(log n). A sketch of the rebalancing work follows below.
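A minimal, self-contained Java sketch of AVL insertion over integer keys (all names are illustrative); the height updates and rotations on every insert are the maintenance cost the answer refers to:

```java
public class AvlDemo {
    static class Node {
        int key, height = 1;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static int height(Node n)  { return n == null ? 0 : n.height; }
    static int balance(Node n) { return n == null ? 0 : height(n.left) - height(n.right); }
    static void update(Node n) { n.height = 1 + Math.max(height(n.left), height(n.right)); }

    static Node rotateRight(Node y) {
        Node x = y.left;
        y.left = x.right;
        x.right = y;
        update(y); update(x);
        return x;
    }

    static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        update(x); update(y);
        return y;
    }

    // Every insert walks back up the path, updating heights and checking
    // balance; this bookkeeping is the extra cost a plain BST does not pay.
    static Node insert(Node n, int key) {
        if (n == null) return new Node(key);
        if (key < n.key) n.left = insert(n.left, key);
        else if (key > n.key) n.right = insert(n.right, key);
        else return n; // ignore duplicates
        update(n);
        int b = balance(n);
        if (b > 1 && key < n.left.key)   return rotateRight(n);                  // left-left
        if (b < -1 && key > n.right.key) return rotateLeft(n);                   // right-right
        if (b > 1)  { n.left = rotateLeft(n.left);   return rotateRight(n); }    // left-right
        if (b < -1) { n.right = rotateRight(n.right); return rotateLeft(n); }    // right-left
        return n;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int k : new int[]{10, 20, 30, 40, 50, 25}) root = insert(root, k);
        System.out.println("root after rebalancing: " + root.key); // prints 30
    }
}
```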


Why are keyboards not suitable for inputting large amounts of data?

Keyboards are not used for inputting large amounts of data because typing is too time-consuming and mistakes are common. A better alternative when inputting large amounts of data would be a microphone (voice input).


What are the Examples of data processing systems?

Commercial data processing "involves a large volume of input data, relatively few computational operations, and a large volume of output." Accounting programs are the prototypical examples of data processing applications. Information systems (IS) is the field that studies such organizational computer systems.


What is insertion in a data structure?

Insertion is the operation of adding a new element to a data structure, for example placing a value at a given position in an array or linked list, or adding a node to a tree. Depending on the structure, existing elements may have to be shifted or links updated to make room, as sketched below.
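A minimal Java sketch of insertion into the middle of an array (names and values are illustrative); the tail of the array is shifted one slot right to make room:

```java
import java.util.Arrays;

public class InsertDemo {
    // Insert value into a[0..size) at index pos, shifting the tail
    // one slot right. Assumes the array has spare capacity.
    static int insert(int[] a, int size, int pos, int value) {
        System.arraycopy(a, pos, a, pos + 1, size - pos); // shift the tail
        a[pos] = value;
        return size + 1;
    }

    public static void main(String[] args) {
        int[] a = new int[6];
        a[0] = 1; a[1] = 2; a[2] = 4; a[3] = 5;
        int size = insert(a, 4, 2, 3); // insert 3 between 2 and 4
        System.out.println(Arrays.toString(Arrays.copyOf(a, size))); // [1, 2, 3, 4, 5]
    }
}
```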


How do you insert data in the middle of a file in Java?

One way to do it would be as follows:
* Read the entire file into a String variable
* Write the data before the insertion point
* Write the data to be inserted
* Write the data after the insertion point

The following would probably be more efficient:
* Read the part of the file after the insertion point into a variable
* Seek back to the insertion point and write the data to be inserted
* Write the saved data after it

Some classes may have methods that automate this from the programmer's point of view, but if you want to INSERT something, the overhead of reading the data after the insertion point and writing it back again is unavoidable. This assumes you use a text file; when working with a database, there are other, usually more efficient, options. A sketch of the second approach is shown below.
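A hedged Java sketch of the second approach using RandomAccessFile (the file name, offset, and method name are illustrative assumptions, not from the original answer):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class FileInsertDemo {
    // Insert text at byte offset pos: save the tail, seek back,
    // write the new data, then rewrite the tail after it.
    static void insertAt(String path, long pos, String text) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            byte[] tail = new byte[(int) (raf.length() - pos)];
            raf.seek(pos);
            raf.readFully(tail);          // save everything after the insertion point
            raf.seek(pos);
            raf.write(text.getBytes(StandardCharsets.UTF_8)); // write the new data
            raf.write(tail);              // rewrite the saved tail
        }
    }

    public static void main(String[] args) throws IOException {
        // Assumes "demo.txt" exists and 5 is a valid byte offset within it.
        insertAt("demo.txt", 5, "INSERTED ");
    }
}
```

Note that the offset is a byte position; with multi-byte encodings such as UTF-8, it must not fall in the middle of a character.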


What is data collected on large populations and stored in databases referred to as?

Data collected on large populations and stored in databases is referred to as big data. This type of data is typically characterized by its volume, velocity, and variety, and requires specialized tools and techniques to analyze and derive insights from.


Is volume a kind of quantitative data?

Yes, volume is a kind of quantitative data.


The advantages and disadvantages of primary data?

The advantages and disadvantages of primary data are:

Advantages:
1. Data is basic
2. Unbiased information
3. Original data
4. Data from the primary market/population
5. Data comes directly from the population

Disadvantages:
1. Large volume of data
2. Huge volume of population
3. Time-consuming
4. Direct and personal intervention has to be there
5. Raw data


Why data integrity is important in DBMS?

Data integrity is crucial in a DBMS because it ensures the accuracy, consistency, and reliability of the data stored in the database. The DBMS maintains it by enforcing defined rules and constraints that prevent unauthorized or inconsistent modifications. Data integrity is essential for making informed decisions, ensuring data quality, and maintaining the overall trustworthiness of the database; a small example of such constraints follows below.
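A small Java/JDBC sketch of declarative integrity rules, assuming an H2 in-memory database driver is on the classpath (the table and column names are illustrative): the DBMS itself rejects a row that violates a declared constraint:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class IntegrityDemo {
    public static void main(String[] args) throws SQLException {
        // In-memory database; assumes the H2 JDBC driver is available.
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement s = c.createStatement()) {
            // Declarative integrity rules enforced by the DBMS itself.
            s.execute("CREATE TABLE account (" +
                      "id INT PRIMARY KEY, " +                          // entity integrity
                      "balance DECIMAL NOT NULL CHECK (balance >= 0))"); // domain rule
            s.execute("INSERT INTO account VALUES (1, 100)");
            try {
                s.execute("INSERT INTO account VALUES (2, -50)"); // violates CHECK
            } catch (SQLException e) {
                System.out.println("Rejected by the DBMS: " + e.getMessage());
            }
        }
    }
}
```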