Q: Why was 8 bits chosen to equal 1 byte?
Best Answer

Many early computers and communication links transferred data 8 bits at a time, so it was natural to organize data in groups of 8 bits. Such a group came to be called a byte.


It's essentially arbitrary; however, 8 bits was initially a good fit, since it could easily accommodate all letters, upper and lower case, plus all punctuation and numerals. The byte is no longer a hard boundary, though: many protocols use individual bits, groups of 4 bits, or really any number, and modern processors also operate on 32-bit and 64-bit "words".
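A quick way to see why 256 values are "a good fit" is to count the characters a text encoding actually needs. A minimal Python sketch, assuming the standard ASCII printable repertoire:

    import string

    # 8 bits can represent 2**8 distinct values.
    values_per_byte = 2 ** 8  # 256

    # Characters a basic text encoding must cover:
    # 26 lowercase + 26 uppercase + 10 digits + 32 punctuation marks.
    needed = len(string.ascii_lowercase + string.ascii_uppercase
                 + string.digits + string.punctuation)

    print(values_per_byte)  # 256
    print(needed)           # 94 -- fits easily, with room left for control codes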

Wiki User · 14y ago
More answers
Wiki User · 16y ago

A byte is eight bits partly because popular processors of the 1960s and 1970s used 8-bit registers, which allowed them to manipulate 8 bits at a time. This usage of the term byte eventually became the standard through its sheer ubiquity.

Wiki User · 15y ago

1 byte = 8 binary bits, which can represent any value from 0 to 255: binary 00000000 = 0, binary 10101010 = 170, and binary 11111111 = 255, which is hex FF. In digital electronics a 1 is a high level (for example, 5 volts) and a 0 is a low level (0 volts).
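Those conversions are easy to verify. A minimal Python sketch checking the values quoted above:

    # Verify the binary/decimal/hex correspondences above.
    assert 0b00000000 == 0
    assert 0b10101010 == 170
    assert 0b11111111 == 255 == 0xFF

    # Format a value back into binary and hex strings.
    print(format(170, '08b'))  # 10101010
    print(format(170, '02X'))  # AA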

Wiki User · 12y ago

The main reason bits are grouped together is to represent characters. Because of the binary nature of all things computing, ideal 'clumps' of bits come in powers of 2, i.e. 1, 2, 4, 8, 16, 32... (basically because they can always be divided into smaller, equal packages; it also creates shortcuts for storing sizes, but that's another story). Four bits (a 'nybble' in some circles) can give us 2^4, or 16, unique characters. As most alphabets are larger than that, 2^8 (256 characters) is a more suitable choice, as the sketch below illustrates.
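A minimal Python sketch of that counting argument (the bit widths chosen are just illustrative):

    # The number of distinct characters n bits can encode is 2**n.
    for n in (1, 2, 4, 8, 16):
        print(n, 'bits ->', 2 ** n, 'codes')

    # A nybble (4 bits) offers only 16 codes, too few for a 26-letter
    # alphabet; a byte (8 bits) offers 256, enough for upper and lower
    # case, digits, punctuation, and control characters.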

Originally (until the mid-1950s) the term byte was used for a string of bits of any length, usually one transmission, that is, one (command) sequence or similar. The origin of the word is unclear, but it is thought to date from around that time, when Werner Buchholz used the word 'bite' (derived from, but distinct from, 'bit') to describe a bit string that could encode a character to be transmitted between peripherals. To avoid spelling confusion this eventually became 'byte' (hence 'nybble' for half a byte).

I have also seen references to BYTE being an acronym, most commonly Binary Yoked Transfer Element (see acronymfinder.com). There is a vague mention on Dictionary.com of another possible origin from an IBM acronym but I suspect that bite may have changed to byte for a little bit of both reasons.

Machines have existed that used other byte lengths (particularly 7 or 9 bits). These have not really survived, largely because they are not as easy to manipulate: you can't split an odd number of bits in half, so if you divided such bytes you would have to keep track of the length of each bit string.
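By contrast, an 8-bit byte splits cleanly into two 4-bit halves. A minimal Python sketch (the names are illustrative) dividing a byte into its nybbles with a shift and a mask:

    value = 0b10101111           # one 8-bit byte (175 decimal)

    high_nybble = value >> 4     # top 4 bits:    0b1010 = 10
    low_nybble = value & 0x0F    # bottom 4 bits: 0b1111 = 15

    # Reassemble to confirm the split loses nothing.
    assert (high_nybble << 4) | low_nybble == value
    print(high_nybble, low_nybble)  # 10 15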

Finally, 8 is also a convenient number: many people (psychologists and the like) claim that the human mind can generally recall only 7-8 things at once (without playing memory tricks).

Wiki User · 9y ago

Historically, a byte was the number of bits used to encode a single character of text in a computer. Its size was hardware dependent, and no definitive standard mandated a particular size.

In 1963, ASCII, a 7-bit code, was standardized; it was later adopted as a Federal Information Processing Standard, making 6-bit character codes commercially obsolete.

In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines, which used the 8-bit µ-law encoding. IBM at the same time extended its 6-bit "BCD" code to an 8-bit character code (EBCDIC). 8-bit "octets" were then adopted as the basic data unit of the early Internet.

Since then, 1 byte = 8 bits has been used as the standard.
