Why is a byte equal to 8 bits?

Updated: 10/24/2022

Wiki User

10y ago

Best Answer

IBM coined the word "byte" on the Stretch project. It was a variable-length data element, originally defined as 1 to 16 bits long. Most computers of the time used 6-bit characters, which allowed only uppercase letters. The byte had to hold binary bits (1 bit), decimal digits for arithmetic (4 bits), the Baudot characters used by teletypes of the time (5 bits), standard characters (6 bits), extended character sets with both uppercase and lowercase letters as well as foreign-language characters if needed (up to 16 bits), and other sizes for special applications. The variable-length feature of the byte was therefore seen as essential for orthogonality in the instruction set across all data elements smaller than a full 64-bit binary word.

Unfortunately, as the project proceeded and the architecture and instruction set were being finalized, it was realized that the instruction words that processed byte data did not have enough bits for a length field covering the full range of 1 to 16 bits. The decision was made to use a smaller length field and reduce the range to 1 to 8 bits. This was still adequate for everything from binary bits to decimal digits to extended characters, and the lost range did not bother potential customers. Bytes could span the boundaries between 64-bit words (i.e. start in one word and finish in the next).

The finished machine, the IBM 7030 Stretch, was delivered in 1961. However, it had performance problems, and IBM discontinued the model once the original 9 orders had been filled, without taking any more orders.

When IBM began work on what was to become the System/360 family of compatible computers, they decided to keep the concept of the "byte" but concluded that making it variable length, as it was in the 7030, added too much circuit complexity and cost, which would be unacceptable in the lower-end models of the 360. IBM decided to use the maximum size of the 7030 byte (8 bits) as the fixed size in the System/360, and all bytes had to be aligned on byte boundaries within the full 32-bit binary word to further simplify the addressing circuits.

The resulting family of computers was announced in 1964, with initial models beginning delivery in 1965.

Other computer manufacturers soon started copying IBM and began using the byte in their machines, and the 8-bit byte became a de facto industry standard.

Continue Learning about General History

How many bits make 1.44 MB?

1.44 MB was a popular size for 3.5" floppy disks.

Technically, 1.44 megabytes = 1.44 × 1000 × 1000 = 1,440,000 bytes. However, the "1.44 MB" disk actually held 1.44 × 1024 × 1000 = 1,474,560 bytes, using a very non-standard mixed unit. This is equal to about 1.47 MB or 1.41 MiB. Since there are 8 bits per byte, this is 11,796,480 bits.
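As a quick check of that arithmetic, here is a minimal Python sketch (the variable names are just illustrative; 1.44 is written as 144/100 to keep the arithmetic exact):

```python
decimal_bytes = 144 * 1000 * 1000 // 100   # 1,440,000 bytes (strict decimal megabytes)
floppy_bytes  = 144 * 1024 * 1000 // 100   # 1,474,560 bytes (the mixed unit actually used)
floppy_bits   = floppy_bytes * 8           # 11,796,480 bits, at 8 bits per byte

print(decimal_bytes, floppy_bytes, floppy_bits)
```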


In computer terms what is a nibble?

In computer terms a nibble = 4 bits = 1/2 byte. You can further define the data segments as:
Crumb = 2 bits
Nibble = 4 bits
Byte = 8 bits
Word = 16 bits
Double word = 32 bits
The jury is still out on 64 bits and "sentence". In keeping with the spelling of "byte", the lick and nibble are sometimes spelled "lyck" and "nybble".

-------------------

A nibble is half a byte, but believe it or not, a byte does not necessarily have to have eight bits. I don't know of any computer platform that uses anything but 8-bit bytes these days, but in computer science terms a byte is generally the smallest addressable element, and the size needed to store a character. You also might be surprised to know that not all machines use ASCII characters (a 7-bit code normally stored in 8-bit bytes). IBM mainframes still use EBCDIC under their traditional operating systems, which stands for Extended Binary Coded Decimal Interchange Code and accounted for the lion's share of data until a few decades ago. It is an extended version of BCD, which uses 4 bits to express a decimal digit, and there is no technical reason that a BCD-based machine couldn't have 4-bit bytes. It is unlikely that you will ever encounter a computer that doesn't use 8-bit bytes, but you may encounter people who studied computer science back in the 1970s.

Back in the "old" days (the 1960s), when computers didn't have operating systems or high-level programming languages, you always dealt with the byte. On some machines the byte was 8 bits, and on others it was 8 bits plus a parity bit, for 9 bits. There was "even" parity and "odd" parity: with even parity the parity bit is set so that the total number of 1 bits (data plus parity) is even, and with odd parity it is set so that the total is odd.

The "word" was originally set to be the size of a register (everything was done through a set of registers). The registers were used to assemble the current instruction that you wanted the computer to execute: what kind of action (move a byte, add, subtract, etc.), the length of your data (which determined how many cycles the computer had to go through to execute your instruction), and where the data was coming from and going to. The "word" length was pegged to the length of the register; treating the computer like a book, each register held one word. On early byte-oriented machines a word was 8 bits. When 16-bit registers were implemented, words became 16 bits, then 32 bits, and now 64 bits; some computers today even have 128-bit words. So a "word" is the length of the registers in whatever computer you are using. It is also the biggest chunk of bits that the computer can process at one time.

The word "nibble" was invented to specify the high-order 4 bits of a byte or the low-order 4 bits of a byte (like eating a nibble from a cookie instead of the whole cookie). Since a decimal digit can be specified in 4 bits, you only needed a "nibble" to store a digit. So, if you had a field that was all numbers, you could write it out in "nibbles", using half the space you would have used if it was stored in bytes. Back in those days, space counted: the first "mainframe" computers had 4K of memory (no, that really is 4K), so you didn't have any space to waste if you were doing something like payroll or inventory management.

In some cases, individual bits within bytes are used to store flags (yes or no for a given attribute) and, in at least one IBM manual, these were referred to as tidbits. IBM was not known for a sense of humor, but the term never became a generally accepted abbreviation.
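As a concrete illustration of the nibble and parity ideas above, here is a minimal Python sketch, assuming plain 8-bit bytes (the function names are just illustrative):

```python
def high_nibble(b: int) -> int:
    """Return the high-order 4 bits of an 8-bit value."""
    return (b >> 4) & 0x0F

def low_nibble(b: int) -> int:
    """Return the low-order 4 bits of an 8-bit value."""
    return b & 0x0F

def even_parity_bit(b: int) -> int:
    """Parity bit chosen so the total number of 1 bits (data + parity) is even."""
    ones = bin(b & 0xFF).count("1")
    return ones % 2               # 1 if the data already has an odd number of 1 bits

value = 0b10110101                # example byte
print(high_nibble(value))         # 0b1011 -> 11
print(low_nibble(value))          # 0b0101 -> 5
print(even_parity_bit(value))     # data has five 1 bits, so the parity bit is 1
```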


How many megabytes equal 1 gigabyte?

1024 megabytes = 1 gigabyte
1024 kilobytes = 1 megabyte
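Using those binary (1024-based) prefixes, a minimal Python sketch of the conversions (names are just illustrative):

```python
KILOBYTE = 1024              # bytes, using the 1024-based convention above
MEGABYTE = 1024 * KILOBYTE   # 1,048,576 bytes
GIGABYTE = 1024 * MEGABYTE   # 1,073,741,824 bytes

print(GIGABYTE // MEGABYTE)  # 1024 megabytes in a gigabyte
print(MEGABYTE // KILOBYTE)  # 1024 kilobytes in a megabyte
```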


When was the 8 bit CPU invented?

Hard to say.

The Colossus (1944), built to break German High Command teleprinter ciphers, processed 8 bits of the code stream at a time, but only 5 of the 8 were encrypted character data. Many 1950s business computers operated on characters instead of words, but these were usually 6-bit characters, not 8-bit characters. The IBM 7030 Stretch (1961) defined a byte as a variable length of 1 to 8 bits. The IBM System/360 (1964) defined a byte as 8 bits; the 360 was also the first computer to address memory at the byte/character level, regardless of the size or type of data being accessed. The 360/20 and 360/30 implementations used microcoded 8-bit processors to interpret the System/360 instruction set, but other versions of the 360 used processors with larger word sizes (both microcoded and hardwired-logic implementations). During the minicomputer era CPUs were frequently either 12-bit (e.g. DEC PDP-8) or 16-bit (e.g. DEC PDP-11, Data General Nova), though there may have been some 8-bit CPUs in this period. Intel's 8008 (1972) was the first 8-bit microprocessor CPU. The microcoded 8-bit CPU of the IBM System 360/30, introduced in 1964, was probably the first 8-bit CPU ever delivered to customers.


What is byte addressability?

Early computers were placed in 2 categories, allowing them to be optimized to their users' needs:

Scientific: these computers had large fixed word sizes (e.g. 24 bits, 36 bits, 40 bits, 48 bits, 60 bits) and their memory could generally only be addressed to the word; no smaller-sized entity could be addressed.

Business: these computers addressed memory by characters (e.g. 6 bits); if they supported the concept of words at all, the machine usually had a variable word length that the programmer could specify in some way according to the needs of the program. Their memory was addressed to the character.

This was true for both first- and second-generation computers, but in the third generation computer manufacturers decided to unify the 2 categories to reduce the number of different architectures they had to support. IBM, with the introduction of the System/360 in 1964, introduced the concept of the byte (8 bits) as an independently addressable part of a larger fixed word (32 bits). Other computer manufacturers soon followed this practice too.
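To illustrate the idea of byte addressability within a fixed word, here is a minimal Python sketch, assuming a 32-bit word laid out big-endian as 4 bytes and ASCII characters for the example data (the names are just illustrative):

```python
def byte_at(word: int, byte_index: int) -> int:
    """Return the byte at byte_index (0 = most significant) from a 32-bit word."""
    assert 0 <= byte_index <= 3
    shift = (3 - byte_index) * 8      # big-endian: byte 0 is the high-order byte
    return (word >> shift) & 0xFF

word = 0x41424344                     # the ASCII characters 'A' 'B' 'C' 'D' packed in one word
print([chr(byte_at(word, i)) for i in range(4)])  # ['A', 'B', 'C', 'D']
```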