Q: How many different ASCII characters can a byte represent?

Best Answer

A normal byte consists of 8 bits, each of which is a 0 or a 1, so its contents can be anything from 00000000 to 11111111. These are just binary numbers: binary 00000000 is decimal 0, and binary 11111111 is decimal 255. One byte can therefore hold any of 256 different values (0 to 255), which means it can represent 256 different ASCII characters. If you don't understand binary, please look at http://en.wikipedia.org/wiki/Binary_numeral_system
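
As a quick check, here is a minimal Python sketch (an illustration, not part of the original answer) confirming the 256 distinct values an 8-bit byte can take:

```python
# A byte has 8 bits, so it can take 2**8 = 256 distinct values.
print(2 ** 8)                    # 256

# Those values run from binary 00000000 to 11111111, i.e. 0 to 255.
print(0b00000000, 0b11111111)    # 0 255

# Each value from 0 to 255 fits in exactly one byte.
print(bytes([0, 65, 255]))       # b'\x00A\xff'
```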

More answers:
A byte represents eight bits (b) of data; a bit is a single binary digit, a 1 or a 0, so a byte is a line of eight 1's and 0's (i.e., binary).

The reason for this is that early computers could only send eight bits of data at a time. Now, operating systems are built on 64-bit architectures.

In short, a byte is a unit of data made up of eight bits. The next most common unit is the kilobyte, which is composed of 1024 bytes (not to be confused with the kilobit, Kb).

256 (one for each of the values 0 through 255)


1 (a byte stores just one character at a time, though that character can be any of 256 different ones)

Related questions

Is a byte a vector or a scalar quantity?

A byte is 8 bits; it can represent any of the ASCII characters, or any value between 0 and 255. The vector/scalar distinction does not really apply: a byte just represents a value, so by itself it behaves as a scalar.


How is ASCII used to represent text in a computer system?

ASCII is a simple (and increasingly obsolete) code which maps characters to numbers: 0 to 127 in standard ASCII, or 0 to 255 in the extended variants. Thus, any phrase expressed in these characters can be written as a series of bytes with the corresponding numeric values, one byte per character. For example, the letter A is represented by a byte with the decimal value 65. A defining characteristic of the ASCII code is that it supports a limited alphabet of at most 256 different characters. While this might seem plenty given that only 26 characters cover the A-Z alphabet, codes must be assigned to lower-case and upper-case letters, digits, punctuation marks, a range of simple symbols, and assorted 'foreign' characters. With today's demand for localized software supporting local alphabets, the ASCII code is increasingly obsolete because it cannot accommodate the great number of non-English alphabets.
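
To illustrate the mapping (a short Python sketch, not part of the original answer), the built-in ord() and chr() functions convert between characters and their ASCII codes:

```python
# One character maps to one numeric code, and back again.
print(ord("A"))    # 65 - the decimal value of the byte for 'A'
print(chr(65))     # 'A'

# A phrase becomes a series of bytes, one byte per character.
print(list("Hi!".encode("ascii")))    # [72, 105, 33]
```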


Why do we need Unicode when we have ASCII?

ASCII has only 128 standard character codes (0 to 127) and only supports the English alphabet. While you can use the extended ASCII characters to provide a set of 256 characters and thus support other languages, there is no guarantee that other systems will use the same code page, so the characters will not display correctly across all systems (what you see depends on which code page is currently in use). Moreover, some languages, particularly Chinese, have thousands of symbols that simply cannot be encoded in ASCII. Unicode supports all languages, and its first 128 code points are the same as ASCII, so those characters appear the same across all systems. UTF-8 is the most common Unicode encoding in use today because it uses one byte per character for the first 128 characters and is therefore fully compatible with non-extended ASCII. If the most significant bit of a byte is set, the character is represented by 2 or more bytes, and the combination of those bytes maps to a Unicode code point.
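
The following Python sketch (illustrative, not part of the original answer) shows that ASCII characters pass through UTF-8 as single bytes, while other characters become multi-byte sequences whose bytes all have the most significant bit set:

```python
# An ASCII character is a single byte in UTF-8, identical to plain ASCII.
print("A".encode("utf-8"))           # b'A' (one byte, value 65)

# A non-ASCII character becomes a multi-byte sequence.
euro = "€".encode("utf-8")
print(euro)                          # b'\xe2\x82\xac' (three bytes)

# Every byte in the sequence has its most significant bit set.
print(all(b & 0x80 for b in euro))   # True
```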


Define byte offset?

A byte offset, typically used to index into a string or file, is a zero-based count of bytes. For example, in the string "this is a test", the byte offset of "this" is 0, of "is" is 5, of "a" is 8, and of "test" is 10. Note that this is not always the same as the character offset: some characters, such as Chinese ideograms, require two or more bytes to represent. Using only ASCII characters ensures that the byte offset is always equal to the character offset.
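
A Python sketch (illustrative, not part of the original answer) reproduces the offsets above and shows the byte and character offsets diverging once a multi-byte character appears:

```python
s = "this is a test"
offset = 0
for word in s.split(" "):
    print(word, offset)       # this 0, is 5, a 8, test 10
    offset += len(word) + 1   # advance past the word and the space

# With a multi-byte character in front, the two offsets diverge.
s2 = "中 test"                              # '中' is 3 bytes in UTF-8
print(s2.find("test"))                      # 2 - character offset
print(s2.encode("utf-8").find(b"test"))     # 4 - byte offset
```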


What does the computer term bytes mean?

Eight bits form a byte, which is enough to store one ASCII character, for example. Other language encodings need more bytes per character, e.g., those for Asian languages. A single bit is of course a 0 or a 1, i.e., a digit in a base-2 system. Hence 8 bits, or one byte, can represent 2 to the power of 8 = 256 combinations.


Is a byte a single character?

In ASCII, EBCDIC, FIELDATA, etc., yes. However, Unicode characters can be composed of multiple bytes.


How many bits are in an extended ASCII byte?

An extended ASCII byte (like all bytes) contains 8 bits, or binary digits.


How many characters are in a kilobyte?

A kilobyte contains 1024 bytes. If the characters come from the standard ASCII character set, where 1 character is 1 byte, then a kilobyte holds 1024 characters.
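
A quick check in Python (illustrative):

```python
text = "x" * 1024                   # 1024 ASCII characters
print(len(text.encode("ascii")))    # 1024 bytes, i.e. exactly one kilobyte
```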


How many bytes are used to represent one character?

It depends on the encoding. UTF-8, the most commonly used standard for representing text, uses a varying number of bytes per character. The Latin alphabet, digits, and commonly used characters such as (but not limited to) <, >, -, /, \, $, !, %, @, &, ^, (, ), and * each take 1 byte. Accented characters and many other language scripts take 2 or 3 bytes, and the maximum a character can use is 4 bytes (RFC 3629 restricts UTF-8 to four bytes per character).
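
The per-character byte counts are easy to verify in Python (an illustrative sketch, not part of the original answer):

```python
# UTF-8 uses 1 to 4 bytes per character, depending on the code point.
for ch in ("A", "é", "中", "😀"):
    print(ch, len(ch.encode("utf-8")))    # 1, 2, 3 and 4 bytes respectively
```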


How many bytes does the letter S take up?

The letter S takes up 1 byte of memory in ASCII (and in UTF-8), as do all the other ASCII characters.


How do you convert a char to an integer in Visual Basic 2010?

A char is already an integer, so no conversion is required: a character is simply a number that maps to a glyph in the current code page. Standard ASCII characters are 1 byte long with values in the range 0 to 127, while extended ASCII characters occupy the range 128 to 255. Wide (UTF-16) characters are two bytes long and cover the range 0 to 65,535, where 0 to 127 map to the standard ASCII character set. UTF-8 characters are variable width (1 to 4 bytes), where 0 to 127 are single-byte characters mapping to the standard ASCII set. If you do need an explicit conversion in Visual Basic, the built-in Asc/AscW functions return a character's code, and Chr/ChrW perform the reverse mapping.


What are the system requirements for the language C?

1. Byte-organised memory (every byte of the memory has to be accessible).
2. Support for all ASCII characters (codes 0-127).