Best Answer

The ASCII character set has exactly 128 characters, so only 7 bits are required to represent each character as an integer in the range 0 to 127 (0x00 to 0x7F). If additional bits are available (most systems use at least an 8-bit byte), all the high-order bits must be zeroed.

ANSI encodings are similar to ASCII but use 8-bit rather than 7-bit encodings. If bit 7 (the high-order bit of an 8-bit byte) is clear (0), the byte typically represents one of the 128 standard ASCII character codes (0-127). If it is set (1), the byte represents a character from an extended character set (128-255). To ensure correct interpretation, most ANSI code pages are standardised to include the standard ASCII character set; however, the extended characters depend upon which code page was active during encoding, and the same code page must be used during decoding. ANSI typically caters for US/UK English characters (using ASCII) along with foreign-language support, mostly European (Spanish, German, French, Italian). Languages that require more characters than ANSI alone can provide must use a multi-byte Unicode encoding, such as fixed-width UTF-32 or variable-width UTF-8 and UTF-16. These encodings all assign the first 128 characters (the standard ASCII character set) the same code points as ASCII, and in UTF-8 those characters keep their single-byte ASCII representation, with the high-order bit zeroed.
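As a rough illustration (a minimal C sketch, not any standard API; the classify helper is made up for this example), the high-order bit of each byte tells you whether it falls in the plain 7-bit ASCII range or in the extended, code-page- or UTF-8-dependent range:

```c
#include <stdio.h>

/* Classify each byte of a string: plain 7-bit ASCII (high bit clear),
 * or "extended" (high bit set), whose meaning depends on the active
 * code page or on UTF-8 sequencing. */
static void classify(const unsigned char *s)
{
    for (; *s; s++) {
        if (*s < 0x80)
            printf("0x%02X  ASCII '%c'\n", (unsigned)*s, *s);
        else
            printf("0x%02X  non-ASCII (code page or UTF-8 dependent)\n", (unsigned)*s);
    }
}

int main(void)
{
    /* "Ae" with an e-acute, encoded as UTF-8: 'A' = 0x41, e-acute = 0xC3 0xA9 */
    const unsigned char text[] = { 0x41, 0xC3, 0xA9, 0x00 };
    classify(text);
    return 0;
}
```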

More answers

ASCII defines (rather than requires) 128 character codes: 95 printable characters and 33 control characters.


Standard ASCII uses 7 bits, but a full byte (8 bits) is usually used to store each character.


One byte (8 bits, to be precise), although only 7 of those bits are actually needed.

Q: How many bits are used to encode an ASCII character?
Continue Learning about Engineering

What is the difference between ascii and ebcdic?

Due to the advancement of technology and the rise of Unicode, the importance of ASCII and EBCDIC has all but ebbed. Both were important in the history of character encoding: ASCII used 7 bits to encode characters before being extended, whereas EBCDIC used 8 bits. EBCDIC, being an 8-bit code, has room for more code points, but ASCII orders its letters in a single contiguous, linear block, which EBCDIC does not. There are different versions of ASCII and, despite this, most are compatible with one another; EBCDIC, being controlled exclusively by IBM, never became the basis of modern encoding schemes such as Unicode.
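For example (a small C sketch; the EBCDIC values cited in the comments come from the standard EBCDIC code chart, not from this answer), ASCII letter arithmetic works because the alphabet is contiguous, which is not true of EBCDIC:

```c
#include <stdio.h>

int main(void)
{
    /* In ASCII the uppercase letters occupy one contiguous block (65..90),
     * so 'B' - 'A' == 1 and 'Z' - 'A' == 25 always hold. */
    printf("ASCII: 'A'=%d, 'B'-'A'=%d, 'Z'-'A'=%d\n", 'A', 'B' - 'A', 'Z' - 'A');

    /* EBCDIC splits the alphabet into three blocks with gaps:
     * A-I = 0xC1-0xC9, J-R = 0xD1-0xD9, S-Z = 0xE2-0xE9,
     * so on an EBCDIC machine 'J' - 'I' is 8, not 1. */
    printf("EBCDIC gap between I (0xC9) and J (0xD1): %d\n", 0xD1 - 0xC9);
    return 0;
}
```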


Which encoder creates ASCII?

ASCII (American Standard Code for Information Interchange) is a character-encoding scheme that was first standardised in 1963. No encoder is required to create ASCII; every machine supports it as standard, although many now implement it via Unicode. The only difference is the number of bytes used to represent each character: the default is one byte per character, yielding 128 standard codes that map exactly onto the first 128 characters of Unicode.


How is the getchar function used in a C program?

getchar() is used in the C programming language to read a single character from standard input (i.e., the user's keyboard). It returns the character's code, which on ASCII-based systems is its ASCII value, as an int, or EOF when there is no more input.
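A minimal sketch of typical usage:

```c
#include <stdio.h>

int main(void)
{
    int c;  /* int, not char, so that EOF can be distinguished from real characters */

    /* Echo each character typed along with its character code. */
    while ((c = getchar()) != EOF) {
        if (c != '\n')
            printf("'%c' has code %d\n", c, c);
    }
    return 0;
}
```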


What is the purpose of ASCII?

The American Standard Code for Information Interchange was created to standardize 128 numeric codes that represent the English letters, numbers, and symbols, plus a set of control codes. Any US keyboard is made with this standard in mind.


What is a function used to convert ASCII to integer?

atoi
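For example (a sketch; my_atoi is a hypothetical helper for non-negative decimal strings, shown only to illustrate the underlying ASCII arithmetic):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hand-rolled equivalent for non-negative decimal strings: each digit
 * character is converted to its value by subtracting the code for '0'. */
static int my_atoi(const char *s)
{
    int n = 0;
    while (*s >= '0' && *s <= '9')
        n = n * 10 + (*s++ - '0');
    return n;
}

int main(void)
{
    printf("%d\n", atoi("1234"));     /* standard library: prints 1234 */
    printf("%d\n", my_atoi("1234"));  /* hand-rolled:      prints 1234 */
    return 0;
}
```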

Related questions

How is ASCII used in the computer?

ASCII is a form of character encoding, so your computer can use it for any task that involves storing or exchanging text.


Where is ASCII used?

ASCII is used to determine which character to display when a keyboard key is pressed or a character code is entered.


What is character code used by most personal computers?

ASCII (American Standard Code for Information Interchange)


How many bits does a unicode character require?

It depends on the encoding. UTF-8 uses one byte (8 bits) to encode English (ASCII) characters, two or three bytes to encode most other commonly used characters, and four bytes (32 bits) to encode the remaining characters. UTF-16 uses two or four bytes per character, and UTF-32 always uses four bytes (32 bits).
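As a sketch (assuming UTF-8; utf8_len is a made-up helper, not a library function), the number of bytes a code point needs follows directly from its value:

```c
#include <stdio.h>

/* Number of bytes UTF-8 needs to encode a given Unicode code point. */
static int utf8_len(unsigned int cp)
{
    if (cp < 0x80)    return 1;  /* ASCII range                             */
    if (cp < 0x800)   return 2;  /* e.g. Latin supplements, Greek, Cyrillic */
    if (cp < 0x10000) return 3;  /* most CJK and the rest of the BMP        */
    return 4;                    /* supplementary planes, e.g. emoji        */
}

int main(void)
{
    printf("U+0041 (A)        -> %d byte(s) in UTF-8\n", utf8_len(0x0041));
    printf("U+00E9 (e-acute)  -> %d byte(s) in UTF-8\n", utf8_len(0x00E9));
    printf("U+4E2D (CJK char) -> %d byte(s) in UTF-8\n", utf8_len(0x4E2D));
    printf("U+1F600 (emoji)   -> %d byte(s) in UTF-8\n", utf8_len(0x1F600));
    return 0;
}
```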



How many bytes are allocated to one ASCII character?

It depends on which of several coding standards you use. ANSI and ASCII use one byte to define a character, as do BCDIC and EBCDIC. Multi-byte character sets typically have a special character that is used to indicate that the following character comes from a different character set than the base one. If the character u-umlaut cannot be represented in the standard set of characters, for instance, you could use two characters: one to say the following character is special, and then the special u-umlaut character itself. Such a scheme requires somewhere between one and two bytes to encode a character. The Unicode system is intended to support all possible characters, including Hebrew, Russian/Cyrillic, Greek, Arabic, and Chinese. As you can imagine, in order to support all these different characters, you need a lot of bits. The initial fixed-width encoding, UCS-2, used two bytes per character, but this proved insufficient, so UTF-16 (two or four bytes per character), UTF-32 (four bytes) and the variable-width UTF-8 (one to four bytes) are now used instead.
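A quick way to see the difference in C (a sketch assuming UTF-8 rather than the shift-character scheme described above; the u-umlaut is written as explicit byte escapes so the example does not depend on the source file's encoding):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Plain ASCII text: one byte per character. */
    printf("\"ASCII\" occupies %zu bytes for 5 characters\n", strlen("ASCII"));

    /* A single u-umlaut stored as UTF-8 needs two bytes (0xC3 0xBC). */
    printf("u-umlaut as UTF-8 occupies %zu bytes for 1 character\n",
           strlen("\xC3\xBC"));
    return 0;
}
```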


Except ASCII and EBCDIC, are there other popular coding schemes used for Internet applications?

ASCII is very common; EBCDIC is hardly so. However, ASCII has been almost completely replaced by Unicode, which is by far the most common encoding scheme anywhere. Unicode comes with several encoding forms (UTF-8, UTF-16, UTF-32, etc.). UTF-8 is an 8-bit extension of the 7-bit ASCII coding scheme and allows the encoding of any character available in Unicode. UTF-16 and UTF-32 are used primarily for text in languages whose characters almost always require more than one byte anyway.
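To make the relationship concrete, here is a minimal UTF-8 encoder sketch (utf8_encode is written for this example and performs no validation of invalid code points); note that code points below 128 come out as a single, unchanged ASCII byte:

```c
#include <stdio.h>

/* Encode one Unicode code point as UTF-8; returns the number of bytes written. */
static int utf8_encode(unsigned int cp, unsigned char out[4])
{
    if (cp < 0x80) {                       /* 7-bit ASCII: one unchanged byte */
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    }
    if (cp < 0x10000) {
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
    out[0] = (unsigned char)(0xF0 | (cp >> 18));
    out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
    out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
    out[3] = (unsigned char)(0x80 | (cp & 0x3F));
    return 4;
}

int main(void)
{
    unsigned int samples[] = { 0x41, 0xE9, 0x4E2D, 0x1F600 };
    unsigned char buf[4];

    for (int i = 0; i < 4; i++) {
        int n = utf8_encode(samples[i], buf);
        printf("U+%04X ->", samples[i]);
        for (int j = 0; j < n; j++)
            printf(" %02X", (unsigned)buf[j]);
        printf("\n");
    }
    return 0;
}
```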


New line inside the command text?

The newline character is used to mark the end of each line in Unix/Linux. Usually the character is written as '\n', which equates to 0x0A on ASCII-based systems.
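For instance, in C:

```c
#include <stdio.h>

int main(void)
{
    /* On ASCII-based systems the newline character '\n' has code 10 (0x0A). */
    printf("'\\n' = %d (0x%02X)\n", '\n', '\n');
    return 0;
}
```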


Does a character take up one byte of storage?

An ASCII character requires one byte of storage. A Unicode character requires between one and four bytes of storage, depending on the encoding format used.



What are ASCII-8 Codes?

ASCII is the American Standard Code for Information Interchange. ASCII codes are the 8-bit patterns of 1s and 0s (binary numbers) that represent text in computers and other devices that use text. For instance, the letters 'A' and 'a' are represented by:

A = decimal 65, hex 0x41, octal 0101
a = decimal 97, hex 0x61, octal 0141

And the digits:

0 = decimal 48, hex 0x30, octal 060
9 = decimal 57, hex 0x39, octal 071

This means that if you subtract the ASCII character '0' (decimal 48) from the ASCII character '9' (decimal 57) you get the number 9. This is how keyboard keystrokes, which are captured as text (ASCII), are converted into numbers in the computer when you need to do sums (e.g. in a spreadsheet).

There are other encodings that may also be used, such as Extended Binary Coded Decimal Interchange Code (EBCDIC), but ASCII is the most common.

As a point of interest, with Roman character sets (which have roughly 128 characters) it is possible to encode most of the characters needed using just 8 bits (there is therefore room for 256 characters). However, as the use of computers spread, it became necessary to provide for scripts such as kanji, which have many more characters to encode (the Yiti Zidian dictionary published in 2004 contains 100,000 or more individual characters), so the character sets used by computers and programming languages had to be extended (by grouping bit patterns) to cope. The result is that you can type kanji using a computer with a Chinese/Japanese keyboard. (Although how anyone can learn/remember 100,000 different characters is beyond my comprehension!)
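A short C sketch that prints these codes and demonstrates the digit-subtraction trick:

```c
#include <stdio.h>

int main(void)
{
    /* The same character codes in decimal, hex and octal. */
    printf("'A' = %d, hex 0x%X, octal 0%o\n", 'A', 'A', 'A');
    printf("'a' = %d, hex 0x%X, octal 0%o\n", 'a', 'a', 'a');
    printf("'0' = %d, hex 0x%X, octal 0%o\n", '0', '0', '0');
    printf("'9' = %d, hex 0x%X, octal 0%o\n", '9', '9', '9');

    /* Subtracting the code for '0' from a digit character gives its value. */
    printf("'9' - '0' = %d\n", '9' - '0');
    return 0;
}
```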


What are the advantages and disadvantages of EBCDIC?

EBCDIC (Extended Binary-Coded Decimal Interchange Code) is a character encoding system used by IBM mainframe computers. It is a binary code used to represent character data, descended from the earlier punched-card BCD codes rather than from ASCII. EBCDIC is used primarily on IBM mainframe computers, and its variants are used on IBM midrange computers. EBCDIC has some advantages over plain 7-bit ASCII. First, it is a full 8-bit code, so it can represent up to 256 character codes. Second, that extra room allows more characters to be represented, including accented characters, through its various national code pages. Third, it is natively supported by IBM systems and the large body of software written for them. However, EBCDIC also has a number of disadvantages. First, it is not as widely used as ASCII, so there is less software available that can work with it. Second, its letters are not arranged in one contiguous block, which makes sorting and converting data between EBCDIC and ASCII more awkward. Finally, EBCDIC is an IBM-defined code, developed and controlled by IBM, so other computer manufacturers have had little incentive to adopt it.