That depends on what encoding is used. One common (fairly old) encoding is ASCII, which uses one byte for each character (letter, symbol, space, etc.). Some systems use two bytes per character. Many modern systems use Unicode: if the text is stored as UTF-8 - a very common encoding scheme - the common (ASCII) characters still use a single byte, while special symbols (for example, accented characters) take two or more bytes; if it is stored as UTF-16, most characters take two bytes each. The number of bits is simply the number of bytes multiplied by 8.
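A minimal Java sketch of that idea (the class name and sample strings are my own, chosen only for illustration): the same text produces a different byte count depending on which encoding you ask for.

    import java.nio.charset.StandardCharsets;

    public class EncodingSizes {
        public static void main(String[] args) {
            String plain = "hello";    // ASCII-only text
            String accented = "héllo"; // contains one accented character

            // One byte per character in ASCII, and in UTF-8 for ASCII-only text
            System.out.println(plain.getBytes(StandardCharsets.US_ASCII).length);    // 5
            System.out.println(plain.getBytes(StandardCharsets.UTF_8).length);       // 5

            // UTF-8 spends two bytes on the accented 'é'
            System.out.println(accented.getBytes(StandardCharsets.UTF_8).length);    // 6

            // UTF-16 spends two bytes on every one of these characters
            System.out.println(accented.getBytes(StandardCharsets.UTF_16BE).length); // 10
        }
    }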
A character in ASCII requires only one byte, and a character in Unicode's UTF-16 encoding requires at least two bytes.
ASCII = 7 bits; UTF-8 = 8-bit code units (1 to 4 bytes per character); UTF-16 = 16-bit code units (2 or 4 bytes per character).
In Java, a char consumes two bytes because it holds a UTF-16 (Unicode) code unit rather than an ASCII byte, and an int takes four bytes because it is a 32-bit number.
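Java exposes these sizes as constants; here is a small sketch (the BYTES constants require Java 8 or later):

    public class PrimitiveSizes {
        public static void main(String[] args) {
            System.out.println(Character.SIZE);  // 16 bits
            System.out.println(Character.BYTES); // 2 bytes
            System.out.println(Integer.SIZE);    // 32 bits
            System.out.println(Integer.BYTES);   // 4 bytes
        }
    }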
Depends on what you refer to as Unicode. Typically the encoding you will see is UTF-8, which uses one to four bytes per character (the two-, three-, and four-byte sequences cover characters that are not in the ASCII range, such as accented letters and the scripts of various other languages). Otherwise, by convention "Unicode" usually means UTF-16.
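To make the variable-length point concrete, here is a hedged Java sketch (class name and the particular sample characters are my own; the source file is assumed to be saved as UTF-8):

    import java.nio.charset.StandardCharsets;

    public class Utf8Lengths {
        public static void main(String[] args) {
            // UTF-8 is variable-length: 1 to 4 bytes per character
            System.out.println("A".getBytes(StandardCharsets.UTF_8).length);  // 1 (basic Latin)
            System.out.println("é".getBytes(StandardCharsets.UTF_8).length);  // 2 (accented Latin)
            System.out.println("€".getBytes(StandardCharsets.UTF_8).length);  // 3 (euro sign)
            System.out.println("😀".getBytes(StandardCharsets.UTF_8).length); // 4 (emoji, outside the BMP)
        }
    }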
In ASCII, EBCDIC, FIELDATA, etc., yes. However, Unicode characters may be composed of multiple bytes.
That depends on your situation. If you have a Unicode-encoded file that you wish to read, you can try to open it with a Unicode-enabled editor, such as SC Unipad (http://www.unipad.org/main/).
The number of bytes used by a character varies from language to language. Java uses a 16-bit (two-byte) character so that it can represent many non-Latin characters in the Unicode character set.
UTF-8 uses only one byte (8 bits) to encode English (ASCII) characters, UTF-16 uses two bytes (16 bits) to encode the most commonly used characters, and UTF-32 uses four bytes (32 bits) to encode every character.
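A small Java sketch of those three sizes for an English character (class name is my own; UTF-32 support is present in standard JDK builds but is not one of the guaranteed StandardCharsets constants):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EnglishCharSizes {
        public static void main(String[] args) {
            String a = "A";
            System.out.println(a.getBytes(StandardCharsets.UTF_8).length);      // 1 byte
            System.out.println(a.getBytes(StandardCharsets.UTF_16BE).length);   // 2 bytes
            System.out.println(a.getBytes(Charset.forName("UTF-32BE")).length); // 4 bytes
        }
    }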
Depending on which convention you use (1 KB = 1,024 bytes or 1 KB = 1,000 bytes), 24 KB contains either 24,576 bytes (24 × 1,024) or 24,000 bytes (24 × 1,000).
Different languages use different sizes for their types, for different reasons. In this case, the difference is between ASCII and Unicode. Java uses two bytes to store a character (a UTF-16 code unit) so as to allow a wider variety of characters in strings, whereas C, at least by default, uses only one byte per character.
No. Unicode includes (or has the capability to include) every language on Earth, including English.