Octal (base 8) and hexadecimal (base 16) are simply shorthand notations for binary sequences. We can actually use any base that is a power of 2 (including base 4, base 32, base 64, and so on) as a shorthand for binary, but we use octal and hexadecimal because they are fairly easy to work with.
If we look at base 4 first, we can better understand the relationship between binary and all other bases that are a power of 2. Base 4 only uses 4 symbols, 0, 1, 2 and 3. In binary these would be represented by 00, 01, 10 and 11 respectively. Thus a single base 4 digit can be used in place of every two binary digits, essentially halving the length of any binary sequence.
Base 8 (octal) uses 8 symbols, 0, 1, 2, 3, 4, 5, 6 and 7. In binary that is 000, 001, 010, 011, 100, 101, 110 and 111. Thus a single octal digit can be used in place of every 3 binary digits, reducing the length of a binary sequence by two-thirds.
Similarly, base 16 (hexadecimal) uses 16 digits, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e and f, and each digit can represent 4 binary digits. By the same token a single base 32 digit represents 5 binary digits, while a single base 64 digit represents 6 binary digits, and so on.
Since octal notation splits a binary sequence into groups of three bits, it is ideal when the number of bits is a multiple of 3, such as in a 24-bit system. That is, a 24-bit sequence (24 binary digits) can be reduced to an octal sequence of just 8 digits.
And since hexadecimal notation splits a binary sequence into groups of 4 bits, it is ideal when the number of bits is a multiple of 4, such as in a 32-bit system. Again, a 32-bit sequence (32 binary digits) can be reduced to a hexadecimal sequence of just 8 digits.
We predominantly use hexadecimal as a shorthand notation for binary numbers because a byte is typically 8 bits long, thus we only need two hexadecimal digits to represent all possible values in a byte. A single hexadecimal digit therefore represents half a byte, which we affectionately call a nybble.
You may well ask why we don't just use decimal notation as a shorthand for binary sequences and save us humans all the hassle of translating altogether. After all, if computers can be programmed to translate hexadecimal and octal notation into their native binary, then surely they can also be programmed to do the same for decimal. In actual fact they can and do. But when presented with a long sequence of binary such as:
01111101010011010110010011110100
it's much easier to split the sequence into groups of 4 digits and assign a single hex digit to each than it is to work out what the decimal equivalent would be. In this case the binary number splits as follows:
0111 1101 0100 1101 0110 0100 1111 0100
Then we can assign each group its hexadecimal equivalent:
7 D 4 D 6 4 F 4
Thus the hexadecimal value is 7D4D64F4.
In decimal we would have to write the value 2,102,224,116 instead. While this is only 2 digits longer than the hexadecimal (if we remove the comma separators), it takes a lot longer to work it out (I cheated and used a calculator). Hexadecimal is not only a much simpler conversion (you can easily do it in your head), it also works in reverse to reveal the original binary value. Remember that we don't really care what the decimal value is -- all we're really interested in is the binary value and how we can notate it as quickly as possible using as few symbols as possible. And counting up to 16 is really not much more difficult than counting up to 10.
Although it can take a bit of practice getting used to hexadecimal notation, it quickly becomes second nature, and you'll soon be counting in all sorts of bases besides base 10. You already do, in fact: the minutes and seconds on a clock are intrinsically sexagesimal, so you've been marking time in base 60 all this time (pun intended) without even realising it. The reason it is base 60 is that 60 is the lowest number evenly divisible by 2, 3, 4, 5 and 6. The ancient Sumerians knew this 5,000 years ago. It's also the reason why circles have exactly 360 degrees.
Bytes can be written using hexadecimal, octal or decimal notation. A numeral with no prefix is always regarded as decimal. If prefixed with a leading zero it is deemed octal, and if prefixed with 0x it is deemed hexadecimal. The following shows the three ways to write the decimal value 255:

    255   (decimal)
    0377  (octal)
    0xff  (hexadecimal)

Hexadecimal is generally the most convenient notation since each hexadecimal digit represents exactly 4 bits (a half byte, or nybble). An octal digit represents exactly 3 bits and is useful for notating values in groups of 3 bits. 24 is a multiple of both 3 and 4, so a 24-bit integer can be notated using 8 octal digits or 6 hexadecimal digits.

Individual bytes are best stored using the uint8_t alias (defined in the <cstdint> standard library header) as this guarantees an 8-bit byte in the positive range 0 to 255 decimal. To store several contiguous bytes, use a vector of uint8_t:

    std::vector<uint8_t> bytes;
    bytes.push_back (255);
    bytes.push_back (0377);
    bytes.push_back (0xff);

The above example pushes the value 255 onto the back of the vector three times, using decimal, octal and hexadecimal notation. You can also write contiguous bytes in multiples of 2, 4 and 8 bytes using the uint16_t, uint32_t and uint64_t aliases respectively. Thus if you need a 64-bit value, use the uint64_t alias:

    uint64_t word = 0xffffffffffffffff; // maximum value
All bases that are themselves a power of two (2, 4, 8, 16, 32, etc) are useful when notating binary values because the conversion is so trivial. Normally we use base 16 (hexadecimal) because we usually work with 4-bit nybbles (half-bytes). However, sometimes we want to work with 3-bit groupings, such as when working with 9-bit bytes, or perhaps 21-bit words. For this we use base 8 (octal) notation because any 3-bit grouping can be represented by just one octal digit:

    Octal = Binary
    0 = 000
    1 = 001
    2 = 010
    3 = 011
    4 = 100
    5 = 101
    6 = 110
    7 = 111

Octal notation varies from one programming language to another. Some append the letter 'o' while others prefix a leading zero, so the 9-bit binary value 111100001 could be notated as 741o or 0741 depending on the language. By contrast, hexadecimal notation for the same value would be 0x1E1 or 1E1h. You might ask why we don't just use hexadecimal notation for all binary values. We certainly can, but when we want to make it clear that we're specifically dealing with 3-bit groupings rather than 4-bit groupings, it is best to use octal. In this case the hexadecimal notation implies a 12-bit value, which could lead to confusion in our code. Whenever possible, we should strive to express our ideas directly in code.
It is possible. The 0x prefix means hexadecimal, so 0x33 = 3*16 + 3 = 51.
No - octal numbers use only the digits 0-7.
Any base that is itself a power of 2 is easily converted to and from binary. With base 4, each digit represents 2 bits. With base 8 (octal), each digit represents 3 bits. And with base 16 (hexadecimal), each digit represents 4 bits. Thus two hexadecimal digits represent an 8-bit binary value. This is convenient because we typically refer to a unit of computer memory as an 8-bit byte, thus every byte value can be represented using just 2 hex digits. If we had a system with a 9-bit byte we'd use 3 octal digits instead. A 24-bit value can be represented using either 6 hex digits or 8 octal digits.

To convert a hexadecimal value to binary, we simply consult the following table (note that 0x is the conventional prefix for a hexadecimal value):

    hex = binary
    0x0 = 0000
    0x1 = 0001
    0x2 = 0010
    0x3 = 0011
    0x4 = 0100
    0x5 = 0101
    0x6 = 0110
    0x7 = 0111
    0x8 = 1000
    0x9 = 1001
    0xA = 1010
    0xB = 1011
    0xC = 1100
    0xD = 1101
    0xE = 1110
    0xF = 1111

Here, hexadecimal digit 0xF has the binary value 1111, thus 0xFF would be 11111111. Note that the bit patterns are in the same order as the hexadecimal digits: 0x0F becomes 00001111 and 0xF0 becomes 11110000. Knowing this, we can easily convert binary values into hexadecimal; we simply divide the binary value into groups of 4 bits and convert each group to the corresponding hex digit. Thus 101101001100 becomes B4C (1011=B, 0100=4 and 1100=C). If there aren't enough bits, we simply pad the first group with leading zeroes.

We can use a similar technique to convert between octal and binary; we simply divide the bits into groups of 3:

    octal = binary
    00 = 000
    01 = 001
    02 = 010
    03 = 011
    04 = 100
    05 = 101
    06 = 110
    07 = 111

Note that a leading 0 is the conventional prefix for octal values. Thus the binary value 100010 would be written 042 in octal to avoid confusion with 42 decimal.
Most assemblers support binary, decimal, hexadecimal and octal notations.
On computers.
The first use of the term hexadecimal dates to 1954. It is unclear who invented the current hexadecimal notation; most likely IBM. Hexadecimal was not universally adopted until the end of the 1970s or later; Hewlett-Packard continued to use octal rather than hexadecimal until after 1980.
If you mean, for example, divide one hexadecimal number by another: In any number base, you can use basically the same method you use with decimal numbers - in the case of division, the "long division". However, you have to use the corresponding multiplication table, for example, the multiplication table for multiplying two hexadecimal digits, with a hexadecimal result.
Inside the computer everything operates with the notion of on or off. With this in mind, the use of binary is self-explanatory, as it consists of only 1s and 0s. Hexadecimal, like octal, also works very well: as 8 and 16 are both powers of 2, it is easy to use these bases as forms of shorthand for binary.
Hexadecimal and octal systems are used primarily in computing and programming because they provide a more compact representation of binary data. Hexadecimal (base 16) simplifies the representation of binary values, allowing four binary digits to be represented by a single hexadecimal digit, making it easier for humans to read and understand. Octal (base 8), while less common today, was traditionally used in computing due to its straightforward conversion from binary, grouping bits into sets of three. Both systems help streamline coding, debugging, and memory addressing processes.
Because the octal number system is more compact to write and clearer to read. Also, we have only been using the binary system since the invention of computers, which was not that long ago. Before that, there was no reason to use a binary system, which again is not easy to read.
A memory dump in raw binary would contain vast numbers of 0s and 1s, and working with these directly would be very difficult. Hence the hexadecimal and octal number systems are used, because they are interconvertible with binary simply by grouping the bits.
Hexadecimal counts to base 16 and octal counts to base 8, and in computers the patterns of 1s and 0s are grouped into bits (1), bytes (11111111) and words (1111111111111111). Thus, to be able to express a complete pattern for a byte or a word, it is useful to count in base 8 or base 16.
Computer engineers used to program computers using hexadecimal code, or base 16. Hexadecimal numbers use the digits 0 through 9, plus the letters A through F to represent the values 10 through 15.
Use %o
Because two hexadecimal digits are enough to notate any 8-bit value, including any alphanumeric character code, whereas the octal system needs three digits for the same byte. Note that this is purely a notational convenience for humans; the choice of notation does not change how much memory the value actually occupies.