Best Answer

First off, most computers use IEEE 754, not excess-64 format. IBM System/360 computers use the base-16 excess-64 ("hex float") format; if you're running a standard desktop computer, chances are it uses IEEE 754.

In base-16 excess 64 - for single-precision numbers, like floats - we use 4 bytes, or 32 bits. The first bit is the sign bit, followed by a 7-bit exponent, followed by the 24-bit mantissa (or significand). I assume you know how to convert numbers into binary. Let's convert 2.25 into excess 64.

1) Get the sign bit.

2.25 is positive so it's 0.

2) Write the number in binary.

2.25 = 10.01B

3) Move the radix (the decimal point) in powers of 16 (4 bits - nibbles) until the number is less than 1 and at least 1/16 (.0001B) - that is, until the first hex digit after the radix is nonzero. In other words, we want the radix as close to the leading 1 as possible (moving in whole nibbles), but the number has to stay less than 1. When the number is stored like this it is said to be normalized.*

10.01B = .001001B * 16^1

4) Remove the decimal and add zeros to the end of the number until there are 24 bits. This is the mantissa.

0010 0100 0000 0000 0000 0000

5) Since this is excess 64 (the bias is 64), we add the exponent from step 3 to 64 and convert that to binary, which will be the exponent. In this case, the exponent of 16 was 1.

64 + 1 = 65 = 1000001B

6) Now we put it all together. Sign bit, then exponent, then mantissa.

0 1000001 001001000000000000000000

s e m

or in hex:

0x41 0x24 0x00 0x00

*Step 3 is a bit hard to follow, so here are more examples. If the number is .03125 (.00001B) we would move the radix right one nibble, so it would become .1B * 16^-1. If the number was 37.5 (100101.1B) it would become .001001011B * 16^2.

IEEE 754

As I mentioned in the beginning, real numbers are usually stored in IEEE 754, not excess 64. In IEEE 754 - for single precision, like floats - 1 bit is used for the sign, 8 bits for the exponent, and the last 23 bits for the mantissa. Also, in IEEE 754 the number is said to be normalized when it is at least 1 and less than 2, the bias is 127 (as opposed to 64 in excess 64), and the base is 2. Let's convert 17.125.

1) Find the sign bit.

17.125 is positive so the sign bit is 0.

2) Write the number in binary.

17.125 = 10001.001B

3) Move the radix (the decimal point) in powers of 2 (1 bit) until the number is at least 1 and less than 2. When the number is stored like this it is said to be normalized.*

10001.001 = 1.0001001 * 2^4

4) The number is normalized (it is between 1 and 2), so its leading bit is always 1; that leading 1 is implied rather than stored and is dropped. Only the fraction portion is retained. Remove the radix point and add zeros to the right until there are 23 bits. This is the mantissa.

0001 0010 0000 0000 0000 000

5) Add the exponent from step 3 (2^4, so the exponent is 4) to the bias, 127, and convert it to binary. This is the exponent.

127 + 4 = 131 = 10000011B

6) Now just put it all together. Sign bit, then exponent, then mantissa.

0 10000011 00010010000000000000000

s e m

or in hex:

0x41 0x89 0x00 0x00

*Step 3 is hard to follow, so here are two more examples of normalizing numbers:

12.5 = 1100.1B = 1.1001B * 2^3

.125 = .001B = 1B * 2^-3

Something to remember is that on little-endian machines (such as x86) numbers are stored low-order byte first, so if you have a look at your memory you will see the bytes in reverse. Here's a C++ snippet that walks the bytes from high order to low order to show that it works:

#include <iostream>
#include <iomanip>

typedef unsigned char BYTE;

int main()
{
    float fl = 17.125f;
    BYTE* pFl = (BYTE*)&fl;

    // print the bytes high-order first
    for (int i = sizeof(float) - 1; i >= 0; i--)
        std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
                  << (unsigned)pFl[i] << ' ';
    std::cout << std::endl;
}

outputs 0x41 0x89 0x00 0x00

Wiki User

15y ago
Q: How does Excess 64 and IEEE 754 floating point binary work?

Related questions

What is a float value?

A value of float or floating-point type represents a real number coded in a form of scientific notation. Depending on the computer it may be a binary-coded form of scientific notation or a binary-coded decimal (BCD) form. There are countless ways of coding floating point, but most computers today have standardized on the IEEE floating-point specifications (e.g. IEEE 754, IEEE 854, ISO/IEC/IEEE 60559).


How do you express 5 in IEEE 32-bit floating-point format?

01000000101000000000000000000000


How do you represent floating point number in microprocessor?

It is somewhat complicated (search for the IEEE floating-point representation for more details), but the basic idea is that you have some bits for the significand (mantissa) and some bits for the exponent. The numbers are stored in binary, not in decimal, so the significand and the exponent are the numbers "a" and "b" in a * 2^b.


What are the three parts of a floating-point number?

Assuming you're asking about IEEE-754 floating-point numbers, the three parts are the sign, the exponent, and the mantissa (significand).


How are floating point numbers handled as binary numbers?

Floating point numbers are typically stored as numbers in scientific notation, but in base 2. A certain number of bits represent the mantissa, other bits represent the exponent. - This is a highly simplified explanation; there are several complications in the IEEE floating point format (or other similar formats).


What is IEEE standard in c plus plus?

IEEE 754 specifies the format of floating-point numbers. C++ uses it on most platforms, though the language standard does not require it.


What is the utility of floating point representation of numbers?

A floating point number is, in normal mathematical terms, a real number. It's of the form: 1.0, 64.369, -55.5555555, and so forth. It basically means that the number can have a number of digits after a decimal point.


How many decimal digits can be obtained for precision from the IEEE Standard 32 bit floating point representation?

Firstly, IEEE is not a standard, it is an organisation (the Institute of Electrical and Electronics Engineers). The IEEE Standards Association is responsible for the standardisation activities of the IEEE; as such, there are many IEEE standards.

There are two official IEEE standards covering 32-bit binary floating-point values: IEEE 754-1985 (single) and IEEE 754-2008 (binary32).

In the IEEE 754-2008 single-precision binary floating-point format (binary32):

The high-order bit denotes the sign (0 for positive, 1 for negative).

The next 8 bits denote the exponent, stored in 127-biased form (0 to 255).

The low-order 23 bits denote the normalised mantissa. There are actually 24 bits in the mantissa, but the high-order bit is always 1 and can therefore be implied rather than stored.

The decimal precision that can be obtained from an IEEE 754-2008 (binary32) value is usually in the order of 6 to 9 significant digits, depending on the value.


What is the mantissa of a floating point number?

The mantissa - also known as the significand or coefficient - is the part of a floating-point number that contains its significant digits. In the common IEEE 754 floating-point standard, the mantissa is 53 bits of a 64-bit value (double) and 24 bits of a 32-bit value (single); in each case one of those bits is an implied leading 1 that is not actually stored.


What has the author Arunkumar V Rajanala written?

Arunkumar V. Rajanala has written: 'IEEE 754 single precision standard compatible floating point processor implemented using silicon compiler technology' -- subject(s): Floating-point arithmetic, Microprocessors


Discuss the different formats of floating point numbers?

You can read some details in the Wikipedia article "floating point", especially the "History" section. It isn't worthwhile to copy large amounts of this text here. Nowadays, the most commonly used format is the IEEE 754 format.


How many bits are used in double precision floating point format number representation?

Depends on the format. IEEE double-precision floating point is 64 bits. But all sorts of other sizes have been used:

IBM 7094 double-precision floating point was 72 bits.

CDC 6600 double-precision floating point was 120 bits.

Sperry UNIVAC 1110 double-precision floating point was 72 bits.

The DEC VAX had about half a dozen different floating-point formats varying from 32 bits to 128 bits.

The IBM 1620 had floating-point sizes from 4 decimal digits to 102 decimal digits (yes, digits, not bits).