To determine the range of an integer data type you first need to know the size of that type, in bytes. This is implementation-dependent, so never assume an int is always 4 bytes; on a 64-bit system it could be 4 or 8.

To accurately determine the size of any data type, always use the sizeof operator. Hard-coded sizes are all but guaranteed to cause problems when porting programs between compilers and architectures.
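For example, you can print the sizes your own compiler uses; the output of this quick sketch will vary by platform:

#include <iostream>

int main()
{
    // Sizes are implementation-dependent; these lines simply report
    // whatever the current compiler and architecture use.
    std::cout << "char:      " << sizeof (char)      << " byte(s)\n";
    std::cout << "short:     " << sizeof (short)     << " byte(s)\n";
    std::cout << "int:       " << sizeof (int)       << " byte(s)\n";
    std::cout << "long:      " << sizeof (long)      << " byte(s)\n";
    std::cout << "long long: " << sizeof (long long) << " byte(s)\n";
    return 0;
}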

Once you have the size, determining the range is fairly easy. The following program contains two function overloads that determine the range of an int and an unsigned int using bitwise shift operations. The main function tests both overloads, showing the ranges in both decimal and hexadecimal (example output is shown below).

#include <iostream>
#include <iomanip>
#include <climits>   // CHAR_BIT
#include <cstddef>   // std::size_t

// Computes the range of a signed int; returns its size in bytes.
std::size_t GetRange (const int& i, int& iMin, int& iMax)
{
    int sizeInBits = sizeof (i) * CHAR_BIT;

    // The minimum has only the sign bit set. The shift is performed in
    // unsigned arithmetic because shifting a 1 into the sign bit of a
    // signed int is undefined behaviour before C++20.
    iMin = static_cast<int>(1u << --sizeInBits);

    // The maximum has every value bit set.
    iMax = 1;
    while (--sizeInBits)
    {
        iMax <<= 1;
        iMax |= 1;
    }

    return sizeof (i);
}

// Computes the range of an unsigned int; returns its size in bytes.
std::size_t GetRange (const unsigned int& u, unsigned int& uMin, unsigned int& uMax)
{
    unsigned int sizeInBits = sizeof (u) * CHAR_BIT;

    uMin = 0;  // the minimum of any unsigned type is always zero

    // The maximum has every bit set.
    uMax = 1;
    while (--sizeInBits)
    {
        uMax <<= 1;
        uMax |= 1;
    }

    return sizeof (u);
}

int main()
{
    using namespace std;

    const int w = 17, z = 12;   // column widths for the table
    int x, iMin, iMax;          // x and y are never read;
    unsigned int y, uMin, uMax; // only their types matter

    size_t iSize = GetRange (x, iMin, iMax);
    size_t uSize = GetRange (y, uMin, uMax);

    cout << setw(z) << "Type"         << setw(w) << "int (x)" << setw(w) << "unsigned int (y)" << endl;
    cout << setw(z) << "Size (bytes)" << setw(w) << iSize     << setw(w) << uSize << endl;
    cout << setbase(10);
    cout << setw(z) << "Min (dec)"    << setw(w) << iMin      << setw(w) << uMin << endl;
    cout << setw(z) << "Max (dec)"    << setw(w) << iMax      << setw(w) << uMax << endl;
    cout << setbase(16);
    cout << setw(z) << "Min (hex)"    << setw(w) << iMin      << setw(w) << uMin << endl;
    cout << setw(z) << "Max (hex)"    << setw(w) << iMax      << setw(w) << uMax << endl;
    cout << endl << endl;

    return 0;
}

Output:

        Type          int (x) unsigned int (y)
Size (bytes)                4                 4
   Min (dec)      -2147483648                 0
   Max (dec)       2147483647        4294967295
   Min (hex)         80000000                 0
   Max (hex)         7fffffff          ffffffff
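As an aside, if you only need the range itself rather than a demonstration of the bit-level arithmetic, the standard library already provides it through std::numeric_limits in the <limits> header. A minimal sketch:

#include <iostream>
#include <limits>

int main()
{
    // numeric_limits reports the same range without any manual shifting.
    std::cout << "int min:          " << std::numeric_limits<int>::min() << '\n';
    std::cout << "int max:          " << std::numeric_limits<int>::max() << '\n';
    std::cout << "unsigned int max: "
              << std::numeric_limits<unsigned int>::max() << '\n';
    return 0;
}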

Note the way signed integers are represented in hexadecimal. It may look odd, but on two's complement systems the binary representation of -1 in 8 bits is 11111111 (FFh), not 10000001 (81h) as sign-magnitude notation might lead you to expect. It makes more sense when you realise that 11111111+00000001=00000000 (the carry out of the top bit is simply discarded), which in decimal is -1+1=0.
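You can see this for yourself by printing the bit patterns directly; a small sketch using std::bitset, casting through unsigned char to expose the underlying bits:

#include <bitset>
#include <iostream>

int main()
{
    // On a two's complement system, -1 prints as all ones.
    for (int i : { -1, 0, 1 })
    {
        std::cout << std::bitset<8>(static_cast<unsigned char>(i))
                  << " = " << i << '\n';
    }
    return 0;
}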
