An integer is known as an "int". It is defined to hold at least the range -32767 to 32767 (signed) or 0 to 65535 (unsigned). This is only the minimum range, however, and ints are commonly larger. On 32-bit platforms an int is usually 32 bits wide (-2147483648 to 2147483647 signed), and on 64-bit platforms it may be 32 or 64 bits wide, but this is compiler-dependent. A developer should only assume that an int can hold a value within the specified minimum range, unless the code checks first to see what the actual ranges are.
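A minimal sketch of how code might check the actual ranges on the current platform, using the standard limits.h macros (the printed values vary by compiler and hardware):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The actual limits for this particular compiler/platform */
    printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: 0 .. %u\n", UINT_MAX);
    return 0;
}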
1500
2
Signed integer means that it has a sign (+ or -). In other words, a signed variable can be positive as well as negative; an unsigned variable can only be zero or positive.
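A small sketch for illustration (the variable names are just examples):

#include <stdio.h>

int main(void)
{
    signed int temperature = -10;  /* a signed int can hold negative values */
    unsigned int count = 10;       /* an unsigned int holds only 0 and positive values */
    printf("%d %u\n", temperature, count);
    return 0;
}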
A 32-bit integer.
It depends on the context. Each database and computer language defines "integer" differently. In the C language the size of an integer is determined by the compiler and the underlying hardware; it can vary from 2 to 8 bytes or more.
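A small sketch showing how to see the size on a given platform (the result is platform-dependent):

#include <stdio.h>

int main(void)
{
    /* sizeof reports the width of int, in bytes, for this compiler/platform */
    printf("int is %zu bytes\n", sizeof(int));
    return 0;
}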
To specify the return type of the function.
To define a value of integer type.
By the range of values you wish to represent.
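As a hedged illustration of choosing a type by range, the fixed-width types from stdint.h make that choice explicit (the variable names below are only examples):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int8_t   small_delta = -5;           /* only needs -128..127 */
    int32_t  file_offset = 1000000;      /* needs roughly +/- 2.1 billion */
    uint64_t byte_count  = 5000000000u;  /* needs values past 4 billion */
    printf("%d %" PRId32 " %" PRIu64 "\n", small_delta, file_offset, byte_count);
    return 0;
}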
Use %o
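Assuming the goal is to print an integer in octal with printf, a minimal sketch:

#include <stdio.h>

int main(void)
{
    unsigned int value = 64;
    printf("%o\n", value);  /* prints 100, the octal form of 64 */
    return 0;
}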
const unsigned int and const signed int.
It is easy to tell with the function printf:

int unknown_value;
...
printf ("unknown value is %d\n", unknown_value);

Note: the typical value range for type int is -32768..32767 (16 bits) or -2147483648..2147483647 (32 bits).
Use an enum if you are using a C-style language, or a map data structure. Assign each integer an English value and then match it to what the user inputs, as in the sketch below.
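A rough sketch in C of the enum approach just described (the names, the word list, and the input handling are only illustrative):

#include <stdio.h>
#include <string.h>

enum number { ZERO, ONE, TWO, THREE };

/* Map a word typed by the user to its integer value; -1 means no match. */
static int word_to_number(const char *word)
{
    const char *names[] = { "zero", "one", "two", "three" };
    int count = sizeof(names) / sizeof(names[0]);
    for (int i = 0; i < count; i++) {
        if (strcmp(word, names[i]) == 0)
            return i;  /* index matches the enum constants ZERO..THREE */
    }
    return -1;
}

int main(void)
{
    char buf[32];
    if (scanf("%31s", buf) == 1)
        printf("%s -> %d\n", buf, word_to_number(buf));
    return 0;
}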