An integer is known as an "int". The C standard only guarantees that it can hold values ranging from -32767 to 32767 (signed) or 0 to 65535 (unsigned). This is only the minimum range, however, and ints are commonly larger. On 32-bit platforms an int is usually 32 bits wide (roughly -2^31 to 2^31-1), and on 64-bit platforms it may be 64 bits wide, but this is compiler-dependent. A developer should only assume that an int is capable of holding the guaranteed minimum range, unless the code first checks what the actual limits are.
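For example, the actual limits on a given platform can be read from <limits.h>; a minimal sketch:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* The real limits for this compiler/platform come from <limits.h>. */
        printf("int is %zu bytes\n", sizeof(int));
        printf("INT_MIN  = %d\n", INT_MIN);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }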
1500
2
A signed integer has a sign (+ or -). In other words, a signed variable can be positive as well as negative; unsigned variables can only be positive (or zero).
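A quick illustration:

    #include <stdio.h>

    int main(void) {
        int      s = -5;  /* signed: may hold negative values   */
        unsigned u = 5u;  /* unsigned: non-negative values only */
        printf("%d %u\n", s, u);  /* prints: -5 5 */
        return 0;
    }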
A 32-bit integer.
To specify the return type of the function.
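For instance (the function name add is just illustrative):

    /* The 'int' before the name says this function returns an int. */
    int add(int a, int b) {
        return a + b;
    }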
To define a variable that holds an integer value.
By the range of values you wish to represent.
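For example, if the range is known in advance, the fixed-width types from <stdint.h> (C99) make the choice explicit; a sketch:

    #include <stdint.h>

    int8_t   small  = 100;         /* -128 to 127          */
    int16_t  medium = 30000;       /* -32768 to 32767      */
    int32_t  large  = 2000000000;  /* about +/-2.1 billion */
    uint64_t huge   = 18000000000000000000ULL;  /* 0 to about 1.8e19 */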
It depends on the context: each database and computer language defines "integer" in its own way. In the C language the size of an integer is implementation-defined, ultimately by the hardware; it commonly varies from 2 to 8 bytes.
Use the %o conversion specifier with printf.
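Assuming the question is about printing an integer in octal:

    #include <stdio.h>

    int main(void) {
        int n = 64;
        printf("%o\n", n);  /* prints 100 (64 decimal = 100 octal) */
        return 0;
    }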
const unsigned int and const signed int.
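For example (the names are just illustrative):

    const signed int   min_temp = -40;   /* constant that may be negative   */
    const unsigned int max_size = 100u;  /* constant that is never negative */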
The size of an integer can be determined with the sizeof operator, as in sizeof(c), where c is any integer variable. (Note that sizeof is an operator, not a function.)
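A minimal sketch:

    #include <stdio.h>

    int main(void) {
        int c = 0;
        /* sizeof yields the size in bytes; %zu prints a size_t */
        printf("sizeof(c)   = %zu\n", sizeof(c));
        printf("sizeof(int) = %zu\n", sizeof(int));
        return 0;
    }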
Use an enum if you are using a C-style language, or a map data structure. Assign each integer an English name and then match it against what the user inputs, as in the sketch below.
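One way to do this in C (the words and values are just illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Map the words "one".."three" to the integers 1..3. */
    enum number { ONE = 1, TWO = 2, THREE = 3 };

    static const char *names[] = { "one", "two", "three" };

    int word_to_int(const char *word) {
        for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
            if (strcmp(word, names[i]) == 0)
                return (int)(i + 1);  /* matches ONE, TWO, THREE */
        }
        return 0;  /* no match */
    }

    int main(void) {
        char buf[32];
        if (scanf("%31s", buf) == 1)
            printf("%s -> %d\n", buf, word_to_int(buf));
        return 0;
    }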