What is the difference between an 8-bit, a 16-bit and a 32-bit processor?
In an 8-bit processor, the internal parts such as the registers and the ALU operate on data 8 bits at a time; a 16-bit processor works on 16-bit data and a 32-bit processor on 32-bit data. The bus that carries data from one part of the processor to another has 8 lines in an 8-bit processor, 16 lines in a 16-bit processor and 32 lines in a 32-bit one, because all of those bits have to be transmitted simultaneously.
There are a lot of differences; to name a few: a 64-bit version of Windows XP can support more than 4 GB of RAM, while the 32-bit version cannot go beyond 4 GB. 64-bit systems also work well with modern graphics cards and are better suited to high-resolution displays.
Most modern operating systems are either 32-bit or 64-bit, and the difference mostly comes down to memory addressing. A 32-bit system can use a maximum of 4 GB of RAM, because it has only 32 bits with which to identify a location in RAM. A 64-bit system can use far more memory (well over 128 GB), while a 16-bit system can address far less (64 KB directly, since 2^16 = 65,536). Anything you buy today should be at least 32-bit, if not 64-bit.
"32-bit" means the number of address lines is 32, so the processor can point to 2^32 address locations in memory. It also means 32 bits of information can be handled simultaneously: copying, reading and writing move 32 bits of data at a time, and the system bus contains 32 wires to hold that data. A 64-bit processor likewise has 64 address lines and can handle 64 bits of data at once. Most PCs in use today are 64-bit.
There are a few differences between a 32-bit and a 64-bit laptop. A laptop needs a place to store the information it must access most quickly, and a 64-bit laptop can address more of that memory than a 32-bit one, allowing faster access to large amounts of data.
The terms 32-bit and 64-bit refer to the way a computer's processor (also called a CPU) handles information. The 64-bit version of Windows handles large amounts of random access memory (RAM) more effectively than a 32-bit system.
A better question would be: what is the difference between 16-bit and 24-bit colour? Colour is usually represented on computers, and displayed, using three colour elements: red, green and blue.

16-bit colour (known as "high colour") uses a total of 16 binary bits to represent each colour. This usually means 5 bits for red, 5 bits for blue, and 6 bits for green (apparently we are more visually sensitive to green). That gives 32 shades of red and blue and 64 shades of green, for 65,536 possible colours.

24-bit colour (known as "true colour") uses 8 bits for each channel, allowing 256 shades of each and a total of about 16.7 million colours. Pretty much all digital displays use 24-bit colour and consider it the full or "true" colour mode. On a home computer, "32-bit" means 24 bits are used for red, green and blue; the extra 8 bits usually store internal information (such as transparency or stencil data). 32 bits is often used because memory is easier to read in steps of 32 bits (4 bytes) than in steps of 24 bits (3 bytes). Visually, 24-bit and 32-bit will look identical on your monitor.

24-bit colour will look visually better than 16-bit colour when viewing high-colour images such as photographs or smooth colour gradients. In 16-bit colour you can often see the "steps" as one colour transitions to the next; in 24-bit the colours are so close together that they are often indiscernible, and on most digital monitors, to most eyes, the transition appears perfectly smooth. Most modern computer games require a graphics card capable of displaying 24-bit colour and no longer support 16-bit.
The amount of data a microprocessor can handle per clock cycle is measured in part by its bit width and its operations per cycle. Here the bit width, the literal length of the data a single instruction handles, is what matters. Falling back on my bus-width answer, the elemental difference between a 16-bit and a 32-bit processor is pretty easy to see when drawn out. I'll use the band name "ABBA" as an example.

A 16-bit processor would move the string "ABBA" like this:
Receive command
Instruction #1 ("AB" in binary)
Instruction #2 ("BA" in binary)
Output

A 32-bit processor does it like this:
Receive command
Instruction #1 ("ABBA" in binary)
Output

A lot more goes into the actual process than this, and a 32-bit processor isn't necessarily twice as fast at the same clock speed as a 16-bit processor. In fact, real-world differences are often quite small, due to under-the-hood architectural differences between 16-, 32- and 64-bit processors. Just keep in mind that a lower-bit processor isn't compatible with a higher-bit program, but most higher-bit processors ARE compatible with lower-bit programs.
"n-bit microprocessor" means how many bits the microprocessor can process in a single operation. An 8-bit microprocessor can perform addition, subtraction and so on only on 8-bit operands; a 16-bit one can process 16-bit data in any operation. Naturally, the more bits that can be processed at once, the better. An example of an 8-bit processor is the 8085, and an example of a 16-bit processor is the 8086, both from Intel. — Anand Bhat
Bus width says how many bits can be moved around at the same time. Think of the bus as a highway that connects the processor (CPU) to memory (RAM), and the bus width (e.g., 32 or 64 bits) as the number of lanes. A wider bus (e.g., 64-bit) lets you move more data in the same time.
A 16-bit channel stands for Deep Colour: every pixel of the image can take one of 2^30 to 2^48 (1,073,741,824 to 281,474,976,710,656) colours, far more than our eyes are able to perceive. With 8 bits per channel (TrueColor) there are 2^24 (16,777,216) colours, an optimal number both for full-range colour reproduction and for image file size. Either way, TrueColor is the final target of any process in which Deep Colour images are used.
16-bit compilers compile a program into 16-bit machine code that will run on a computer with a 16-bit processor. 16-bit machine code will run on a 32-bit processor, but 32-bit machine code will not run on a 16-bit processor. 32-bit machine code is usually faster than 16-bit machine code. — DJ Craig

Note: with a 16-bit compiler the type sizes (in bits) are typically:
short, int: 16
long: 32
long long: (no such type)
pointer: 16/32 (but even 32 bits means only a 1 MB address space on the 8086)

With a 32-bit compiler:
short: 16
int, long: 32
long long: 64
pointer: 32

With a 64-bit compiler:
short: 16
int: 32
long: 32 or 64 (!)
long long: 64
pointer: 64

While the above values are generally correct, they may vary for specific operating systems; check your compiler's documentation for the default sizes of the standard types. Note also that the C language itself says nothing about "16-bit compilers" or "32-bit compilers".
The fundamental difference between a 32-bit and a 64-bit microprocessor is what their names suggest: the size of the basic integer operations, also called the 'native' size of the CPU's calculations. The native size of a CPU determines a whole bunch of related characteristics. For instance, all integer calculations are done using the native size, and this matters for performance:

If you add two integers no larger than the native size, it requires only a single operation.
If you add two integers larger than the native size, you must perform three operations (add the lower halves, propagate the carry, then add the upper halves).

For instance, adding two 20-bit numbers takes a single operation on both a 32-bit and a 64-bit CPU. But adding two 40-bit numbers takes only one operation on a 64-bit CPU, versus three on a 32-bit CPU.

The native size of a CPU also determines things like the maximum addressable memory: a 32-bit CPU can address up to 2^32 = 4 GB of memory, while a 64-bit system can address up to 16 exabytes. It also determines the minimum size of information that has to be processed: when fetching information from caches and memory, no operation handles less than the native size. Thus, 64-bit CPUs are more demanding on memory subsystems, as they process information in 64-bit chunks rather than 32-bit ones.
16-bit Windows applications were designed to run under Windows 3.0 and 3.1, while 32-bit Windows applications were designed for Windows 95, 98, NT, and 2000. They are written to two different Application Program Interfaces (APIs) called "Win16" and "Win32". The main differences between the Win16 and Win32 APIs are:

Memory model: Win16 uses a segmented memory model (each memory address is referred to using a segment address and an offset within that segment), while Win32 uses a flat 32-bit address space.

Multitasking: Win16 uses cooperative multitasking, meaning an application must relinquish control before another application or program can run. Win32 uses preemptive multitasking, in which the operating system (Windows NT, 95, 98, or 2000) assigns time slices to each process.

Multithreading: Unlike Win16, Win32 supports multithreading: each program can be broken up into many threads, which can run simultaneously.

Windows 3.1 and Windows for Workgroups 3.11 can run a small subset of Win32 applications, mostly older ones, by using a subsystem called "Win32s". Win32s translates Win32 system calls to Win16; this process is called "thunking". Windows 95, 98, NT, and 2000 can run Win16 applications by running them cooperatively in a Win16 compatibility box (in the case of Windows NT, this is called "WOW", Windows on Windows). If a 32-bit application crashes, it will not affect any other 32-bit or 16-bit applications. However, if a 16-bit application crashes, it might affect other 16-bit applications (but not 32-bit ones). Both APIs contain the mechanisms used to link applications and documents together (e.g., OLE and OLE2).
This refers to how the CPU processes information. 32-bit is more current than 16-bit and much faster; 16-bit is obsolete now that we even have 64-bit systems.
The number of bits describing the architecture of a CPU defines its maximum "word size", or how many bits it can process at one time. In an 8-bit system, a standard signed integer can hold a value of up to +127, and an unsigned integer a value of up to 255. In a 16-bit system, unsigned integers can hold values up to 2^16 - 1, or 65,535.

When speaking of 8-bit and 16-bit colour: "8-bit colour" can mean a palette of 256 entries, each of which can be assigned any colour, with only 256 unique colours usable at a time. The other meaning is a fixed 256-colour system (sometimes called 8-bit truecolor) in which the colour is defined by assigning 3 bits to red, 3 to green, and 2 to blue; this system is less flexible than the palette system. 16-bit colour (also called high colour, or 16-bit truecolor) allows a total of 65,536 unique colours to be displayed anywhere on screen at any time, by assigning 5 bits to the red and blue channels and 6 bits to the green channel (our eyes are more sensitive to green).
An 8-bit display can only show 256 colours, while a 16-bit display can show 65,536.
Quality is the difference. When playing sound with 8-bit sample resolution, the sound wave is encoded with only 256 levels, which is not very accurate. Analog circuits placed after the digital-to-analog converter (DAC) can smooth the wave, but that also loses some detail. When 16-bit samples are used, the sound is encoded with 65,536 levels. This allows music to be recorded and played with much greater accuracy, and it then sounds much better.