
Graphics Cards

A piece of computer hardware that transforms binary data from the CPU into the image displayed on a computer's monitor. It is typically made up mainly of memory, a processor, a heatsink, a fan, and a motherboard connector.


What is the most powerful graphics card?

The Nvidia GeForce 8800 GTX has been superseded by the Nvidia 9800 GX2 as the most powerful graphics card (March 2008). It is essentially two 9800 GTX boards sandwiched together with a single heatsink in the middle. This makes it a very hot GPU, and it requires good case cooling. It is also very, very expensive.

What is a graphics card and what is its function?

It handles all computation involved in displaying a picture on the screen. It relieves the load on the main CPU and memory by having its own.

Most motherboards have a built-in graphics chip, but a dedicated card is an improvement for fast 3D graphics.

AGP 9250 TD2W 256D Graphic Driver?

http://driverscollection.com/?H=Radeon%209250&By=ATI&SS=Windows%20XP

What graphics card do you need?

Eh, this question is right in line with "What size clothes do you need?" How am I supposed to know what you need?

Most motherboards have some kind of graphics system, so for surfing and reading the answer is: none. To play a reasonably modern game, you can get an Nvidia GeForce 240GL 1 GB for peanut money, and it's good unless you want kickass graphics. If you do, get a more expensive card. The 560 Ti is good for most people; it's tried and tested and goes for about $200.

Yes, there are AMD cards too, but I don't know them as well. Your corner component store should know, though.

Does geforce 7050 support directx 10?

No. The GeForce 8 series was the first to support DirectX 10.

What is artificial texture?

Artificial textures are created and designed by people for a specific purpose: to give a sense of volume.

Can you fake your video card?

This depends on what you mean by faking the video card. You cannot fake one if you have none installed at all. However, you can trick software into thinking you have a video card other than what is installed. When monochrome displays were common, some software required CGA monitors. Some monochrome adapters were Hercules compatible, but the software was not. However, there were programs you could install to allow CGA-compatible software to run on Hercules-compatible monochrome hardware.

Another thing that can be done in some cases is to fake a different model of video card. That often requires editing the ROM instructions in the video card and flashing them back to the card. This is risky and may also violate license agreements in some cases, but performance can sometimes be gained this way.

What is graphic RAM?

Graphics RAM is memory on the graphics card that holds information about the textures to be rendered in a game, along with other things. Generally 512 MB of RAM is more than you will ever need; the only time you would need more is when playing at the highest resolutions on screens over 24".

What do the graphics card terms core clock, memory clock, texture fill rate and pixel fill rate mean, which two of these should I prefer if I need a gaming card, and is a GTX 470 better than GTX 250s in SLI?

The pixel fill rate is how fast the card can draw ("fill") pixels, and the texture fill rate is how fast it can sample textures; as far as I understand, each is roughly the core clock multiplied by the number of pixel or texture units (see the sketch after this answer).

When using SLI you have the same effective amount of RAM as one card would have.

In SLI each card renders every other frame and the results are combined, so anything under 30 fps would be bad.

However, the GTX 250 uses GDDR3, which is slower than the GTX 470's GDDR5. With this in mind, one GTX 470 should outperform two GTX 250s in SLI.

Also, with the GTX 470 you can upgrade to SLI later for more demanding games, so it is more future-proof.

Some benchmarks:

GTX 250 SLI :http://www.xbitlabs.com/articles/video/display/gf-gts250-sli_8.html#sect0

GTX 470: http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/30321-nvidia-geforce-gtx-470-review-18.html

The benchmarks show the GTX 250s in SLI going a little faster, but you can't upgrade that setup any further, so you would eventually need to replace it.

The choice is up to you. Good luck with your gaming rig.
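To put rough numbers on the fill-rate terms, here is a minimal Python sketch. The clock speed and unit counts are made-up illustrative figures, not the actual specs of either card; the point is only that fill rate scales with core clock multiplied by the number of pixel (ROP) or texture units.

# Rough illustration of how fill rates relate to clock speed and unit counts.
# The numbers are illustrative placeholders, not real card specifications.

def pixel_fill_rate(core_clock_mhz, rops):
    # Pixel fill rate in gigapixels per second.
    return core_clock_mhz * rops / 1000.0

def texture_fill_rate(core_clock_mhz, texture_units):
    # Texture fill rate in gigatexels per second.
    return core_clock_mhz * texture_units / 1000.0

# Hypothetical card: 600 MHz core clock, 32 ROPs, 56 texture units.
print(pixel_fill_rate(600, 32))    # 19.2 gigapixels/s
print(texture_fill_rate(600, 56))  # 33.6 gigatexels/s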

How do you install a graphics card?

The instructions that come with the card should explain it in more detail, but in short: slide open your computer's case, insert the card into the matching slot on the motherboard, close the computer back up, and install the driver software.

What type of expansion slot is only used for graphics?

I read on a website that there are three main types: PCI-E (Express), PCI, and AGP. But what makes me unsure about this is that Wikipedia lists many more types. Take a look at the related links below.

Correction: AGP is used only for graphics.

How does unicode relate to ascii?

UNICODE and ASCII are related in that they are both used to exchange information, primarily in the form of text (plain text as opposed to typography). That is, when we want to exchange the character 'A' between systems, we do not transmit the entire bitmap for the glyph; we simply transmit a character code. Both systems must know what each character code represents, and this is achieved through a "code page" which maps individual character codes to their respective glyphs. In this way we minimise the amount of information that needs to be transmitted.

The problem is that different languages use different symbols. The letter 'A' is a Latin symbol which is fairly common to many European languages; however, not all languages use the Latin alphabet. In order to cater for every language worldwide we'd need to encode more than 110,000 symbols, which would require at least 17 bits per character.

Prior to multi-language support, most information was transmitted in English. To cater for this we needed to encode 26 symbols for the upper-case alphabet, 26 for the lower-case alphabet, 10 digits, a handful of common punctuation marks such as periods, commas and parentheses, plus some common symbols such as %, & and @. Transmitting information to a printer, screen or some other device also required some non-printing control characters, such as carriage return, line feed, whitespace and transmission begin/end. Thus the American Standard Code for Information Interchange (ASCII) settled on 128 characters, enough to encode the entire Latin alphabet plus control codes using just 7 bits, and systems were standardised to accommodate this encoding. Although most systems today use an 8-bit byte, many older printers and transmission protocols used just 7 bits to maintain the highest possible throughput. Some even used specialised encodings with fewer bits (and fewer symbols) to speed up transfers further, and each such encoding required its own standard, many of them derived from ASCII.
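To make the character-code idea concrete, here is a small Python sketch showing that plain ASCII text is just a sequence of 7-bit codes:

text = "Hello, ASCII!"

# Each character is transmitted as a small integer code, not as a glyph bitmap.
codes = [ord(c) for c in text]
print(codes)  # [72, 101, 108, 108, 111, 44, 32, 65, 83, 67, 73, 73, 33]

# Every standard ASCII code fits in 7 bits (0..127).
print(all(code < 128 for code in codes))  # True

# The receiving system maps the codes back to characters using the same table.
print("".join(chr(code) for code in codes))  # Hello, ASCII!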

To cater for more specialised symbols and to provide support for some foreign languages, an 8-bit extended character set was used, yielding an additional 128 symbols. The first 128 characters (codes 0 to 127) in every ASCII code page are always the same, but the extended character set could be switched simply by changing the code page. However, only one code page can be in effect at any one time, so systems were not only limited to 256 characters in total, they also had to use the same code page to ensure extended character information was correctly decoded.
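As an illustration of code-page switching, the following Python sketch (using code page 437 and Latin-1 purely as examples) shows the same extended byte decoding to different symbols depending on which code page the receiver applies:

# A byte value above 127 means different things under different code pages.
byte = bytes([0xE9])
print(byte.decode("cp437"))    # 'Θ' (IBM PC code page 437)
print(byte.decode("latin-1"))  # 'é' (ISO/IEC 8859-1)

# The 7-bit range decodes identically under both.
print(b"A".decode("cp437"), b"A".decode("latin-1"))  # A A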

Today, when we speak of ASCII, we are generally speaking of the ISO/IEC 8859 family of 8-bit encodings (most commonly ISO/IEC 8859-1, also known as Latin-1). The majority of programming languages use such an encoding to define the language's symbols, thus making it possible to transmit the same source code between machines.

UNICODE addresses the limitations of 8-bit ASCII by using more bits per character. A key aspect of UNICODE is that the first 128 characters must always match the 7-bit standard ASCII encodings, regardless of how many bits are employed in the actual encoding. While it would be relatively simple to encode every symbol used by every language using just 17 bits, this limits the ability to expand the number of characters beyond 131,072. More importantly, it is helpful to space the symbols out so that the most significant bits in the encoding can be used to identify a particular set of symbols more easily. Thus UNICODE allows up to 32 bits per character, with individual character sets (or code pages) spread throughout the range.
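A quick way to see characters from different scripts spread throughout the range is to print their code points; the characters below (Python) are just examples:

for ch in ("A", "é", "Ж", "中", "😀"):
    print(ch, hex(ord(ch)))
# A 0x41, é 0xe9, Ж 0x416, 中 0x4e2d, 😀 0x1f600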

This immediately puts an overhead on English-based text transmissions, because we'd have to transmit four times as many bits as we would with the ASCII equivalent. To get around this, UNICODE introduced variable-width encodings, such that the first 128 characters are encoded using 8 bits, exactly mirroring ASCII when the most significant bit is 0. If the most significant bit is 1, however, this indicates that the symbol is encoded using anything from 2 to 6 bytes (at most 4 in the current definition), depending on the state of the other high-order bits. Each of these multi-byte sequences is then mapped to a single UNICODE character.
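The variable-width behaviour is easy to observe in Python; the byte lengths below are standard UTF-8 results:

for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded.hex())
# A  1 byte   41 (identical to ASCII)
# é  2 bytes  c3a9
# €  3 bytes  e282ac
# 😀 4 bytes  f09f9880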

UTF-8 is the most common form of UNICODE in use today because it has no overhead compared with 8-bit standard ASCII and, for most transmissions, less overhead than 32-bit UNICODE (also known as UTF-32). UTF-16 uses 16-bit code units throughout; characters outside the first 65,536 code points are encoded as a pair of 16-bit units known as a surrogate pair.
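For comparison, a short Python sketch of the size trade-off between the three encodings for plain English text (Python's utf-16 and utf-32 codecs prepend a byte-order mark):

text = "Hello, world!"  # 13 characters, all ASCII

print(len(text.encode("utf-8")))   # 13 bytes, same as ASCII
print(len(text.encode("utf-16")))  # 28 bytes (2 per character plus a 2-byte BOM)
print(len(text.encode("utf-32")))  # 56 bytes (4 per character plus a 4-byte BOM)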