Historically, supercomputers got their speed simply by having a more powerful processor than other types of computers. However, due to limits inherent in processor design, they now work mainly through multiprocessing. When they are given a task, they split it up and run the pieces on several different processors at the same time. With multiple processors working on the task, it takes much less time to finish. When the supercomputer needs more power, the engineers simply add more processors (often called "nodes").
Supercomputers are generally used to do simulations. The problem with computers is that they are great at discrete simulations (what happens at point x at time y given inputs z?), but they tend to suck at analogue simulations (what will happen in this area over a time period?).
You can approximate analogue simulations by doing discrete/digital simulations over thousands or millions of points of space and time. The more points you can calculate in a simulated hurricane or nuclear explosion, the closer you get to the real thing. Hence bigger and badder supercomputers can do simulations that are closer and closer to true analogue representations.
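To make that concrete, here's a toy sketch of the idea (not a real simulation): the "analogue" quantity is just the area under sin(x) from 0 to pi, whose exact value is 2, and we approximate it by sampling at more and more discrete points. More points, closer answer; same principle, just scaled up enormously, in a hurricane or blast simulation.

```python
import math

def discrete_estimate(n_points):
    """Midpoint-rule estimate of the integral of sin(x) over [0, pi]."""
    dx = math.pi / n_points
    # Sample the continuous function at n_points discrete midpoints.
    return sum(math.sin((i + 0.5) * dx) * dx for i in range(n_points))

for n in (10, 100, 1000, 10000):
    # Each extra order of magnitude of points gets us closer to the
    # true "analogue" value of 2.0.
    print(n, discrete_estimate(n))
```

The error shrinks as the grid gets finer, which is exactly why "more nodes" translates into "more realistic simulation".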
As chrismear says, the stewardship program is probably running nuclear explosion simulations with varying inputs based around the age of the stockpile and the half-lives of the component materials in the warheads.
Rest assured that the 'safe and reliable' thing is mostly based around "will they still do as much damage as reliably as they did when we tested the real thing?". 'Safety' is not really an issue when the warheads need external detonation to get them anywhere near critical mass.
With skill ;)
If you have to ask, you can't have one.
Seriously, supercomputers are often aggregates of several or many computers. There is one "boss" computer whose task is to oversee the churnings of the others while they grind away at their part of the problem.
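A minimal sketch of that "boss"/worker split, using Python's multiprocessing module on one machine (a real cluster would farm chunks out over a network instead): the boss chops a big job into chunks, hands each chunk to a worker process, and combines the partial results.

```python
from multiprocessing import Pool

def worker(chunk):
    """Each worker grinds away at its own slice of the problem."""
    return sum(x * x for x in chunk)

def boss(data, n_workers=4):
    """Split the data, farm the pieces out, and merge the answers."""
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        partials = pool.map(worker, chunks)  # workers run in parallel
    return sum(partials)

if __name__ == "__main__":
    # Same answer as a single process, just computed in parallel.
    print(boss(list(range(1_000_000))))
```

The function names `boss` and `worker` are just illustrative; the pattern itself (scatter the work, gather the results) is the standard one.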
A cool temperature is needed.
I think the latest supercomputer is IBM's "Roadrunner".
The MacBook is a great computer, but it would not be officially classed as a supercomputer.
San Diego Supercomputer Center was created in 1985.
Given similar technology the supercomputer is faster, by definition.
Supercomputer performance is measured in FLOPS (FLoating-point Operations Per Second).
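Here's a rough sketch of what a FLOPS figure means: count floating-point operations, divide by elapsed time. (Real rankings like the TOP500 use the LINPACK benchmark; this toy loop just conveys the unit, and pure Python will report a number many orders of magnitude below the hardware's peak.)

```python
import time

def estimate_flops(n=5_000_000):
    """Crude FLOPS estimate: time a loop of known floating-point work."""
    a, b = 1.0000001, 0.0
    start = time.perf_counter()
    for _ in range(n):
        b = b * a + a   # one multiply + one add = 2 floating-point ops
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"{estimate_flops():.2e} FLOPS")
```

For scale, Roadrunner-class machines were measured in petaFLOPS (10^15 operations per second).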
personal computer, handheld computer, workstation, midrange system, mainframe, and supercomputer
Bangalore
No - a supercomputer is a single device or system (although fast and expensive). A massive collection of networked computers can give the results of a supercomputer but they would not be considered one.
four
engineers