
Historically, supercomputers worked simply by having a more powerful processor than other types of computers. However, due to limitations inherent in processor design, they now work mainly through multiprocessing. When they are given a task, they split it up and run the pieces on several different processors at the same time. With multiple processors working on the task, it takes much less time to finish. When the supercomputer needs more power, the engineers simply add more processors (often called "nodes"). A rough sketch of the idea is below.
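Here's a minimal sketch of that split-and-combine idea using Python's standard library. The workload (summing squares over a range) is just a toy stand-in; on a real supercomputer the "nodes" would be separate machines talking over an interconnect (e.g. via MPI) rather than local processes, but the divide-the-work pattern is the same.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute one chunk of the work, independently of the other chunks."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4  # more "nodes" -> smaller chunks -> less wall-clock time
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        # Run all the chunks at the same time on separate processors.
        results = pool.map(partial_sum, chunks)

    # Combine the partial answers into the final result.
    print(sum(results))
```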

Supercomputers are generally used to do simulations. The problem with computers is that they are great at discrete simulations (what happens at point x at time y given inputs z?), but they tend to suck at analogue simulations (what will happen in this area over a time period?).

You can approximate analogue simulations by doing discrete/digital simulations over thousands or millions of points of space and time. The more points you can calculate in a simulated hurricane or nuclear explosion, the closer you get to the real thing. Hence bigger and badder supercomputers can do simulations that are closer and closer to true analogue representations.
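To make that concrete, here is a tiny example of approximating a continuous ("analogue") process on a discrete grid: one-dimensional heat diffusion stepped forward with finite differences. The grid size, step count, and diffusion constant are arbitrary illustrative values; refining the grid and time step is exactly the "more points gets you closer to the real thing" idea, and at hurricane or warhead scale it is what eats up all those nodes.

```python
def simulate_heat(points=100, steps=500, alpha=0.1):
    """Discretely approximate continuous heat flow along a rod."""
    # Start with a hot spike in the middle of an otherwise cold rod.
    temps = [0.0] * points
    temps[points // 2] = 100.0

    for _ in range(steps):
        new = temps[:]
        for x in range(1, points - 1):
            # Each point only looks at its neighbours at the previous instant:
            # a purely discrete question ("what happens at point x at time y?").
            new[x] = temps[x] + alpha * (temps[x - 1] - 2 * temps[x] + temps[x + 1])
        temps = new
    return temps

if __name__ == "__main__":
    result = simulate_heat()
    print(max(result))  # the spike has spread out and flattened
```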

As chrismear says, the stewardship program is probably running nuclear-explosion simulations with varying inputs based on the age of the stockpile and the half-lives of the components in the warheads.

Rest assured that the 'safe and reliable' thing is mostly based around "will they still do as much damage as reliably as they did when we tested the real thing?". 'Safety' is not really an issue when the warheads need external detonation to get them anywhere near critical mass.
