A blastwave is the destructive wave of pressure caused by an explosion.
"The first atomic bomb was launched in 1945 by the US. It incinerated Hiroshima and Nagasaki, Japan during WWII." The above is half true. That was the first time atomic bombs were used in war, but the first atomic bomb was detonated at the Trinity Site on the White Sands Missile Range on July 16, 1945.
It was actually the lower-yield MK-I Little Boy bomb dropped on Hiroshima that did more damage. The higher-yield MK-III Fat Man bomb was accidentally dropped much further from its target (almost 2 miles away), and its blastwave was contained by the surrounding hills.
Not at this time: the yield of hydrogen bombs (a type of nuclear bomb) has no theoretical limit. However, usable bombs of any type have practical limits, and we have already built and successfully tested bombs with yields far higher than is militarily practical (nobody ever had a real military use for the 50 megaton bomb the USSR tested in 1961, the Tsar Bomba!). Edward Teller once proposed building gigaton-range hydrogen bombs, but the plan was promptly rejected: most of the blastwave of such a high-yield explosion would simply blow the atmosphere above the point of detonation off into space (the military wants surface damage, not removal of atmosphere!), producing less surface damage than lower-yield, less expensive bombs. The trend since the mid-1970s has actually been toward lower and lower yield hydrogen bombs, which, when employed as several explosions spaced across an area, produce greater damage more economically than one higher-yield bomb could (see the sketch below).

==========

In terms of long-term damage there are different types of nuclear bombs. Those which spread large quantities of persistent, high-level radioactive materials could be argued to be the "worst" - but they are still considered nuclear weapons. A "dirty bomb", which simply uses a conventional explosive to spread radioactive materials over a wide area to contaminate it, could possibly be considered equally bad.
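To put rough numbers on the claim above (before the divider) that several lower-yield bursts can out-damage one large burst, here is a minimal Python sketch. It assumes the standard cube-root scaling rule of thumb, in which the radius to a given blast overpressure grows with the cube root of yield, so the damaged area grows with yield to the 2/3 power; the yields used are arbitrary example numbers, not data about any actual weapon.

```python
# Rough illustration of why several smaller bursts can cover more area
# than one large burst of the same total yield.
#
# Assumption: the radius at which a given blast overpressure is reached
# scales with the cube root of yield (the standard scaling approximation
# for blast effects), so damaged area ~ yield**(2/3).
# The yields below are arbitrary example numbers, not real weapon data.

def relative_damage_area(yield_megatons: float) -> float:
    """Blast-damage area in arbitrary units, proportional to W**(2/3)."""
    return yield_megatons ** (2 / 3)

one_big = relative_damage_area(10.0)        # one 10 Mt burst
ten_small = 10 * relative_damage_area(1.0)  # ten separate 1 Mt bursts

print(f"one 10 Mt burst: {one_big:.2f} area units")
print(f"ten 1 Mt bursts: {ten_small:.2f} area units")
print(f"ratio: {ten_small / one_big:.2f}x")  # about 2.15x more area
```

Under that approximation, splitting the same total yield into ten separate bursts covers roughly twice the area of one large burst, which is the economics behind the trend toward lower yields mentioned above.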
The idea of the hydrogen bomb originated very early in the Manhattan Project, but no significant design work was done until right after the end of World War II, when Edward Teller completed his "Classical Super" design (simulations of this design on ENIAC showed, by the end of January 1946, that it would not work).

The invention of what became the Teller-Ulam staged radiation implosion hydrogen bomb came in 1950, out of work on improved atomic bomb designs. A design team at Los Alamos came up with the idea of using one atomic bomb to implode another much faster than chemical explosives can, resulting in a device with higher yield that uses less fissionable material. One member of this team, the mathematician Stanislaw Ulam, consulted Edward Teller on issues such as x-ray radiation transport and how to perform the mathematical analyses needed to model this on a computer. Teller quickly realized this was the missing "ingredient" needed to make his hydrogen bomb designs practical, and while he supported continuing the work on the idea with Ulam, he discouraged using the concept to improve atomic bombs, calling that "inefficient" compared to making hydrogen bombs.

The US built and tested its first hydrogen bomb using staged radiation implosion in 1952 (designed by Richard Garwin, with suggestions by Edward Teller) in shot Ivy Mike: a massive 82-ton cryogenic assembly requiring a separate liquid hydrogen plant to keep its liquid deuterium fusion fuel cold. This was obviously not a practical bomb for an aircraft to deliver. The USSR built and tested a limited-yield but deliverable weapon they called a "type of hydrogen bomb" in 1953, causing a brief panic in the US. This device did not use staged radiation implosion; instead its core consisted of alternating layers of enriched uranium and lithium deuteride, making it more of a "dry boosted fission bomb" than a hydrogen bomb.

The US built and tested several different deliverable hydrogen bombs using staged radiation implosion in 1954 in Operation Castle. The device tested in shot Castle Romeo was selected to become the first US hydrogen bomb and was fielded later that year as the EC-17, which could only be carried by the B-36 bomber. "EC" meant Emergency Capability: any bomber delivering one was on a suicide mission, as there was no way the airplane could avoid being destroyed by the blastwave from the explosion. In 1955 the US added a retarding parachute to this bomb, which gave the bomber time to escape undamaged, and changed its name to the MK-17. At about the same time the USSR tested and fielded their first deliverable staged radiation implosion hydrogen bomb.
They were not targeted specifically. An initial list of 6 target cities was prepared in May 1945; they were just 2 of the cities on this list. Along with this list, orders were prepared to "use the atomic bombs as they become available". No other authorization to use the atomic bombs was required; they would just continue being used until the president canceled these orders.

When the 509th Composite Group arrived on Tinian in July 1945, they flew periodic practice bombing missions against various Japanese cities not included on the list, using 10,000-pound "pumpkin bombs" filled with Composition B and fused to explode on impact. Each of these missions used 3 B-29s, just like the real atomic bombing missions would: one plane carrying the bomb and two observation planes.

For the real missions the field commanders of the 509th selected 3 cities from the list: a primary target, a secondary target, and a tertiary target. The real atomic bombs were radar-altimeter fused to explode at an altitude calculated to maximize the area of blastwave damage.

The Little Boy MK-I atomic bomb became available (assembled and checked out) on the morning of August 6, 1945, and the 3 planes took off from Tinian. This bomb was successfully used on its primary target, Hiroshima.

The Fat Man MK-III atomic bomb became available (assembled and checked out) on the morning of August 9, 1945, and the 3 planes took off from Tinian. Due to problems, one of the observation planes did not meet up with the plane carrying the bomb and the other observation plane. Two planes continued on to the primary target, Kokura, but after finding it obscured by smoke from a nearby city that had been firebombed the night before, it was decided to proceed to the secondary target. This bomb was successfully used on its secondary target, Nagasaki, despite heavy cloud cover, but the bomber was very low on fuel by that time and could not have made it to the tertiary target if they had not been able to use their bomb on Nagasaki.

Another MK-III atomic bomb had been readied by Los Alamos by this time and shipped to San Francisco, from which it would have been flown to Tinian and become available (assembled and checked out) sometime in late August 1945. It would very likely have been used on Kokura (though actual final selection of the 3 cities would still have been made by the field commanders). However, before it arrived in San Francisco the Japanese had indicated they would surrender and Truman ordered the bombing of Japan stopped, so it was returned to Los Alamos (probably becoming the first atomic bomb in the US stockpile).

The Manhattan Project was prepared to manufacture and "make available" 20 more MK-III atomic bombs for use on Japan before the end of 1945.
The age of rocks is usually determined by radioactive (or radiometric) dating. Some elements are radioactive and gradually convert from one isotope to another. For example, uranium-238 (238U) gradually converts to lead-206 (206Pb). It does this at a statistically predictable rate, described by the half-life of the isotope in question: the period of time it takes for half the atoms of 238U to decay toward lead. For this particular isotope the half-life is 4.47 billion years, and uranium/lead dating is useful for rocks between 1 million and 4.5 billion years old (as luck would have it!). A small worked example of the arithmetic is sketched at the end of this answer.

Other elements are also used (potassium/argon, for example). The choice of element depends on how widespread it is - if it isn't found in many rocks then it's not very useful - and on how long the half-life is - if the half-life is much shorter than the age of most rocks then it's equally unhelpful. Radiometric dating of a rock's formation is generally performed on igneous rocks. The uranium/lead pair is most often used because igneous rocks often contain uranium and the half-life is so long.

The oldest Earth rocks found are around 4 billion years old, with individual zircon crystals dated to about 4.4 billion years. This is consistent with the ages of rocks from the Moon and from meteorites, which all point to an age of around 4.54-4.56 billion years for the solar system. Separate studies of the sun's mass and luminosity suggest that the solar system itself can't be much older than these rocks.

All of this is well-known scientific information. The arguments against it lack scientific credibility, and most of them are posted by so-called Young Earth creationists. The vast majority of the scientific community accepts as valid the information provided by radiometric dating. The claims made about "assumptions" in radiometric dating of billion-year-old rock concern factors that those who work in the field have worked hard to eliminate as impediments to the accuracy of their measurements. Making a "claim" about any aspect of radiometric dating without scientific proof that something is wrong, as has been done, does not invalidate the work - or the measurements. The earth is about four and a half billion years old.
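To make the half-life arithmetic above concrete, here is a minimal Python sketch of the standard decay-based age calculation. It assumes the simplest textbook case - no lead-206 in the mineral when it crystallized and no gain or loss of uranium or lead since then - and the measured ratios used are made-up example values, not real measurements.

```python
import math

# Age of a rock from the measured ratio of daughter (Pb-206) to remaining
# parent (U-238) atoms, using the standard exponential-decay relationship.
#
# Assumptions for this sketch: no Pb-206 in the mineral when it formed and
# no gain/loss of uranium or lead since then (a closed system).

HALF_LIFE_U238 = 4.47e9                      # years
DECAY_CONSTANT = math.log(2) / HALF_LIFE_U238

def age_from_ratio(pb206_per_u238: float) -> float:
    """Age in years given the measured Pb-206 / U-238 atom ratio."""
    # N_U(t) = N_0 * exp(-lambda * t) and Pb = N_0 - N_U, so
    # Pb/U = exp(lambda * t) - 1  =>  t = ln(1 + Pb/U) / lambda
    return math.log(1 + pb206_per_u238) / DECAY_CONSTANT

# Example: a (made-up) ratio of 1.0 means half the original U-238 has
# decayed, so the computed age should equal one half-life.
print(f"{age_from_ratio(1.0):.3e} years")   # ~4.47e9
print(f"{age_from_ratio(0.20):.3e} years")  # ~1.18e9
```

Real uranium/lead work uses isochron and concordia methods precisely to test those closed-system assumptions rather than take them on faith, which is the point made above about the "assumptions" being actively checked.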