I believe you asked about the relationship between "statistical significance" and hypothesis testing. In hypothesis testing, we state a null hypothesis and an alternative hypothesis. Then, in the traditional method, we use a test statistic and a significance level, alpha, to decide whether to reject the null hypothesis in favor of the alternative. If our test statistic falls in the rejection area (critical region) of the sampling distribution, we reject the null hypothesis; if not, we fail to reject it. A second method, the p-value method, is similar in that an alpha value still has to be selected: we reject the null hypothesis when the p-value is less than alpha.

Now, the term "statistically significant result," as used in statistics, means a result (a mean, proportion, or variance) from a random sample that was not likely to be produced by chance alone. When we reject the null hypothesis in favor of the alternative, we are indicating that our data support the alternative hypothesis, so our result is "statistically significant."

Let me use an example. Workers generally arrive at work a few minutes earlier or later than required. Our null hypothesis is an average lateness of 5 minutes, and our alternative hypothesis is an average lateness greater than 5 minutes. Our data show an average lateness of 12 minutes, and our test statistic, which takes into account the variance and the sample size, falls in the critical region for our chosen alpha level, so we reject the null hypothesis. The 12-minute average is therefore a statistically significant result, because it supported rejection of the null hypothesis.

The problem is that "significant," in common usage, means important or meaningful, not trivial or spurious. The sample used to calculate lateness may not have been randomly chosen; for instance, more people come to work late in bad weather. The sample is meant to support inferences about a general population, but there is no static population in this case, since a company hires and fires employees. If our data are flawed, our conclusions may be as well. Used as a technical term in statistics, "statistical significance" has a much more rigorous and restricted meaning, which can lead to confusion.

See: http://en.wikipedia.org/wiki/Statistical_significance
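The lateness example can be sketched as a one-sided z-test using only Python's standard library. Note the assumptions: the 12-minute sample mean and 5-minute null hypothesis come from the example above, but the sample standard deviation (10 minutes) and sample size (36) are made-up numbers for illustration, and a z-test presumes the sample is large enough for the normal approximation.

```python
import math

def one_sided_z_test(sample_mean, hypothesized_mean, sample_std, n, alpha=0.05):
    """Upper-tail z-test: is the true mean greater than the hypothesized mean?"""
    # Standard error of the sample mean
    se = sample_std / math.sqrt(n)
    # Test statistic: how many standard errors the sample mean sits
    # above the hypothesized mean
    z = (sample_mean - hypothesized_mean) / se
    # Upper-tail p-value from the standard normal CDF,
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    p_value = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return z, p_value, p_value < alpha

# Hypothetical numbers: mean lateness 12 min vs. null of 5 min,
# sample std 10 min, sample size 36, alpha = 0.05
z, p, reject = one_sided_z_test(12, 5, 10, 36)
print(f"z = {z:.2f}, p = {p:.2e}, reject null: {reject}")
```

With these numbers the test statistic is 4.2 standard errors above the null mean, the p-value is far below 0.05, and we reject the null hypothesis, i.e., the 12-minute average is statistically significant in the technical sense.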