Central to statistical analysis is the notion of hypothesis testing. In this section we review hypothesis testing (via null and alternative hypotheses), as well as considering the related topics of confidence intervals, effect size, statistical power and sample size requirements.

Topics:

- Null and Alternative Hypothesis
- Confidence Interval
- Effect Size
- Statistical Power
- Real Statistics Power and Sample Size Data Analysis Tool
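
As a small illustration of the power and sample-size topics listed above, here is a sketch of a power calculation for a two-tailed one-sample z test using the normal approximation. This is my own illustration (the function name and defaults are assumptions, not part of the Real Statistics tool):

```python
from math import sqrt
from statistics import NormalDist

def power(d, n, alpha=0.05):
    """Approximate power of a two-tailed one-sample z test
    for effect size d (Cohen's d) with n observations."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96 for alpha = 5%
    shift = d * sqrt(n)                      # noncentrality: how far the true mean sits from H0
    nd = NormalDist()
    # P(reject H0) = P(test statistic lands in either tail) under the true mean
    return nd.cdf(-z - shift) + 1 - nd.cdf(z - shift)

print(round(power(0.5, 30), 3))
```

With a medium effect size (d = 0.5) and n = 30 this gives power of roughly 0.78, which is why n ≈ 30 is often quoted as a minimum for detecting medium effects.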

How do you determine whether a null hypothesis is implied or explicitly stated?

Jackie,

This varies on a case by case basis.

Charles

Dear Charles

What do we really get from NHST?

Logic tells us that when a sufficient set of preconditions is fulfilled, the conclusion is unequivocally proved. By contrast, necessary conditions must evidently occur, but by themselves they are unable to prove the conclusion we want to check.

Unfortunately, NHSTs are such that a non-significant result does not mean at all that the null hypothesis is true. In other words, a non-significant result must occur as a precondition, but it is insufficient to establish the truth of the null. Therefore, following Neyman–Pearson theory, if we want to reach a definite choice – H0 or Ha, null or alternative hypothesis – we must impose the rather odd condition that no third alternative exists.

Luis

Luis,

What you get via the NHST approach is the probability that the observed data (or data more extreme) could occur given that the null hypothesis is true (i.e. a conditional probability). This approach is a bit disappointing for those of us who would simply prefer to know the probability that the null hypothesis is true. In any case, the NHST approach is the one that is commonly used (although the Bayesians look at things a little differently).

Charles
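
Charles's point that the p-value is a conditional probability – the probability of data at least this extreme, given that H0 is true – can be made concrete by simulation. The sketch below (my illustration, not part of the thread) approximates the two-sided p-value for observing 60 heads in 100 flips of a supposedly fair coin, by generating data under H0 and counting how often a result at least as extreme occurs. The estimate should land close to the exact two-sided binomial p-value (about 0.057):

```python
import random

random.seed(1)

n, observed_heads, p0 = 100, 60, 0.5  # H0: the coin is fair (p = 0.5)
trials = 20_000
extreme = 0
for _ in range(trials):
    # simulate one experiment assuming H0 is true
    heads = sum(random.random() < p0 for _ in range(n))
    # two-sided: count outcomes at least as far from n*p0 as the observed count
    if abs(heads - n * p0) >= abs(observed_heads - n * p0):
        extreme += 1
p_value = extreme / trials
print(p_value)
```

Note that this number says nothing directly about P(H0 | data); it is P(data at least this extreme | H0), which is exactly the asymmetry Charles describes.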

Charles

Yes, of course, you are right. My fragmentary note was only to stress the evidence of NHST's "necessary" character. I bet there are still people persuaded that a "non-significant" result implies acceptance of the null. I have heard "barbaric" conclusions about these issues, like, for example:

____Significance tests are a completely foolish exercise because the null never occurs exactly. That is, we pose H0: p = p0, and we know this condition is impossible because the population parameter will likely differ from the proposed p0 in at least one decimal place.

My thought:

One cannot read H0: p = p0 as an algebraic equality. My (humble) interpretation is something like this: we try to obtain evidence that the observed data are strongly unlikely under H0. Performing the test (assuming the null is true), if the results fall outside the rejection region, we conclude that there is not sufficient evidence to reject H0. We are simply not allowed to state that H0 is true. In fact, non-rejection occurs with high probability (usually 95%) when the null holds, so we deliberately abandon any intention of saying something about the truth of the null.
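
Luis's point – that non-rejection is the expected, high-probability outcome when the null holds, and remains likely even when the null is only slightly wrong – can be seen in a small simulation. This is my own sketch (the function and parameter values are illustrative assumptions), using a z test for a proportion:

```python
import random
from math import sqrt

random.seed(2)

def rejects(true_p, n=100, p0=0.5, alpha_z=1.96):
    """Run one experiment with the true success probability true_p
    and test H0: p = p0 at roughly the 5% level."""
    heads = sum(random.random() < true_p for _ in range(n))
    z = (heads / n - p0) / sqrt(p0 * (1 - p0) / n)
    return abs(z) > alpha_z

trials = 5000
# rejection rate when H0 is exactly true: should be near alpha (~5%)
under_null = sum(rejects(0.5) for _ in range(trials)) / trials
# rejection rate when H0 is slightly false (true p = 0.52): still low
near_null = sum(rejects(0.52) for _ in range(trials)) / trials
print(under_null, near_null)
```

Even with the null false (p = 0.52), the test fails to reject more than 90% of the time at n = 100, which is precisely why "not rejected" cannot be read as "true".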

And so on. I think that a great many of the Real Statistics readers are well aware that one is dealing with likelihood, never with certainty: NHST is truly a game.

Charles, I beg your pardon for the babbling,

Luis

Charles

The parametric null hypothesis statement H0: p = 0 (or whatever value) shouldn't be taken literally. In fact, in carrying out the calculations we assume that it is true. However, in the case of a non-significant result we cannot go beyond the statement that there is not sufficient evidence to reject the null hypothesis: to say that we accept it is structurally an error, and even worse to state that the null is true.

Otherwise we fall into the catastrophic conclusion (J. Cohen *) that the null hypothesis can never hold exactly, because even the last decimal place could differ from the assumed value.

A silly argument for trying to invalidate NHST, I would say.

Luis

Charles

Because the researcher does not want to publish trash results, and to guarantee that his finding is clearly different from the current null hypothesis H0 found in the literature, he wisely chooses a very small alpha (5%, or even 1%) so that the type I error rate is appropriately small.

However, the usual jargon "non-significant result" can hardly be a synonym for acceptance of the null hypothesis. It is much better, IMO, to say that "there is not sufficient evidence" to reject H0. Accordingly, "non-rejection interval" would be preferable to the current "acceptance interval".

Would you be so kind, Charles, as to comment?

Luis

Luis,

I agree, but it is probably better for the readers to become familiar with the commonly-used terminology.

Charles