An effect is the portion of the variance in the data that is explained by a statistical model. This is as opposed to the error, which is the portion of the variance that the model leaves unexplained.
The effect size is a standardized measure of the magnitude of an effect. Since it is standardized, we can compare effects across studies that use different variables and different measurement scales. For example, the difference in the means of two groups can be expressed in terms of the standard deviation: an effect size of 0.5 signifies that the difference between the means is half a standard deviation.
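This standardized difference in means is Cohen's d. As a minimal sketch (with made-up sample data), it can be computed by dividing the difference in sample means by the pooled standard deviation:

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Sample variances, weighted by degrees of freedom (n - 1)
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical measurements for two groups
group_a = [5.1, 4.8, 5.6, 5.2, 4.9, 5.4]
group_b = [4.6, 4.4, 5.0, 4.7, 4.3, 4.8]
print(round(cohens_d(group_a, group_b), 2))
```

Here the means differ by roughly 1.9 pooled standard deviations, which counts as a large effect regardless of the units the data were measured in.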
The most common measures of effect size are Cohen’s d (as described in the previous paragraph and in Standardized Effect Size), Pearson’s correlation coefficient r (as described in One Sample Hypothesis Testing of Correlation) and the odds ratio (as described in Effect Size for Chi-square), although other measures are also used.
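Of these measures, the odds ratio is perhaps the easiest to compute directly. As an illustrative sketch with a hypothetical 2 × 2 contingency table (e.g. treatment vs. control against recovered vs. not recovered):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table with cells a, b (row 1) and c, d (row 2):
    (a/b) / (c/d), which simplifies to (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical counts: 40 of 50 treated recovered vs. 25 of 50 controls
print(odds_ratio(40, 10, 25, 25))  # → 4.0
```

An odds ratio of 4.0 means the odds of recovery in the first group are four times those in the second group; an odds ratio of 1 would indicate no effect.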
It should be noted that with very large samples, even a very small effect can produce a large test statistic, resulting in the null hypothesis being rejected. Although such an effect may be statistically “significant”, it may not be very “large”. The effect size has the advantage of not depending on the sample size, and so can provide a standard measure of whether the size of an effect is “important”, not merely detectable.
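This point can be demonstrated numerically. In the sketch below (using a two-sample z test as a simple stand-in for whatever test is being run), the effect size is held fixed at a small d = 0.1 while the sample size per group grows; the p-value shrinks toward zero even though the effect itself never changes:

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

d = 0.1  # a small, fixed effect size (Cohen's d)
for n in (50, 500, 5000, 50000):
    # For a two-sample z test with n observations per group,
    # the test statistic is z = d * sqrt(n / 2)
    z = d * math.sqrt(n / 2)
    print(f"n = {n:6d}  z = {z:5.2f}  p = {two_sided_p_from_z(z):.4g}")
```

With n = 50 per group the result is far from significant, while with n = 50,000 the p-value is astronomically small; yet d = 0.1 throughout, so the practical importance of the effect is identical in every row.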