Unfortunately, for larger values of coefficient *b*, the standard error and the Wald statistic become inflated, which increases the probability that *b* is viewed as not making a significant contribution to the model even when it does (i.e. a type II error).

To overcome this problem it is better to test on the basis of the log-likelihood statistic since

*χ*^{2} = 2(*LL*_{1} – *LL*_{0}) ~ *χ*^{2}(*df*)

where *df = k*_{1} – *k*_{0}, and where *LL*_{1} refers to the log-likelihood of the full model and *LL*_{0} refers to the log-likelihood of a model with fewer coefficients (especially the model with only the intercept *b*_{0} and no other coefficients). This is equivalent to

*χ*^{2} = –2 ln(*L*_{0}/*L*_{1})
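As a sketch (not from the article), the likelihood-ratio test can be computed in Python. The `likelihood_ratio_test` helper and the log-likelihood values below are hypothetical; for *df* = 1 the upper-tail chi-square probability satisfies P(χ²₁ > x) = erfc(√(x/2)), which lets us avoid any dependency beyond the standard library.

```python
import math

# Hypothetical helper (not from the article): likelihood-ratio test
# comparing a full model against a reduced model, df = 1 case.
def likelihood_ratio_test(ll_full, ll_reduced):
    """Return (chi2, p) for the LR test with df = 1.

    chi2 = 2 * (LL1 - LL0); for df = 1 the upper-tail probability
    satisfies P(chi2_1 > x) = erfc(sqrt(x / 2)).
    """
    chi2 = 2.0 * (ll_full - ll_reduced)
    p_value = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p_value

# Made-up log-likelihoods for illustration:
chi2, p = likelihood_ratio_test(ll_full=-120.5, ll_reduced=-135.0)
# chi2 = 29.0; p is far below alpha = .05
```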

**Observation**: For ordinary regression the coefficient of determination is given by

*R*^{2} = *SS*_{Reg}/*SS*_{T} = 1 – *SS*_{Res}/*SS*_{T}

Thus *R*^{2} measures the percentage of variance explained by the regression model. We need a similar statistic for logistic regression. We define the following three pseudo-*R*^{2} statistics for logistic regression.

**Definition 1**: **The log-linear ratio R^{2}** (aka **McFadden’s R^{2}**) is defined as follows:

*R*^{2} = 1 – *LL*_{1}/*LL*_{0}

where *LL*_{1} refers to the log-likelihood of the full model and *LL*_{0} refers to the log-likelihood of a model with fewer coefficients (especially the model with only the intercept *b*_{0} and no other coefficients).
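A minimal sketch of McFadden’s statistic, using hypothetical log-likelihood values (`mcfadden_r2` is an illustrative helper, not part of the article):

```python
# Illustrative helper (not from the article): McFadden's pseudo-R2.
def mcfadden_r2(ll_full, ll_null):
    # Log-likelihoods are negative, with |LL1| <= |LL0|,
    # so the ratio LL1/LL0 lies in [0, 1].
    return 1.0 - ll_full / ll_null

r2 = mcfadden_r2(-100.0, -150.0)  # hypothetical values; 1 - 100/150
```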

**Cox and Snell’s R^{2}** is defined as

*R*^{2} = 1 – (*L*_{0}/*L*_{1})^{2/n}

where *n* = the sample size.
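A sketch of the Cox and Snell statistic with hypothetical inputs. Since the raw likelihoods *L*_{0} and *L*_{1} are typically astronomically small, it is safer to work on the log scale, using (L₀/L₁)^{2/n} = exp(2(LL₀ – LL₁)/n):

```python
import math

# Illustrative helper (not from the article): Cox and Snell's pseudo-R2,
# computed on the log scale to avoid underflow of L0 and L1.
def cox_snell_r2(ll_full, ll_null, n):
    return 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)

r2 = cox_snell_r2(-100.0, -150.0, 200)  # hypothetical values
```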

**Nagelkerke’s R^{2}** is defined as

*R*^{2} = [1 – (*L*_{0}/*L*_{1})^{2/n}] / [1 – *L*_{0}^{2/n}]

**Observation**: Since Cox and Snell’s *R*^{2} cannot achieve a value of 1, Nagelkerke’s *R*^{2} was developed to have properties more similar to the *R*^{2} statistic used in ordinary regression.
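The same approach gives a sketch of Nagelkerke’s statistic (an illustrative helper with hypothetical inputs): Cox and Snell’s *R*^{2} divided by its maximum possible value 1 – *L*_{0}^{2/n}, again evaluated on the log scale.

```python
import math

# Illustrative helper (not from the article): Nagelkerke's pseudo-R2,
# i.e. Cox and Snell's R2 rescaled by its maximum 1 - L0^(2/n).
def nagelkerke_r2(ll_full, ll_null, n):
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)
    max_cox_snell = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_cox_snell

r2 = nagelkerke_r2(-100.0, -150.0, 200)  # hypothetical values
```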

**Observation**: The initial value *L*_{0} of *L*, i.e. where we only include the intercept value *b*_{0}, is given by

*L*_{0} = (*n*_{0}/*n*)^{n₀} (*n*_{1}/*n*)^{n₁}

where *n*_{0} = number of observations with value 0, *n*_{1} = number of observations with value 1 and *n* = *n*_{0} + *n*_{1}.
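This formula is also easiest to evaluate on the log scale, since taking logs gives *LL*_{0} = *n*_{0} ln(*n*_{0}/*n*) + *n*_{1} ln(*n*_{1}/*n*). A sketch (the helper name is mine):

```python
import math

# Sketch (helper name is mine): the intercept-only likelihood on the
# log scale, LL0 = n0*ln(n0/n) + n1*ln(n1/n).
def null_log_likelihood(n0, n1):
    n = n0 + n1
    return n0 * math.log(n0 / n) + n1 * math.log(n1 / n)

ll0 = null_log_likelihood(10, 10)  # balanced sample: 20 * ln(1/2)
```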

As described above, the likelihood-ratio test statistic equals:

*χ*^{2} = –2 ln(*L*_{0}/*L*_{1}) = 2(*LL*_{1} – *LL*_{0})

where *L*_{1} is the maximized value of the likelihood function for the full model, while *L*_{0} is the maximized value of the likelihood function for the reduced model. The test statistic has a chi-square distribution with *df = k*_{1} – *k*_{0}, i.e. the number of parameters in the full model minus the number of parameters in the reduced model.

**Example 1**: Determine whether there is a significant difference in survival rate between the different values of rem in Example 1 of Basic Concepts of Logistic Regression. Also calculate the various pseudo-*R*^{2} statistics.

We are essentially comparing the logistic regression model with coefficient *b* to the model without coefficient *b*. We begin by calculating *L*_{1} (the full model with *b*) and *L*_{0} (the reduced model without *b*). Here *L*_{1} is found in cell M16 or T6 of Figure 6 of Finding Logistic Coefficients using Solver.

We now use the following test:

*χ*^{2} = 2(*LL*_{1} – *LL*_{0}) = 280.246

where *df* = 1. Since p-value = CHIDIST(280.246,1) = 6.7E-63 < .05 = *α*, we conclude that differences in rems yield a significant difference in survival.
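As a cross-check, the p-value reported by Excel’s CHIDIST can be reproduced (approximately) with the Python standard library, using the *df* = 1 identity P(χ²₁ > x) = erfc(√(x/2)):

```python
import math

# CHIDIST(280.246, 1) is the upper-tail chi-square probability; for
# df = 1 this equals erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(280.246 / 2.0))  # on the order of 1e-63
```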

The pseudo-*R*^{2} statistics are as follows:

All these values are reported by the Logistic Regression data analysis tool (see range S5:T16 of Figure 6 of Finding Logistic Coefficients using Solver).

Given your Figure 6 output, are the following statements a correct interpretation?

The results of the likelihood-ratio test suggest there was a statistically significant relationship between the input variable and the outcome variable at the 0.05 level of significance (chi-sq(1, N = 760) = 280.2421, p = 6.65E-63).

The odds ratio of the input was .9928 (= exp(-0.00722)) with a 95% confidence interval = (.9917, .9939). This indicated that for every unit … increase/decrease in the input variable the odds of the output variable increased/decreased by a factor of 0.9928.
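For what it’s worth, the arithmetic in this comment can be checked in Python. The coefficient is the one quoted above; the standard error below is a hypothetical value back-derived from the quoted confidence interval, not a figure given in the comment:

```python
import math

b = -0.00722   # coefficient quoted in the comment
se = 0.000565  # hypothetical standard error, back-derived from the
               # quoted interval (.9917, .9939); not given in the comment

odds_ratio = math.exp(b)                 # about .9928
ci_lower = math.exp(b - 1.96 * se)       # Wald interval on the log scale,
ci_upper = math.exp(b + 1.96 * se)       # exponentiated back to odds
```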

My understanding of your data set is weak so I’m not sure how to interpret that.

My data is pretest score and the output is pass/fail for the class. The logistic regression ran nicely and my model is significant.

Amy,

Yes, this seems correct.

Charles

Hi Charles,

Is there any post where the Binary logistic regression output has been interpreted. As in what does the output mean and what conclusion actions can be derived from the same.

Shri

Hi Charles,

The R-squared in linear regression is defined like so:

R^{2} = (var(Y) – var(err))/var(Y) = 1 – var(err)/var(Y)

where var(err) is derived from the absolute difference between Y and Yhat.

Why can’t we apply this definition to logistic regression where Y is the observed probability and Yhat is the estimated probability?

Wytek,

Sorry, but I have not tried to evaluate this version of R-square for logistic regression. From what I can see no one uses it. Instead they use pseudo-R-square statistics, some of which are described on my website.

Charles

Great site – very helpful.

One typo:

CHITEST(280.246,1) = 6.7E-63 => CHIDIST(280.246,1) = 6.7E-63

Mike,

Thanks for catching this. I have now made the correction.

Charles