Analysis of Discrete Data #6: Logistic Regression

Posted on September 27, 2014


(Penn State STAT 504) — Thus far our focus has been on describing interactions or associations between two or three categorical variables, mostly via single summary statistics and significance testing. From this lesson on, we will focus on modeling. Models can handle more complicated situations and analyze the simultaneous effects of multiple variables, including mixtures of categorical and continuous variables. In Lesson 6 and Lesson 7 [1], we study binary logistic regression, which we will see is an example of a generalized linear model [2].

Binary logistic regression is a special type of regression in which a binary response variable is related to a set of explanatory variables, which can be discrete and/or continuous. The important point to note is that in linear regression the expected value of the response is modeled as a combination of the values taken by the predictors, whereas in logistic regression the probability (or odds) of the response taking a particular value is modeled as a combination of the values taken by the predictors. Like regression (and unlike the log-linear models [3] that we will see later), we make an explicit distinction between a response variable and one or more predictor (explanatory) variables. We begin with two-way tables, then progress to three-way tables, where all explanatory variables are categorical. Then we introduce binary logistic regression with continuous predictors as well. In the last part we will focus on model diagnostics and model selection.

Logistic regression is applicable, for example, if:

  • we want to model the probabilities of a response variable as a function of some explanatory variables, e.g. “success” of admission as a function of gender.
  • we want to perform descriptive discriminant analyses such as describing the differences between individuals in separate groups as a function of explanatory variables, e.g. students admitted and rejected as a function of gender
  • we want to predict probabilities that individuals fall into two categories of the binary response as a function of some explanatory variables, e.g. what is the probability that a student is admitted given she is a female
  • we want to classify individuals into two categories based on explanatory variables, e.g. classify new students into “admitted” or “rejected” group depending on their gender.

Key Concepts

  • Generalized Linear Models
  • Binary Logistic Regression for 2 × 2 and 2 × J tables
  • Binary Logistic Regression for 2 × I × J tables and k-way tables
  • Model Fit and Parameter Estimation & Interpretation
  • Link to test of independence and log-linear model of independence
  • Multiple Logistic Regression with categorical explanatory variables
  • Multiple Binary Logistic Regression with a combination of categorical and continuous predictors
  • Model diagnostics

Objectives

  • Understand the basic ideas behind modeling categorical data with binary logistic regression.
  • Understand how to fit the model and interpret the parameter estimates, especially in terms of odds and odds ratios.
  • Understand the basic ideas behind modeling binary response as a function of two or more categorical explanatory variables.
  • Understand the basic ideas behind modeling binary response as a function of continuous and categorical explanatory variables.

Useful Links

Readings

  • Agresti (2007) Ch 3, Sec 3.1-3.2, Ch. 4, Ch. 5
  • Agresti (2013) Ch 4, Sec 4.1-4.2, Ch. 5, Ch. 6; more advanced Ch 4, Sec 4.4-4.7, and Ch. 7

To complete this lesson you should:

  • Read the online course material
  • Read suggested textbook pages
  • Complete the Discussion questions and exercises placed throughout the online Lesson 6 material

6.1 – Introduction to Generalized Linear Models

Thus far our focus has been on describing interactions or associations between two or three categorical variables, mostly via single summary statistics and significance testing. Models can handle more complicated situations and analyze the simultaneous effects of multiple variables, including mixtures of categorical and continuous variables. For example, the Breslow-Day statistic only works for 2 × 2 × K tables, while log-linear models allow us to test for homogeneous associations in I × J × K and higher-dimensional tables. We will focus on a special class of models known as generalized linear models (GLIMs, or GLMs in Agresti).

The structural form of the model describes the patterns of interactions and associations. The model parameters provide measures of strength of associations. In models, the focus is on estimating the model parameters. The basic inference tools (e.g., point estimation, hypothesis testing, and confidence intervals) will be applied to these parameters. When discussing models, we will keep in mind:

  • Objective
  • Model structure (e.g. variables, formula, equation)
  • Model assumptions
  • Parameter estimates and interpretation
  • Model fit (e.g. goodness-of-fit tests and statistics)
  • Model selection

For example, recall a simple linear regression model

  • Objective: model the expected value of a continuous variable, Y, as a linear function of the continuous predictor, X, E(Yi) = β0 + β1xi
  • Model structure: Yi = β0 + β1xi + ei
  • Model assumptions: Y is normally distributed, the errors are normally distributed and independent, ei ~ N(0, σ2), X is fixed, and the variance σ2 is constant.
  • Parameter estimates and interpretation: β^0 is the estimate of the intercept β0, and β^1 is the estimate of the slope β1, etc. Do you recall the interpretation of the intercept and the slope?
  • Model fit: R2, residual analysis, F-statistic
  • Model selection: From a plethora of possible predictors, which variables to include?

For a review, if you wish, see a handout labeled LinRegExample.doc [8] on modeling average water usage given the amount of bread production, e.g., estimated water usage is positively related to bread production:

Water = 2273 + 0.0799 Production
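As a rough illustration, a fit of this form could be reproduced in R with lm(); the data frame and values below are made up for the sketch and are not the handout data.

# Hypothetical data standing in for the water/bread handout
plant <- data.frame(production = c(90, 110, 130, 150),
                    water = c(2280, 2282, 2283, 2285))
fit <- lm(water ~ production, data = plant)
coef(fit)   # intercept and slope, analogous to Water = 2273 + 0.0799 Production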

Generalized Linear Models (GLMs)

First, let’s clear up some potential misunderstandings about terminology.  The term general linear model (GLM) usually refers to conventional linear regression models for a continuous response variable given continuous and/or categorical predictors. It includes multiple linear regression, as well as ANOVA and ANCOVA (with fixed effects only). The form is y_i \sim N(x_i^T\beta, \sigma^2), where x_i contains known covariates and \beta contains the coefficients to be estimated. These models are fit by least squares and weighted least squares using, for example, SAS Proc GLM or the R functions lsfit() (older, uses matrices) and lm() (newer, uses data frames).

The term generalized linear model (GLIM or GLM) refers to a larger class of models popularized by McCullagh and Nelder (1983; 2nd edition 1989). In these models, the response variable y_i is assumed to follow an exponential family distribution with mean \mu_i, which is assumed to be some (often nonlinear) function of x_i^T\beta. Some would call these “nonlinear” because \mu_i is often a nonlinear function of the covariates, but McCullagh and Nelder consider them to be linear, because the covariates affect the distribution of y_i only through the linear combination x_i^T\beta. The first widely used software package for fitting these models was called GLIM. Because of this program, “GLIM” became a well-accepted abbreviation for generalized linear models, as opposed to “GLM”, which is often used for general linear models. Today, GLIMs are fit by many packages, including SAS Proc Genmod and the R function glm(). Notice, however, that Agresti uses GLM instead of the GLIM shorthand, and we will use GLM.

The generalized linear models (GLMs) are a broad class of models that include linear regression, ANOVA, Poisson regression, log-linear models etc. The table below provides a good summary of GLMs following Agresti (ch. 4, 2013):

Model                  Random        Link                Systematic
Linear Regression      Normal        Identity            Continuous
ANOVA                  Normal        Identity            Categorical
ANCOVA                 Normal        Identity            Mixed
Logistic Regression    Binomial      Logit               Mixed
Loglinear              Poisson       Log                 Categorical
Poisson Regression     Poisson       Log                 Mixed
Multinomial response   Multinomial   Generalized Logit   Mixed

There are three components to any GLM:

  • Random Component – refers to the probability distribution of the response variable (Y); e.g. normal distribution for Y in the linear regression, or binomial distribution for Y in the binary logistic regression.  Also called a noise model or error model.  How is random error added to the prediction that comes out of the link function?
  • Systematic Component – specifies the explanatory variables (X1, X2, … Xk) in the model, more specifically their linear combination in creating the so called linear predictor; e.g., β0 + β1x1 + β2x2 as we have seen in a linear regression, or as we will see in a logistic regression in this lesson.
  • Link Function, η or g(μ) – specifies the link between random and systematic components. It says how the expected value of the response relates to the linear predictor of explanatory variables; e.g., η = g(E(Yi)) = E(Yi) for linear regression, or  η = logit(π) for logistic regression.

Assumptions:

  • The data Y1, Y2, …, Yn are independently distributed, i.e., cases are independent.
  • The dependent variable Yi does NOT need to be normally distributed, but it typically assumes a distribution from an exponential family (e.g. binomial, Poisson, multinomial, normal,…)
  • GLM does NOT assume a linear relationship between the dependent variable and the independent variables, but it does assume linear relationship between the transformed response in terms of the link function and the explanatory variables; e.g., for binary logistic regression logit(π) = β0 + βX.
  • Independent (explanatory) variables can be even the power terms or some other nonlinear transformations of the original independent variables.
  • The homogeneity of variance does NOT need to be satisfied. In fact, it is not even possible in many cases given the model structure, and overdispersion (when the observed variance is larger than what the model assumes) may be present.
  • Errors need to be independent but NOT normally distributed.
  • It uses maximum likelihood estimation (MLE) rather than ordinary least squares (OLS) to estimate the parameters, and thus relies on large-sample approximations.
  • Goodness-of-fit measures rely on sufficiently large samples, where a heuristic rule is that not more than 20% of the expected cell counts are less than 5.

For a more detailed discussion refer to Agresti(2007), Ch. 3, Agresti (2013), Ch.4, and/or McCullagh & Nelder (1989).
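In R, the random component and the link are chosen together through the family argument of glm(), while the model formula supplies the systematic component. A minimal sketch with made-up grouped data (not from the course examples):

# y successes out of n trials at two levels of a factor x (hypothetical numbers)
y <- c(12, 25); n <- c(50, 50); x <- factor(c("A", "B"))
fit <- glm(cbind(y, n - y) ~ x, family = binomial(link = "logit"))
# Changing only the family swaps the random component and link, e.g.
# family = poisson(link = "log") for counts, or gaussian(link = "identity") for ordinary regression.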

Following are examples of GLM components for models that we are already familiar, such as linear regression, and for some of the models that we will cover in this class, such as logistic regression and log-linear models.

Simple Linear Regression models how the mean (expected value) of a continuous response variable depends on a set of explanatory variables, where the index i stands for each data point:

Y_i=\beta_0+\beta x_i+ \epsilon_i

or

E(Y_i)=\beta_0+\beta x_i

  • Random component: Y is a response variable and has a normal distribution, and generally we assume errors, ei ~ N(0, σ2).
  • Systematic component: X is the explanatory variable (can be continuous or discrete) and is linear in the parameters β0  + βxi . Notice that with a multiple linear regression where we have more than one explanatory variable, e.g., (X1, X2, … Xk), we would have a linear combination of these Xs in terms of regression parameters β’s, but the explanatory variables themselves could be transformed, e.g., X2, or log(X).
  • Link function: Identity Link, η = g(E(Yi)) = E(Yi) — identity because we are modeling the mean directly; this is the simplest link function.

Binary Logistic Regression models how a binary response variable Y depends on a set of k explanatory variables, X = (X1, X2, … Xk).

\text{logit}(\pi)=\text{log} \left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 x_1+\ldots+\beta_k x_k

which models the log odds of probability of “success” as a function of explanatory variables.

  • Random component: The distribution of Y is assumed to be Binomial(n,π), where π is a probability of “success”. 
  • Systematic component: the X’s are explanatory variables (can be continuous, discrete, or both) and are linear in the parameters, e.g., β0 + β1x1 + … + βkxk. Again, transformations of the X’s themselves are allowed, as in linear regression; this holds for any GLM.
  • Link function: Logit link:

\eta=\text{logit}(\pi)=\text{log} \left(\dfrac{\pi}{1-\pi}\right)

More generally, the logit link models the log odds of the mean, and the mean here is π. Binary logistic regression models are also known as logit models when the predictors are all categorical.
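In R, the logit link and its inverse are available as qlogis() and plogis(), which is a convenient way to move between probabilities, odds, and log-odds:

p <- 0.2
p / (1 - p)          # odds
qlogis(p)            # logit: log(p / (1 - p)), the log odds
plogis(qlogis(p))    # inverse logit recovers the probability, 0.2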

Log-linear Model models the expected cell counts as a function of levels of categorical variables, e.g., for a two-way table the saturated model

\text{log}(\mu_{ij})=\lambda+\lambda^A_i+\lambda^B_j+\lambda^{AB}_{ij}

where μij = E(nij) as before are the expected cell counts (the mean in each cell of the two-way table), A and B represent the two categorical variables, the λ’s are model parameters, and we are modeling the natural log of the expected counts.

  • Random component: The distribution of counts, which are the responses, is Poisson
  • Systematic component: the X’s are discrete variables used in the cross-classification, and are linear in the parameters \lambda+\lambda^{X_1}_i+\lambda^{X_2}_j+\ldots
  • Link Function: Log link, η = log(μ) — log because we are modeling the log of the cell means.

The log-linear models are more general than logit models, and some logit models are equivalent to certain log-linear models [3]. Log-linear model is also equivalent to Poisson regression model when all explanatory variables are discrete. For additional details see Agresti(2007), Sec. 3.3, Agresti (2013), Section 4.3 (for counts), Section 9.2 (for rates), and Section 13.2 (for random effects).
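As a sketch, the log-linear model of independence for a two-way table can be fit in R as a Poisson regression on the cell counts; the small table below is made up for illustration.

# Hypothetical 2 x 2 table in long format: one row per cell
counts <- c(30, 20, 25, 25)
A <- factor(c("a1", "a1", "a2", "a2"))
B <- factor(c("b1", "b2", "b1", "b2"))
indep <- glm(counts ~ A + B, family = poisson(link = "log"))  # log-linear model of independence
# Adding the A:B term, glm(counts ~ A * B, family = poisson), gives the saturated model.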

Summary of advantages of GLMs over traditional (OLS) regression

  • We do not need to transform the response Y to have a normal distribution
  • The choice of link is separate from the choice of random component thus we have more flexibility in modeling
  • If the link produces additive effects, then we do not need constant variance.
  • The models are fitted via Maximum Likelihood estimation; thus optimal properties of the estimators.
  • All the inference tools and model checking that we will discuss for log-linear and logistic regression models apply for other GLMs too; e.g., Wald and Likelihood ratio tests, Deviance, Residuals, Confidence intervals, Overdispersion.
  • There is often one procedure in a software package to capture all the models listed above, e.g. PROC GENMOD in SAS or glm() in R, etc… with options to vary the three components.

But there are some limitations of GLMs too, such as,

  • Linear function, e.g. can have only a linear predictor in the systematic component
  • Responses must be independent

There are ways around these restrictions; e.g., consider analysis for matched data, or use NLMIXED [8] in SAS, or {nlme} [9] package in R, or consider other models, other software packages.

6.2 – Binary Logistic Regression with a Single Categorical Predictor

Key Concepts

  • Binary Logistic Regression for 2 × J tables
  • Model Fit and Parameter Estimation & Interpretation
  • Link to test of independence

Objectives

  • Understand the basic ideas behind modeling categorical data with binary logistic regression.
  • Understand how to fit the model and interpret the parameter estimates, especially in terms of odds and odd ratios.

Overview

Binary logistic regression estimates the probability that a characteristic is present (e.g. the probability of “success”) given the values of explanatory variables, in this case a single categorical variable; π = Pr(Y = 1|X = x). Suppose a physician is interested in estimating the proportion of diabetic persons in a population. Naturally she knows that not all sections of the population have the same probability of “success”, i.e. of being diabetic. Older people, people with hypertension, and individuals with a family history of diabetes are more likely to be diabetic. Consider the predictor variable X to be any of these risk factors that might contribute to the disease. The probability of success will depend on the level of the risk factor.

Variables:

    • Let Y be a binary response variable

Yi = 1 if the trait is present in observation (person, unit, etc…) i
Yi = 0 if the trait is NOT present in observation i

  • X = (X1, X2, …, Xk) be a set of explanatory variables which can be discrete, continuous, or a combination. xi is the observed value of the explanatory variables for observation i. In this section of the notes, we focus on a single variable X.

Model:

\pi_i=Pr(Y_i=1|X_i=x_i)=\dfrac{\text{exp}(\beta_0+\beta_1 x_i)}{1+\text{exp}(\beta_0+\beta_1 x_i)}

or, equivalently,

\text{logit}(\pi_i)=\text{log}\left(\dfrac{\pi_i}{1-\pi_i}\right)=\beta_0+\beta_1 x_i

Assumptions:

  • The data Y1, Y2, …, Yn are independently distributed, i.e., cases are independent.
  • Distribution of Yi is Bin(ni, πi), i.e., binary logistic regression model assumes binomial distribution of the response. The dependent variable does NOT need to be normally distributed, but it typically assumes a distribution from an exponential family (e.g. binomial, Poisson, multinomial, normal,…)
  • Does NOT assume a linear relationship between the dependent variable and the independent variables, but it does assume linear relationship between the logit of the response and the explanatory variables; logit(π) = β0 + βX.
  • Independent (explanatory) variables can be even the power terms or some other nonlinear transformations of the original independent variables.
  • The homogeneity of variance does NOT need to be satisfied. In fact, it is not even possible in many cases given the model structure.
  • Errors need to be independent but NOT normally distributed.
  • It uses maximum likelihood estimation (MLE) rather than ordinary least squares (OLS) to estimate the parameters, and thus relies on large-sample approximations.
  • Goodness-of-fit measures rely on sufficiently large samples, where a heuristic rule is that not more than 20% of the expected cell counts are less than 5.

Model Fit:

  • Overall goodness-of-fit statistics of the model; we will consider:
    1. Pearson chi-square statistic, X2
    2. Deviance, G2 and Likelihood ratio test and statistic, ΔG2
    3. Hosmer-Lemeshow test and statistic
  • Residual analysis: Pearson, deviance, adjusted residuals, etc…
  • Overdispersion

Parameter Estimation:

The maximum likelihood estimator (MLE) for (β0, β1) is obtained by finding (\hat{\beta}_0,\hat{\beta}_1) that maximizes:

L(\beta_0,\beta_1)=\prod\limits_{i=1}^N \pi_i^{y_i}(1-\pi_i)^{n_i-y_i}=\prod\limits_{i=1}^N \dfrac{\text{exp}\{y_i(\beta_0+\beta_1 x_i)\}}{\{1+\text{exp}(\beta_0+\beta_1 x_i)\}^{n_i}}

In general, there are no closed-form solutions, so the ML estimates are obtained by using iterative algorithms such as Newton-Raphson (NR), or Iteratively re-weighted least squares (IRWLS). In Agresti (2013), see section 4.6.1 for GLMs, and for logistic regression, see sections 5.5.4-5.5.5.

Interpretation of Parameter Estimates:

  • exp(β0) = the odds that the characteristic is present in an observation i when Xi = 0, i.e., at baseline.
  • exp(β1) = for every unit increase in Xi1, the odds that the characteristic is present are multiplied by exp(β1). This is similar to simple linear regression, but instead of an additive change it is a multiplicative change in the odds. This is an odds ratio:

\dfrac{\text{exp}(\beta_0+\beta_1(x_{i1}+1))}{\text{exp}(\beta_0+\beta_1 x_{i1})}=\text{exp}(\beta_1)

In general, the logistic model stipulates that the effect of a covariate on the chance of “success” is linear on the log-odds scale, or multiplicative on the odds scale.

  • If βj > 0, then exp(βj) > 1, and the odds increase.
  • If βj < 0, then exp(βj) < 1, and the odds decrease.

Inference for Logistic Regression:

  • Confidence Intervals for parameters
  • Hypothesis testing
  • Distribution of probability estimates

_________________________

Example – Student Smoking

The table below classifies 5375 high school students according to the smoking behavior of the student (Z) and the smoking behavior of the student’s parents (Y ). We saw this example in Lesson 3 (Measuring Associations [9] in I × J tables, smokeindep.sas [10] and smokeindep.R [11]).

                         Student smokes?
How many parents smoke?  Yes (Z = 1)   No (Z = 2)
Both (Y = 1)                 400          1380
One (Y = 2)                  416          1823
Neither (Y = 3)              188          1168

The test for independence yields X2 = 37.6 and G2 = 38.4 with 2 df (p-values are essentially zero), so Y and Z are related. It is natural to think of Z as a response and Y as a predictor in this example. We might be interested in exploring the dependency of student’s smoking behavior on neither parent smoking versus at least one parent smoking. Thus we can combine or collapse the first two rows of our 3 × 2 table and look at a new 2 × 2 table:

                        Student smokes   Student does not smoke
1–2 parents smoke             816                 3203
Neither parent smokes         188                 1168

For the chi-square test of independence, this table has X2 = 27.7, G2 = 29.1, p-value ≈ 0, and estimated odds ratio \hat{\theta} = 1.58. Therefore, we estimate that a student is 58% more likely, on the odds scale, to smoke if he or she has at least one smoking parent than if neither parent smokes.

But what if:

  • we want to model the “risk” of student smoking as a function of parents’ smoking behavior.
  • we want to describe the differences between student smokers and nonsmokers as a function of parents’ smoking behavior via descriptive discriminant analyses.
  • we want to predict probabilities that individuals fall into two categories of the binary response as a function of some explanatory variables, e.g. what is the probability that a student is a smoker given that neither of his/her parents smokes.
  • we want to classify new students into the “smoking” or “nonsmoking” group depending on parents’ smoking behavior.
  • we want to develop a social network model, adjust for “bias”, analyze choice data, etc…

These are just some of the possibilities of logistic regression, which cannot be handled within a framework of goodness-of-fit only.

Consider the simplest case:

  • Yi binary response, and Xi binary explanatory variable
  • link to 2 × 2 tables and chi-square test of independence

Arrange the data in our running example like this,

                         yi     ni
1–2 parents smoke        816   4019
Neither parent smokes    188   1356

where yi is the number of children who smoke, ni is the number of children for a given level of parents’ smoking behavior, and πi = P(Yi = 1|xi) is the probability of a randomly chosen student i smoking given the parents’ smoking status. Here i takes values 1 and 2. Thus, we claim that all students who have at least one parent smoking have the same probability of “success”, and all students who have neither parent smoking have the same probability of “success”, though the success probabilities for the two groups will be different. Then the response variable Y has a binomial distribution:

y_i \sim Bin(n_i,\pi_i)

Explanatory variable X is a dummy variable such that

Xi = 0 if neither parent smokes,
Xi = 1 if at least one parent smokes.

Understanding the use of dummy variables is important in logistic regression.

Think about the following question:

Can you explain to someone what is meant by a “dummy variable”?

Then the logistic regression model can be expressed as:

\text{logit}(\pi_i)=\text{log}\dfrac{\pi_i}{1-\pi_i}=\beta_0+\beta_1 X_i, or

\pi_i=\dfrac{\text{exp}(\beta_0+\beta_1 x_i)}{1+\text{exp}(\beta_0+\beta_1 x_i)}

and it says that the log-odds of a randomly chosen student smoking is β0 when “smoking parents” = neither, and β0 + β1 when “smoking parents” = at least one.

6.2.2 – Fitting the Model in R

There are different ways to run logistic regression depending on the format of the data. As before, since there are many different options, you need to refer to the R help for details. Here are some general guidelines to keep in mind, with a simple example outlined in dataformats.R [17] where we created two binary random variables with n trials each, e.g., n = 100:

##create two binary vectors of length 100
x=sample(c(0,1),100, replace=T)
y=sample(c(0,1),100, replace=T)

If the data come in tabular form, i.e., the response pattern is given with counts (as seen in the previous example), then the data are grouped.

> ## create a 2 × 2 table with counts
> xytab=table(x,y)
> xytab
   y
x    0  1
  0 24 29
  1 26 21

glm(): Let y be the response variable capturing the number of events, i.e., the numbers of successes (y = 1) and failures (y = 0). We need to create a response object (e.g., count) that has the counts of both “success” and “failure” out of the n trials in its columns. Notice that the count table below could also be built from the number of successes y and a column computed as n − y.

> count=cbind(xytab[,2],xytab[,1])
> count
[,1] [,2]
0   29   24
1   21   26

We also need a categorical predictor variable.

> xfactor=factor(c("0","1"))
> xfactor
[1] 0 1
Levels: 0 1

We need to specify the response distribution and a link, which in this case is done by specifying family=binomial("logit").

> tmp3=glm(count~xfactor, family=binomial("logit"))
> tmp3

Call:  glm(formula = count ~ xfactor, family = binomial("logit"))

Coefficients:
(Intercept)     xfactor1
     0.1892      -0.4028

Degrees of Freedom: 1 Total (i.e. Null);  0 Residual
Null Deviance:        1.005
Residual Deviance: -4.441e-15     AIC: 12.72

If the data come in matrix form, i.e., a subjects × variables matrix with one line for each subject (like a database), then the data are ungrouped.

> xydata=cbind(x,y)
> xydata ## 100 rows, we are showing first 7
x y
[1,] 0 1
[2,] 0 1
[3,] 0 0
[4,] 0 1
[5,] 1 0
[6,] 0 1
[7,] 0 0
...

glm(): We need a binary response variable y and a predictor variable x, which in this case was also binary. We need to specify the response distribution and a link, which in this case is done by specifying family=binomial("logit").

> tmp1=glm(y~x, family=binomial("logit"))
> tmp1

Call:  glm(formula = y ~ x, family = binomial("logit"))

Coefficients:
(Intercept)            x
     0.1892      -0.4028

Degrees of Freedom: 99 Total (i.e. Null);  98 Residual
Null Deviance:        138.6
Residual Deviance: 137.6     AIC: 141.6

For more options see the course examples and homework. We will follow the R output through to explain the different parts of model fitting.  The outputs from SAS (or from many other software) will be essentially the same.

Example – Student Smoking

Let’s begin with the collapsed 2 × 2 table:

                        Student smokes   Student does not smoke
1–2 parents smoke             816                 3203
Neither parent smokes         188                 1168

In R, we can use the glm() function and specify family = binomial(link = logit). See the files smoke.R [18] and the output generated in smoke.out [19].

Here is the R output for the 2 × 2 table that we will use for logistic regression:

[R output: grouped data table with columns yes and no]

Please note: the table above is different from the one given by the SAS program. In SAS we input y and n as data columns; here we just input the data columns as yes and no.
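The exact code is in smoke.R [18]; a minimal sketch that reproduces this fit from the 2 × 2 table (the object names follow the output below, the rest is an assumption) looks roughly like this:

# Grouped data from the collapsed 2 x 2 table
yes <- c(816, 188)              # students who smoke
no  <- c(3203, 1168)            # students who do not smoke
parentsmoke <- factor(c(1, 0))  # 1 = at least one parent smokes, 0 = neither
response <- cbind(yes, no)
smoke.logistic <- glm(response ~ parentsmoke, family = binomial(link = logit))
summary(smoke.logistic)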

summary(smoke.logistic) gives the following model information:


Model Information & model convergence status

Call:
glm(formula = response ~ parentsmoke, family = binomial(link = logit))

(Dispersion parameter for binomial family taken to be 1)

Number of Fisher Scoring iterations: 2

These sections tell us which dataset we are using, the labels of the response and explanatory variables, what type of model we are fitting (e.g. binary logit), and the type of scoring algorithm used for parameter estimation. Fisher scoring is a variant of the Newton-Raphson method for ML estimation; in logistic regression the two are equivalent. Since we are using an iterative procedure to fit the model, that is, to find the ML estimates, we need some indication of whether the algorithm converged.

Overdispersion is an important concept with discrete data. In the context of logistic regression, overdispersion occurs when the discrepancies between the observed responses yi and their predicted values \hat{\mu}_i=n_i\hat{\pi}_i are larger than what the binomial model would predict.

If yi ~ Bin(ni, πi), the mean is μi = ni πi and the variance is μi(ni − μi)/ni. Overdispersion means that the data show evidence that the variance of the response yi is greater than μi(ni − μi)/ni.

If overdispersion is present in a dataset, the estimated standard errors and test statistics for individual parameters and the overall goodness-of-fit will be distorted and adjustments should be made. We will look at this briefly later when we look into continuous predictors, but you can also now read Agresti (2013), Sec. 4.7, Agresti (2007), Sec. 4.4.4. and help(summary.glm())

Table of Coefficients:

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.82661    0.07858 -23.244  < 2e-16 ***
parentsmoke1  0.45918    0.08782   5.228 1.71e-07 ***

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

This information gives us the fitted model:

\text{logit}(\hat{\pi}_i)=\text{log}\left(\dfrac{\hat{\pi}_i}{1-\hat{\pi}_i}\right)=\hat{\beta}_0+\hat{\beta}_1 X_i=-1.827+0.459\ \text{parentsmoke1}

where parentsmoke1 is a dummy variable (e.g. a design variable) that takes the value 1 if at least one parent smokes and 0 if neither parent smokes.

X1 = 1 (“parentsmoke1”) if parent smoking = at least one,
X1 = 0 (“parentsmoke0”) if parent smoking = neither

Parent smoking = neither is the baseline. We are modeling here the probability of the child smoking.

The estimated β^0 = −1.827 with a standard error of 0.078 is significant; it says that the log-odds of a child smoking versus not smoking when neither parent smokes (the baseline level) is −1.827, and this is statistically significant.

The estimated β^1 = 0.459 with a standard error of 0.088 is significant; it says that the log-odds ratio of a child smoking versus not smoking, comparing at least one parent smoking with neither parent smoking (the baseline level), is 0.459, and this is statistically significant. exp(0.459) = 1.58 is the estimated odds ratio; compare this with our analysis in Section 6.2.

Testing Individual Parameters

Testing the hypothesis that the probability of the characteristic depends on the value of the jth variable.

Testing H0 : βj = 0     versus     H1 : βj ≠ 0.

The Wald chi-squared statistics z^2=(\hat{\beta}_j/\text{SE}(\hat{\beta}_j))^2 for these tests are displayed along with the estimated coefficients in the “Coefficients” section.

The standard error for \hat{\beta}_1, 0.0878, agrees exactly with the standard error that you can calculate from the corresponding 2 × 2 table. The values indicate a significant relationship between the logit of the odds of student smoking and the parents’ smoking behavior; that is, parents’ smoking behavior is a significant factor in the model. We could compare z2 to a chi-square with one degree of freedom; the p-value would then be the area to the right of z2 under the \chi^2_1 density curve.

A value of z2 (Wald statistic) bigger than 3.84 indicates that we can reject the null hypothesis βj = 0 at the .05-level.

\beta_1:\left(\dfrac{0.4592}{0.0878}\right)^2=27.354

Confidence Intervals of Individual Parameters:

An approximate (1 − α) × 100% confidence interval for βj is given by

\hat{\beta}_j \pm z_{(1-\alpha/2)} \times SE(\hat{\beta}_j)

For example, a 95% confidence interval for β1

0.4592 ± 1.96 × 0.0878 = (0.287112, 0.63128)

where 0.0878 is the “Standard Error” from the “Coefficients” section. Then, the 95% confidence interval for the odds ratio of a child smoking if one parent is smoking in comparison to neither smoking is

(exp(0.287112), exp(0.63128)) = (1.3325, 1.880)
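The same interval can be computed in R from the fitted object; a short sketch, assuming smoke.logistic is the fit from above:

b  <- coef(smoke.logistic)["parentsmoke1"]
se <- sqrt(diag(vcov(smoke.logistic)))["parentsmoke1"]
exp(b + c(-1, 1) * qnorm(0.975) * se)   # Wald CI for the odds ratio, about (1.33, 1.88)
# confint.default(smoke.logistic) returns the same Wald intervals for all coefficients.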

Thus there is a strong association between parent’s and children smoking behavior. But does this model fit?

Overall goodness-of-fit testing

Test: H0 : current model vs. HA : saturated model

Residual deviance: -3.7170e-13  on 0  degrees of freedom
AIC: 19.242

The goodness-of-fit statistic, the deviance G2, from this model is zero, because the model is saturated. The fit of the intercept-only model is given by

Test: H0 : intercept only model vs. HA : saturated model

Null deviance:  2.9121e+01  on 1  degrees of freedom

Suppose that we fit the intercept-only model. This is accomplished by removing the predictor from the model statement, like this:

glm(response~1, family=binomial(link=logit))

The goodness-of-fit statistics are shown below.

Null deviance: 29.121  on 1  degrees of freedom
Residual deviance: 29.121  on 1  degrees of freedom
AIC: 46.362

The deviance G2 = 29.1207 is precisely equal to the G2 for testing independence in this 2 × 2 table. The intercept-only (null) logistic regression model states that the student’s smoking is unrelated to the parents’ smoking (i.e., it assumes independence, or an odds ratio of 1). But clearly, based on the values of the calculated statistics, this model (i.e., independence) does NOT fit well.

Analysis of deviance table [20]: In R, we can test factors’ effects with the anova function to give an analysis of deviance table. We only include one factor in this model. So R dropped this factor (parentsmoke) and fit the intercept-only model to get the same statistics as above, i.e., the deviance G2 = 29.121.

[R output: analysis of deviance table]
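A sketch of how such a table is produced, assuming smoke.logistic is the fitted model from above:

anova(smoke.logistic, test = "Chisq")
# The parentsmoke line shows the drop in deviance (29.121 on 1 df here)
# when the predictor is added to the intercept-only model.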

This example shows that analyzing a 2 × 2 table for association is equivalent to logistic regression with a single dummy variable. We can further compare these tests to the loglinear model of independence, e.g. see smokelog.R [21] in Lesson 10.

The goodness-of-fit statistics, X2 and G2, are defined as before in the tests of independence and loglinear models (e.g. compare observed and fitted values). For the χ2 approximation to work well, we need the ni's sufficiently large so that the expected values \hat{\mu}_i \geq 5 and n_i-\hat{\mu}_i \geq 5 for most of the rows i. We can afford to have about 20% of these values less than 5, but none of them should fall below 1.

With real data, we often find that the ni's are not big enough for an accurate test, and there is no single best solution to handle this; a reasonable approach will depend strongly on the data and the context of the problem.

6.2.3 – More on Goodness-of-Fit and Likelihood ratio tests

Suppose two alternative models are under consideration, where one model is simpler or more parsimonious than the other. More often than not, one of the models is the saturated model. Another common situation is to consider ‘nested’ models, where one model is obtained from the other by setting some of the parameters to zero. Suppose now we test

 H0: reduced model is true vs. HA: current model is true

Notice the difference in the null and alternative hypothesis from the section above. Here to test the null hypothesis that an arbitrary group of k coefficients from the model is set equal to zero (e.g. no relationship with the response), we need to fit two models:

  • the reduced model which omits the k predictors in question, and
  • the current model which includes them.

The likelihood-ratio statistic is

ΔG2 = (−2 log L from reduced model) − (−2 log L from current model)

and the degrees of freedom is k (the number of coefficients in question). The p-value is P(\chi^2_k \geq \Delta G^2)

To perform the test, we must look at the “Model Fit Statistics” section and examine the value of “−2 Log L” for “Intercept and Covariates.” Here, the reduced model is the “intercept-only” model (e.g. no predictors) and “intercept and covariates” is the current model we fitted. For our running example, this would be equivalent to testing “intercept-only” model vs. full (saturated) model (since we have only one covariate).


Larger values of ΔG2 (“−2 Log L” ) lead to small p-values, which provide evidence against the reduced model in favor of the current model; you can explore AIC (Akaike Information Criterion) and SC (Schwarz Criterion) on your own through SAS help files or see Lesson 5 for AIC.

For our example, ΔG2 = 5176.510 − 5147.390 = 29.1207 with df = 2 − 1 = 1. Notice that this matches the deviance we obtained above.
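In R, the same likelihood-ratio statistic can be computed from the deviances of the two fitted models; a sketch, assuming the grouped smoking data from earlier:

fit0 <- glm(response ~ 1, family = binomial(link = logit))            # reduced (intercept-only)
fit1 <- glm(response ~ parentsmoke, family = binomial(link = logit))  # current model
dG2 <- deviance(fit0) - deviance(fit1)                                # about 29.12
pchisq(dG2, df = df.residual(fit0) - df.residual(fit1), lower.tail = FALSE)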

Another way to calculate the test statistic is

ΔG2 = (G2 from reduced model) − (G2 from current model),

where the G2‘s are the overall goodness-of-fit statistics.

This value of -2 Log L is useful to compare two nested models which differ by an arbitrary set of coefficients.

Also notice that the ΔG2 we calculated for this example equals to

Likelihood Ratio     29.1207    1    <.0001

from “Testing Global Hypothesis: BETA=0” section (the next part of the output, see below).

Testing the Joint Significance of All Predictors.

Testing the null hypothesis that the set of coefficients is simultaneously zero. For example,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\ldots+\beta_k X_k

test H0 : β1 = β2 = … = 0 versus the alternative that at least one of the coefficients β1, . . . , βk is not zero.

This is like the overall F−test in linear regression. In other words, this is testing the null hypothesis that an intercept-only model is correct,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0

versus the alternative that the current model is correct

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\ldots+\beta_k X_k

Residual deviance is the difference in G2 = −2 log L between a saturated model and the fitted model. A high residual deviance shows that the fitted model is not adequate. The null deviance is the difference in G2 = −2 log L between a saturated model and the intercept-only model. A high null deviance shows that the intercept-only model does not fit.

[R output: null and residual deviance]

In our 2 × 2 table smoking example, the residual deviance is almost 0 because the model we built is the saturated model, and notice that its degrees of freedom are 0, too. The null deviance is equivalent to the likelihood-ratio line of the “Testing Global Null Hypothesis: BETA=0” section in the SAS output.

For our example, Null deviance = 29.1207 with df = 1. Notice that this matches the deviance we obtained above.

The Hosmer-Lemeshow Statistic

An alternative statistic for measuring overall goodness-of-fit is the Hosmer-Lemeshow statistic.

Note: we use the one-predictor model here, that is, the model with the indicator for at least one parent smoking.

This is a Pearson-like χ2 statistic that is computed after the data are grouped by similar predicted probabilities. It is more useful when there is more than one predictor and/or continuous predictors in the model. We will see more on this later, and in your homework.

H0 : the current model fits well
HA : the current model does not fit well

To calculate this statistic:

  1. Group the observations according to model-predicted probabilities ( π^i)
  2. The number of groups is typically determined such that there is roughly an equal number of observations per group
  3. The Hosmer-Lemeshow (HL) statistic, a Pearson-like chi-square statistic, is computed on the grouped data but does NOT have a limiting chi-square distribution, because the observations in the groups are not from identical trials. Simulations have shown that this statistic can be approximated by a chi-squared distribution with df = g − 2, where g is the number of groups.

Warning about Hosmer-Lemeshow goodness-of-fit test:

  • It is a conservative statistic, i.e. its value is smaller than what it ought to be, and therefore the probability of rejecting the null hypothesis is smaller.
  • It has low power in detecting certain types of lack of fit, such as nonlinearity in an explanatory variable
  • It is highly dependent on how the observations are grouped
  • If too few groups are used (e.g. 5 or less) it almost always indicates that the model fits the data; this means that it’s usually not a good measure if you only have one or two categorical predictor variables, and it’s best used for continuous predictors.

In the model statement, the option lackfit tells SAS to compute HL statistics and print the partitioning.

For our example, because we have a small number of groups (here, 2), this statistic gives a perfect fit (HL = 0, p-value = 1).
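If you want to compute the Hosmer-Lemeshow statistic in R, one option (an assumption here, not part of the course code) is hoslem.test() from the ResourceSelection package, which expects an ungrouped 0/1 response and the fitted probabilities:

# Sketch only: 'fit' is a logistic regression fitted to ungrouped (one row per subject) data
library(ResourceSelection)              # assumed to be installed
hoslem.test(fit$y, fitted(fit), g = 10) # HL statistic with g = 10 groups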

______

Instead of deriving the diagnostics we will look at them from a purely applied viewpoint. Recall the definitions and introductions to the regression residuals and Pearson and Deviance residuals.

Residuals

The Pearson residuals are defined as

r_i=\dfrac{y_i-\hat{\mu}_i}{\sqrt{\hat{V}(\hat{\mu}_i)}}=\dfrac{y_i-n_i\hat{\pi}_i}{\sqrt{n_i\hat{\pi}_i(1-\hat{\pi}_i)}}

The contribution of the ith row to the Pearson statistic X2 is

\dfrac{(y_i-\hat{\mu}_i)^2}{\hat{\mu}_i}+\dfrac{((n_i-y_i)-(n_i-\hat{\mu}_i))^2}{n_i-\hat{\mu}_i}=r^2_i

and the Pearson goodness-of fit statistic is

X^2=\sum\limits_{i=1}^N r^2_i

which we would compare to a \chi^2_{N-p} distribution. The deviance test statistic is

G^2=2\sum\limits_{i=1}^N \left\{ y_i\text{log}\left(\dfrac{y_i}{\hat{\mu}_i}\right)+(n_i-y_i)\text{log}\left(\dfrac{n_i-y_i}{n_i-\hat{\mu}_i}\right)\right\}

We will note how these quantities are derived through appropriate software and how they provide useful information to understand and interpret the models.  For an example see the R analysis in the next section.
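In R these quantities can be pulled directly from a fitted glm object; a sketch, assuming fit is a grouped-data logistic regression:

pres <- residuals(fit, type = "pearson")
dres <- residuals(fit, type = "deviance")
X2 <- sum(pres^2)                                       # Pearson goodness-of-fit statistic
G2 <- sum(dres^2)                                       # the deviance, same as deviance(fit)
pchisq(X2, df = df.residual(fit), lower.tail = FALSE)   # compare to chi-square with N - p df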

6.2.4 – Explanatory Variable with Multiple Levels

The concepts discussed with binary predictors extend to predictors with multiple levels. In this lesson we consider,

  • Yi binary response, Xi discrete explanatory variable (with k = 3 levels), and
  • link to the analysis of 2 × 3 tables

but the basic ideas easily extend to any 2 × J table.

We begin by replicating the analysis of the original 3 × 2 table with logistic regression.

                         Student smokes?
How many parents smoke?  Yes (Z = 1)   No (Z = 2)
Both (Y = 1)                 400          1380
One (Y = 2)                  416          1823
Neither (Y = 3)              188          1168

First, we re-express the data in terms of yi = number of smoking students, and ni = number of students for the three groups based on parents behavior:

How many parents smoke?   yi    ni
Both                      400   1780
One                       416   2239
Neither                   188   1356

Then we decide on a baseline level for the explanatory variable X, and create k − 1 dummy indicators if X is a categorical variable with k levels.

For our example, let’s parent smoking = Neither be a baseline, and define a pair of dummy indicators (or design variables) that takes one of two values,

X1 = 1 if parent smoking = One ,
X1 = 0 if otherwise,

X2 = 1 if parent smoking = Both,
X2 = 0 if otherwise.

Let π be the probability of a student smoking, so that π/(1 − π) is the odds of smoking; notice that we dropped the index i denoting each individual student to simplify the notation. We are still modeling each student, or rather a randomly chosen student from the same classification cell. Then the model

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2

says that the log-odds of a student smoking is β0 for parents smoking = neither, β0 + β1 for parents smoking = one, and β0 + β2 for parents smoking = both, and exp(βj) is the conditional odds ratio for level j of the predictor X versus the baseline.

We therefore expect to get \hat{\beta}_1=\text{ln}(1.42)=0.351, since 1.42 = (416 × 1168)/(188 × 1823), and \hat{\beta}_2=\text{ln}(1.80)=0.588, since 1.80 = (400 × 1168)/(188 × 1380). The estimated intercept should be \hat{\beta}_0=\text{ln}(188/1168)=-1.826.

Fitting A Multi-level Predictor Model in R


Below is the R code (smoke.R [18]) that replicates the analysis of the original 2 × 3 table with logistic regression.

First, let’s see the table we created for the analysis.

You should again notice that here we just input the data columns as yes and no.

Next, the model information is displayed.

If you don’t specify the baseline (in R, e.g., by reordering the factor levels with relevel()), R will set the first level as the default; here, it’s “0”. The parentsmoke1 and parentsmoke2 terms correspond to the X1 and X2 dummy variables. The saturated model is:

\text{logit}(\pi)=-1.8266+0.3491 X_1+0.5882 X_2

and the estimated probability of a child smoking given the explanatory variable:

\hat{\pi}_i=\dfrac{\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}{1+\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}

For example, the predicted probability of a student smoking given that only one parent is smoking is:

\hat{\pi}=\dfrac{\text{exp}(-1.8266+0.3491)}{1+\text{exp}(-1.8266+0.3491)}=0.1858

Residuals

In R, deviance residuals are given directly in the output of the summary() function.

> smoke.logistic<-glm(response~parentsmoke, family=binomial(link=logit))

> summary(smoke.logistic)

Call:
glm(formula = response ~ parentsmoke, family = binomial(link = logit))

Deviance Residuals:
[1]  0  0  0

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.82661    0.07858 -23.244  < 2e-16 ***
parentsmoke1  0.34905    0.09554   3.654 0.000259 ***
parentsmoke2  0.58823    0.09695   6.067  1.3e-09 ***

We can also obtain the deviance and Pearson residuals by using the residuals.glm() function. Run help(residuals.glm) to study this function further.

To obtain deviance residuals

> residuals(smoke.logistic)
[1] 0 0 0

To obtain Pearson residuals:

> residuals(smoke.logistic, type="pearson")
1             2             3
-2.440787e-13 -4.355932e-13 -1.477321e-11

To obtain the predicted values, use the function predict.glm(), with which we can specify the type of predicted values we want.

type    is the type of prediction required. The default is on the scale of the linear predictors; the alternative “response” is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = “response” gives the predicted probabilities. The “terms” option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale.

For example, the code below gives predicted probabilities.

> predict(smoke.logistic, type="response")
1         2         3
0.2247191 0.1857972 0.1386431

All of the predicted numbers of successes and failures are greater than 5, so the χ2 approximation is trustworthy.

Overall goodness-of-fit

The goodness-of-fit statistics Χ2 and G2 from this model are both zero, because the model is saturated. Suppose that we fit the intercept-only model as before by removing the predictors from the model statement:


[R output: null and residual deviance for the intercept-only model]

The residual deviance is almost 0 because the model is saturated. From the null deviance we find G2 = 38.366, which is precisely equal to the usual G2 for testing independence in the 2 × 3 table. Since this statistic is large, leading to a small p-value, it provides evidence against the intercept-only model in favor of the current model. You can also find the same result in the output from the anova() part.

Clearly, in this example, the saturated model fits perfectly and the independence model does not fit well.

We can use the Hosmer-Lemeshow test to assess the overall fit of the model. As in the previous example, the Hosmer and Lemeshow statistic is not very meaningful since the number of groups is small.

Here is a link to a program in R that produces this Hosmer and Lemeshow statistic (ROCandHL.R [25])


We have shown that analyzing a 2 × 3 table for associations is equivalent to a binary logistic regression with two dummy variables as predictors. For 2 × J tables, we would fit a binary logistic regression with J − 1 dummy variables.

Testing the Joint Significance of All Predictors.

In our model

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2

Test H0 : β1 = β2 = 0 versus the alternative that at least one of the coefficients β1, . . . , βk is not zero. In other words, this is testing the null hypothesis that an intercept-only model is correct,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0

versus the alternative that the current model is correct

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2


This test has k degrees of freedom (the number of dummy or indicator variables, i.e., the number of β parameters excluding the intercept).

Large chi-square statistics lead to small p-values and provide evidence against the intercept-only model in favor of the current model.

Testing for an Arbitrary Group of Coefficients

For the previous test this would be equivalent to testing “intercept-only” model vs. full (saturated) model.

The likelihood-ratio statistic is:

ΔG2 = −2 log L from reduced model
−(−2 log L from current model)

and the degrees of freedom is k (the number of coefficients in question). The p-value is P(\chi^2_k \geq \Delta G^2).


For our example, ΔG2 = 5176.510 − 5138.144 = 38.3658 with df = 3 − 1 = 2.

Notice that this matches:

Likelihood Ratio      38.3658      2      <.0001

from the “Testing Global Hypothesis: BETA=0” section.

Discuss: How would you test only for X2? Which two models would you compare? How would you interpret this case?

Testing Individual Parameters

Testing a hypothesis that the probability of the characteristic depends on the value of the jth variable.

Testing H0 : βj = 0 versus H1 : βj ≠ 0.

A value of z2 bigger than 3.84 indicates that we can reject the null hypothesis βj = 0 at the .05-level.


\beta_1:\left(\dfrac{0.3491}{0.0955}\right)^2=13.3481

A value of z bigger than 1.96 or small p-value indicates that we can reject the null hypothesis βj = 0 at the .05-level.

Confidence Intervals: An approximate (1 − α) × 100% confidence interval for βj is given by

\hat{\beta}_j \pm z_{(1-\alpha/2)} \times SE(\hat{\beta}_j)

For example, a 95% CI for β1

0.3491 ± 1.96 × 0.0955 = (0.16192, 0.53628)

Then, the 95% CI for the odds-ratio of a student smoking if one parent is smoking in comparison to neither smoking is:

(exp(0.16192), exp(0.53628)) = (1.176, 1.710)

Student and parents’ smoking behaviors are dependent. Furthermore,


  • estimated conditional odds ratio of a student smoking between one parent smoking and neither smoking: exp(β1) = exp(0.3491) = 1.418,
  • estimated conditional odds ratio of a student smoking between both parents smoking and neither smoking: exp(β2) = exp(0.5882) = 1.801, and
  • estimated conditional odds ratio of a student smoking between both parents smoking and one smoking: exp(β2)/exp(β1) = 1.801/1.418 = 1.27 = exp(β2 − β1) = exp(0.5882 − 0.3491) = 1.27, i.e., a student who has both parents smoking is 27% more likely (on the odds scale) to smoke than a student who has only one parent smoking (see the sketch below).
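These conditional odds ratios can be read directly off the fitted coefficients; a sketch, assuming smoke.logistic is the two-dummy fit from above:

b <- coef(smoke.logistic)
exp(b["parentsmoke1"])                      # one vs. neither, about 1.42
exp(b["parentsmoke2"])                      # both vs. neither, about 1.80
exp(b["parentsmoke2"] - b["parentsmoke1"])  # both vs. one, about 1.27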

6.3 – Binary Logistic Regression for Three-way and k-way tables

Binary Logistic Regression for Three-way and k-way tables

This section will focus on the following:

Key Concepts

  • Binary Logistic Regression for 2 × I × J tables
  • Multiple Logistic Regression with categorical explanatory variables

Objectives

  • Understand the basic ideas behind modeling binary response as a function of two or more categorical explanatory variables.

6.3.1 – Connecting Logistic Regression to the Analysis of Two- and Three-way Tables

Recall the 3 × 2 × 2 table that we examined in Lesson 5 [26] that classifies 800 boys according to S = socioeconomic status, B = whether a boy is a scout, and D = juvenile delinquency status.

Socioeconomic status   Boy scout   Delinquent: Yes   Delinquent: No
Low                    Yes               11                43
Low                    No                42               169
Medium                 Yes               14               104
Medium                 No                20               132
High                   Yes                8               196
High                   No                 2                59

Because the outcome variable D is binary, we can express many models of interest using binary logistic regression.

Before handling the full three-way table, let us consider the 2 × 2 marginal table for B and D as we did in Lesson 5 [26]. We concluded that the boy scout status (B) and the delinquent status (D) are dependent and that

Boy scout   Delinquent: Yes   Delinquent: No
Yes               33               343
No                64               360

the estimated log-odds ratio is

\text{log}\left(\dfrac{33\times 360}{64 \times 343}\right)=-0.6140

with a standard error of \sqrt{\dfrac{1}{33}+\dfrac{1}{343}+\dfrac{1}{64}+\dfrac{1}{360}}=0.2272. That is, we estimate that being a boy scout lowers the log-odds of delinquency by 0.614; the odds-ratio is 0.541.
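These hand calculations are easy to check in R directly from the 2 × 2 table:

tab <- matrix(c(33, 343,
                64, 360), nrow = 2, byrow = TRUE,
              dimnames = list(scout = c("yes", "no"), delinquent = c("yes", "no")))
log((tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1]))  # log odds ratio, -0.614
sqrt(sum(1 / tab))                                      # standard error, 0.227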

Now let’s fit a logistic regression model,

\text{log}\left(\dfrac{\pi_i}{1-\pi_i}\right)=\beta_0+\beta_1 X_i

where Xi is a dummy variable

Xi = 0 if non-scout,
Xi = 1 if scout.

For the R code for this model and the other models in this section for the boy scout example, see the R program scout.R [28].

Here is the R analysis, corresponding to scout1.sas, that produces the 2 × 2 table fit.
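The exact code is in scout.R [28]; a minimal sketch that reproduces the fit discussed below (the variable names here are assumptions, not necessarily those used in scout.R):

# Grouped data: delinquent (yes) and not delinquent (no) counts for non-scouts and scouts
yes <- c(64, 33)
no  <- c(360, 343)
scout <- factor(c("nonscout", "scout"))
fit.B <- glm(cbind(yes, no) ~ scout, family = binomial(link = logit))
summary(fit.B)   # slope about -0.614 (SE 0.227), intercept about -1.727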

Part of the R output includes the table of coefficients. The baseline for this model is “nonscout” because it comes before “scout” in alphabetical order. The estimated coefficient of the dummy variable,

\hat{\beta}_1=−0.6140

is identical to the log-odds ratio from the analysis of the 2 × 2 table. The standard error for β^1, 0.2272, is identical to the standard error that came from the 2 × 2 table analysis.

Also, in this model, β0 is the log-odds of delinquency for nonscouts (Xi = 0). Looking at the 2 × 2 table, the estimated log-odds for non-scouts is:

log(64/360) = −1.7272

which agrees with \hat{\beta}_0 from the logistic model.

Turning to the goodness-of-fit statistics: as we discussed for the earlier R output, the residual deviance is almost 0 because the model is saturated. The null deviance is the G2 that corresponds to the deviance goodness-of-fit statistic in the SAS output. Here, the deviance G2 = 7.6126 is identical to the G2 for testing independence in the 2 × 2 table.

Note: Thus we have shown again that analyzing a 2 × 2 table for association is equivalent to logistic regression with a dummy variable. Next let us look at the rest of the data and generalize these analyses to I × 2 tables and I × J × 2 tables.

Now let us do a similar analysis for the 3 × 2 table that classifies subjects by S and D:

Socioeconomic status   Delinquent: Yes   Delinquent: No
Low                          53               212
Medium                       34               236
High                         10               255

Two odds ratios of interest are

(53 × 236) / (34 × 212) = 1.735,
(53 × 255) / (10 × 212) = 6.375.

We estimate that the odds of delinquency for the S = low group are 1.735 times as high as for the S = medium group, and 6.375 times as high as for the S = high group. The estimated log odds ratios are

log 1.735 = .5512 and log 6.375 = 1.852,

and the standard errors are

\sqrt{\dfrac{1}{53}+\dfrac{1}{212}+\dfrac{1}{34}+\dfrac{1}{236}}=0.2392

\sqrt{\dfrac{1}{53}+\dfrac{1}{212}+\dfrac{1}{10}+\dfrac{1}{255}}=0.3571

Now let us replicate this analysis using logistic regression. First, we re-express the data in terms of yi = number of delinquents and ni = number of boys for the three S-groups:

          yi    ni
Low       53   265
Medium    34   270
High      10   265

Then we define a pair of dummy indicators,

X1 = 1 if S=medium,
X1 = 0 otherwise,

X2 = 1 if S=high,
X2 = 0 otherwise.

Let π = probability of delinquency. Then the model

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2

says that the log-odds of delinquency are β0 for S = low, β0 + β1 for S = medium, and β0 + β2 for S = high.

The relevant part of the R program file scout.R [28] corresponds to the scout2.sas program.


Notice that we only use “Smedium” and “Shigh” in the model statement in glm(), so we set the baseline to “low” for this model.
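A sketch of this fit with explicit dummy variables (the exact scout.R code may differ):

yi <- c(53, 34, 10)       # delinquents in the low, medium, high groups
ni <- c(265, 270, 265)    # group sizes
Smedium <- c(0, 1, 0)
Shigh   <- c(0, 0, 1)
fit.S <- glm(cbind(yi, ni - yi) ~ Smedium + Shigh, family = binomial(link = logit))
coef(fit.S)   # intercept about -1.386, Smedium about -0.551, Shigh about -1.852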

R output:

[R output: coefficients and deviances]

The null deviance is the G2 that corresponds to the deviance goodness-of-fit statistic found in the SAS output. Here, the deviance G2 = 36.252, so we can conclude that delinquency is related to socioeconomic status. The test of the global null hypothesis β1 = β2 = 0 is equivalent to the usual test for independence in the 3 × 2 table. The estimated coefficients and SEs are as we predicted, and the estimated odds ratios are

exp(−.5512) = 0.576 = 1/1.735,
exp(−1.852) = 0.157 = 1/6.375.

Notice that we did not say anything here about the scout status. We have “ignored” that information because we collapsed over that variable.  Next we will see how this plays out with logistic regression.

6.3.2 – Collapsing and Goodness of Fit

In this example, we could have also arranged the input data like this:

S        B           yi    ni
low      scout       11     54
low      non-scout   42    211
medium   scout       14    118
medium   non-scout   20    152
high     scout        8    204
high     non-scout    2     61

6.3.3 – Different Logistic Regression Models for Three-way Tables

In this part of the lesson we will consider different binary logistic regression models for three-way tables and their link to log-linear models. Let us return to the 3 × 2 × 2 table:

Socioeconomic status   Boy scout   Delinquent: Yes   Delinquent: No
Low                    Yes               11                43
Low                    No                42               169
Medium                 Yes               14               104
Medium                 No                20               132
High                   Yes                8               196
High                   No                 2                59

As we discussed in Lesson 5 [26], there are many different models that we could fit to this table. But if we think of D as a response and B and S as potential predictors, we can focus on a subset of models that make sense.

Let π be the probability of delinquency. The simplest model worth considering is the null or intercept-only model,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0 \qquad (1)

in which D is unrelated to B or S. If we were to fit this model in PROC LOGISTIC using the disaggregated data (all six lines), we would find that the X2 and G2 statistics are identical to those we obtained in Lesson 5 [26] from testing the null hypothesis “S and B are independent of D.”

That is, testing the overall fit of model (1), i.e., the intercept-only model, is equivalent to testing the fit of the log-linear model (D, SB), because (1) says that D is unrelated to S and B but makes no assumptions about whether S and B are related.

After (1), we may want to fit the logit model that has a main effect for B,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1 \qquad (2)

where

X1 = 1 if B=scout,
X1 = 0 otherwise.

If the data are disaggregated into six lines, the goodness-of-fit tests for model (2) will be equivalent to the test for the (DB, SB) log-linear model, which says that D and S are conditionally independent given B.

This makes sense, because (2) says that S has no effect on D once B has been taken into account.

The model that has main effects for S,

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_2 X_2+\beta_3 X_3 \qquad (3)

where

X2 = 1 if S = medium,
X2 = 0 otherwise,

X3 = 1 if S = high,
X3 = 0 otherwise,

says that B has no effect on D once S has been taken into account. The goodness-of-fit tests for (3) are equivalent to testing the null hypothesis that (DS, BS) fits, i.e. that D and B are conditionally independent given S.

The logit model

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\beta_3 X_3 \qquad (4)

has main effects for both B and S and corresponds to the model of homogeneous association, which we discussed in Lesson 5 [26]. We could not fit that model at the time because the ML estimates have no closed-form solution, but we calculated the CMH and Breslow-Day statistics. With logistic regression software, fitting this model is no more difficult than fitting any other model.

This model says that the effect of B on D, when expressed in terms of odds ratios, is identical across the levels of S. Equivalently, it says that the odds ratios describing the relationship between S and D are identical across the levels of B. If this model does not fit, we have evidence that the effect of B on D varies across the levels of S, or that the effect of S on D varies across the levels of B.

Finally, the saturated model can be written as:

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\beta_3X_3+\beta_4X_1X_2+\beta_5 X_1X_3 \qquad (5)

which has the main effects for B and S and their interactions. This model has X2 = G2 = 0 with zero degrees of freedom. If time permits, you should try to fit these in R.
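
If you do try it, a sketch of fitting all five models to the six-line data is shown below (assuming a data frame like the scout_data sketched earlier in Section 6.3.2; object names are illustrative):

    # Five logit models for the three-way table, from intercept-only to saturated
    fit_null <- glm(cbind(y, n - y) ~ 1,     family = binomial, data = scout_data)
    fit_B    <- glm(cbind(y, n - y) ~ B,     family = binomial, data = scout_data)
    fit_S    <- glm(cbind(y, n - y) ~ S,     family = binomial, data = scout_data)
    fit_SB   <- glm(cbind(y, n - y) ~ S + B, family = binomial, data = scout_data)
    fit_sat  <- glm(cbind(y, n - y) ~ S * B, family = binomial, data = scout_data)

    # Residual deviance (G2) and degrees of freedom for each model
    sapply(list(null = fit_null, B = fit_B, S = fit_S, SB = fit_SB, sat = fit_sat),
           function(m) c(G2 = deviance(m), df = df.residual(m)))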

So how do we choose between these five models? Which would be the best fit, most parsimonious model? That’s next.

6.3.4 – Analysis of Deviance and Model Selection

In this part of the lesson we will focus on model selection. Fitting all the models from the previous section to the delinquency data, we obtain the following fit statistics.

Model                    X2       G2       df
Saturated                0.00     0.00     0
S + B                    0.15     0.15     2
S                        0.16     0.16     3
B                        27.38    28.80    4
Null (intercept only)    32.96    36.41    5

The above table is an example of an “analysis-of-deviance” table. It is analogous to the ANOVA table you have seen in linear regression and similar models, where we examine the change in a fit statistic (e.g., the F-statistic) when a parameter is dropped or added. Here, we check whether the change in deviance is significant.

If a pair of models is nested (i.e. the smaller model is a special case of the larger one) then we can test

H0 : smaller model is true

versus

H1 : larger model is true

by doing likelihood ratio testing, and comparing

ΔG2 = G2 for smaller model − G2 for larger model

or

Δ X2 = X2 for smaller model − X2 for larger model

to a χ2 distribution with degrees of freedom equal to

Δdf = df for smaller model − df for larger model.

This is exactly analogous to testing a reduced model against a full model in linear regression. Recall that the full model has more parameters, and the reduced model is obtained by setting some of them equal to zero; such models are nested, or hierarchical. The method described here holds ONLY for nested models.

For example, any model can be compared to the saturated model. The Null (intercept-only) model can be compared to any model above it. The S model or B model may be compared to S + B. But the S model cannot be directly compared to the B model because they are not nested.
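
In R, these nested comparisons can be carried out with anova(), which reports the drop in deviance and, with test = "Chisq", the corresponding p-value (a sketch reusing the fits from the earlier sketch):

    # Does adding B to the S model improve the fit?
    anova(fit_S, fit_SB, test = "Chisq")

    # Does adding S to the intercept-only model improve the fit?
    anova(fit_null, fit_S, test = "Chisq")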

The goal is to find a simple model that fits the data. For this purpose, it helps to add p-values to the analysis-of-deviance table. Here are the p-values associated with the G2 statistics:

Model                    G2       df    p
Saturated                0.00     0
S + B                    0.15     2     .928
S                        0.16     3     .984
B                        28.80    4     .000
Null (intercept only)    36.41    5     .000

From this table, we may conclude that:

  • The Null model clearly does not fit.
  • Adding B to the Null model drops the deviance by 36.41 − 28.80 = 7.61, which is highly significant because P(\chi^2_1 \geq 7.61)=0.006. So the B model fits significantly better than the Null model. But the B model still is not a good fit since the goodness-of-fit chi-square value is very large.
  • Adding S to the Null model drops the deviance by 36.41 − 0.16 = 36.25, and P(\chi^2_2 \geq 36.25)\approx 0. So the S model fits significantly better than the Null model. And the S model fits the data very well.
  • Adding B to the S model (i.e., comparing S + B to S) drops the deviance by only .01. So the fit of S + B is not significantly better than S. Both of these models fit the data well. (If S fits, then S + B must also fit, because S is a special case of S + B.) Given that both of these models fit, we prefer the S model, because it’s simpler.

Based on this table, we should choose to represent our data by the S model, because it’s the simplest one that fits the data well, thus D is conditionally independent of B given S.
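
The p-values behind these comparisons can be reproduced with pchisq() (a small sketch of the arithmetic quoted above):

    # Deviance drop from adding B to the Null model: 36.41 - 28.80 = 7.61 on 1 df
    pchisq(7.61, df = 1, lower.tail = FALSE)    # about 0.006

    # Deviance drop from adding S to the Null model: 36.25 on 2 df
    pchisq(36.25, df = 2, lower.tail = FALSE)   # essentially 0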

But let us temporarily turn our attention to the saturated model to see how we can interpret the parameters. We can write this model as

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\beta_4 X_1 X_2+\beta_5 X_1 X_3

where π is the probability of delinquency,

X1 = 1 if B = scout,
X1 = 0 otherwise

is the main effect for B, and

X2 = 1 if S = medium,
X2 = 0 otherwise,

X3 = 1 if S = high,
X3 = 0 otherwise

are the main effects for S.

Here is the R code in the R program scout.R [28]:

[R code: scoutR_4_program]

The coefficient estimates from the output are shown below:

[R output: scoutR_4_output]

How do we interpret these effects? Referring back to equation (5),

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\beta_4 X_1 X_2+\beta_5 X_1 X_3

we see that the log-odds of delinquency for each S × B group are:

S         B            log-odds
low       scout        β0 + β1
low       non-scout    β0
medium    scout        β0 + β1 + β2 + β4
medium    non-scout    β0 + β2
high      scout        β0 + β1 + β3 + β5
high      non-scout    β0 + β3

Therefore,

β1 = log-odds for S = low, B = scout,
−log-odds for S = low, B = non-scout.

In other words, β1 gives the effect of scouting on delinquency when S = low. The estimate of β1 in the output agrees with the B × D log odds ratio for S = low,

\text{log}\left(\dfrac{11\times 169}{42 \times 43}\right)=0.0289

The effect of scouting for S = medium, however, is

β1 + β4 = log-odds for S = medium, B = scout,
−log-odds for S = medium, B = non-scout.

and the effect of scouting for S = high is

β1 + β5 = log-odds for S = high, B = scout,
−log-odds for S = high, B = non-scout.

Estimates of these latter two effects do not appear directly in the output. How can we get them?

One way is to simply calculate them from the individual \hat{\beta}_j's, and then get their standard errors from the elements of the estimated covariance matrix. For example, the estimated standard error for \hat{\beta}_1+\hat{\beta}_4 is:

\sqrt{\hat{V}(\hat{\beta}_1)+\hat{V}(\hat{\beta}_4)+2 \hat{Cov}(\hat{\beta}_1,\hat{\beta}_4)}
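
In R, the estimated covariance matrix is available through vcov(), so this standard error can be computed directly (a sketch, assuming the saturated fit fit_sat from the earlier sketch; with that coding the relevant coefficients are named "Bscout" and "Smedium:Bscout", but check names(coef(fit_sat)) for your own coding):

    b <- coef(fit_sat)
    V <- vcov(fit_sat)    # estimated covariance matrix of the coefficients

    # Effect of scouting when S = medium: beta_1 + beta_4 and its standard error
    est <- b["Bscout"] + b["Smedium:Bscout"]
    se  <- sqrt(V["Bscout", "Bscout"] + V["Smedium:Bscout", "Smedium:Bscout"] +
                2 * V["Bscout", "Smedium:Bscout"])
    c(estimate = unname(est), se = unname(se))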

Another way is to recode the model so that the estimates of interest and their standard errors appear directly in the table of coefficients. Suppose that we define the following dummy variables:

X1 = 1 if S = low, 0 otherwise
X2 = 1 if S = low and B = scout, 0 otherwise
X3 = 1 if S = medium, 0 otherwise
X4 = 1 if S = medium and B = scout, 0 otherwise
X5 = 1 if S = high, 0 otherwise
X6 = 1 if S = high and B = scout, 0 otherwise

Then we fit the model

\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\beta_4 X_4 +\beta_5 X_5+\beta_6 X_6

Notice that this new model does not include an intercept; an intercept would cause a collinearity problem, because X1 + X3 + X5 = 1. Under this new coding scheme, the log-odds of delinquency for each S × B group are:

S         B            log-odds
low       scout        β1 + β2
low       non-scout    β1
medium    scout        β3 + β4
medium    non-scout    β3
high      scout        β5 + β6
high      non-scout    β5

Therefore,

β2 = effect of scouting when S = low,
β4 = effect of scouting when S = medium,
β6 = effect of scouting when S = high.

Here is the R code from the R program scout.R [28]:

[R code: scoutR_5_program]

In R, you can exclude the intercept by including “-1” in the formula, as seen in the code above. The estimated table of coefficients is shown below.

[R output: estimated table of coefficients for the no-intercept model]

I will leave it to you to verify that the estimates and standard errors for β2, β4 and β6 correspond to the log-odds ratios and standard errors that we get from analyzing the B × D tables for S = low, S = medium and S = high.
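
A sketch of how this verification might be carried out in R (reusing the scout_data frame from the earlier sketch; the formula S + S:B - 1 reproduces the coding above, with one indicator per S level plus an S-by-scout term):

    # Saturated model in the cell-reference coding; "-1" removes the intercept
    fit_cells <- glm(cbind(y, n - y) ~ S + S:B - 1,
                     family = binomial, data = scout_data)
    summary(fit_cells)$coefficients

    # The S:Bscout coefficients are the scouting effects within each S level.
    # Compare, e.g., the S = low effect with the log odds ratio from its 2 x 2 table:
    log((11 * 169) / (42 * 43))    # about 0.0289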

These types of analyses, models, and parameter interpretations extend to any k-way table, that is, to tables with a binary response and several other categorical explanatory variables, each possibly with many levels.

This is a special example of a multiple logistic regression where we have more than one explanatory variable, but they are all categorical.

Whether the predictor variables are categorical, continuous, or a combination of the two, model selection becomes important once we have multiple predictors. With logistic regression, as in ordinary multiple linear regression, we can use automated procedures such as stepwise selection or backward elimination; these are analogous to their counterparts in ordinary multiple regression, but a different statistic is used.
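
For example, a backward elimination driven by AIC can be run with step(), and AIC() can compare the candidate models directly (a sketch reusing the fits from the earlier sketch):

    # Backward elimination using AIC, starting from the saturated model
    step(fit_sat, direction = "backward")

    # AIC for all candidate models, including ones that are not nested
    AIC(fit_null, fit_B, fit_S, fit_SB, fit_sat)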

However, the preferred procedure for logistic regression is to use the likelihood ratio test (LRT) to decide whether variables can be eliminated, as we described with the Boy Scout example. If there are many categorical predictors, sparseness can be a problem for these automated algorithms. You can also use AIC and BIC for model selection; these will be discussed along with the log-linear models later. For an example with a larger dataset, and more on model selection, see the handouts for the Water Level Study data (water.sas and water.txt):

Go to Case Studies: The Water Level Study [31]; you can also see relevant files on the SAS supplemental pages [32] (e.g., water_level1.sas, water_level2.sas). For corresponding R files, see the R supplemental pages [33] (e.g., water.R).


https://onlinecourses.science.psu.edu/stat504/node/149