
Meta-Regression Models With or Without an Intercept

A question that comes up frequently is the proper interpretation of meta-regression models with or without an intercept term. Below I illustrate the difference using the dataset for the BCG vaccine meta-analysis (Colditz et al., 1994).

Model With Intercept

The dataset is called dat.bcg. The variables tpos and tneg in the dataset indicate the number of tuberculosis cases and non-cases in the treated (vaccinated) groups, respectively. Similarly, variables cpos and cneg indicate the number of cases and non-cases in the control (non-vaccinated) groups. Based on these four variables, the (log) risk ratios and corresponding sampling variances for the 13 studies can be obtained with:

library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
dat
   trial               author year tpos  tneg cpos  cneg ablat      alloc      yi     vi
1      1              Aronson 1948    4   119   11   128    44     random -0.8893 0.3256
2      2     Ferguson & Simes 1949    6   300   29   274    55     random -1.5854 0.1946
3      3      Rosenthal et al 1960    3   228   11   209    42     random -1.3481 0.4154
4      4    Hart & Sutherland 1977   62 13536  248 12619    52     random -1.4416 0.0200
5      5 Frimodt-Moller et al 1973   33  5036   47  5761    13  alternate -0.2175 0.0512
6      6      Stein & Aronson 1953  180  1361  372  1079    44  alternate -0.7861 0.0069
7      7     Vandiviere et al 1973    8  2537   10   619    19     random -1.6209 0.2230
8      8           TPT Madras 1980  505 87886  499 87892    13     random  0.0120 0.0040
9      9     Coetzee & Berjak 1968   29  7470   45  7232    27     random -0.4694 0.0564
10    10      Rosenthal et al 1961   17  1699   65  1600    42 systematic -1.3713 0.0730
11    11       Comstock et al 1974  186 50448  141 27197    18 systematic -0.3394 0.0124
12    12   Comstock & Webster 1969    5  2493    3  2338    33 systematic  0.4459 0.5325
13    13       Comstock et al 1976   27 16886   29 17825    33 systematic -0.0173 0.0714

Values of the log risk ratio (variable yi) below 0 indicate that the infection risk was lower in the vaccinated group compared to the control group.
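
For example, back-transforming the log risk ratio of the first trial (Aronson, 1948) to the risk ratio scale gives roughly:

exp(-0.8893)
# approximately 0.41, i.e., the estimated infection risk in the vaccinated group
# was about 41% of the risk in the non-vaccinated group in this trial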

As we can see above, 7 studies used random assignment to determine which study participants would receive the BCG vaccine, 2 studies used alternating assignment (e.g., first person is vaccinated, second person isn't, third person is vaccinated, next person isn't, and so on), and finally 4 studies used systematic assignment (e.g., where assignment to the vaccination group occurred on the basis of the birth year, as in Comstock et al., 1974).
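
These counts can be verified with a quick tabulation of the alloc variable (a minimal check using base R's table() function):

table(dat$alloc)
# should yield: alternate = 2, random = 7, systematic = 4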

We will now fit a (mixed-effects) meta-regression model to these data, including the method of treatment allocation (alloc) as a predictor/moderator. Note that alloc is a character variable, which will be treated as a factor in the model. The standard way of handling a factor in regression models is to create dummy variables representing the various levels. One of these "dummies" is then left out of the model, with the corresponding level becoming the "reference" level. Fortunately, we do not have to do this coding ourselves, as R will handle this for us.
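
If we want to see the dummy coding that R will use, we can inspect the design matrix before fitting the model (a quick check using base R's model.matrix() function; output not shown):

model.matrix(~ factor(alloc), data=dat)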

res <- rma(yi, vi, mods = ~ factor(alloc), data=dat)
res
Mixed-Effects Model (k = 13; tau^2 estimator: REML)
 
tau^2 (estimated amount of residual heterogeneity):     0.3615 (SE = 0.2111)
tau (square root of estimated tau^2 value):             0.6013
I^2 (residual heterogeneity / unaccounted variability): 88.77%
H^2 (unaccounted variability / sampling variability):   8.91
R^2 (amount of heterogeneity accounted for):            0.00%
 
Test for Residual Heterogeneity:
QE(df = 10) = 132.3676, p-val < .0001
 
Test of Moderators (coefficients 2:3):
QM(df = 2) = 1.7675, p-val = 0.4132
 
Model Results:
 
                         estimate      se     zval    pval    ci.lb   ci.ub
intrcpt                   -0.5180  0.4412  -1.1740  0.2404  -1.3827  0.3468
factor(alloc)random       -0.4478  0.5158  -0.8682  0.3853  -1.4588  0.5632
factor(alloc)systematic    0.0890  0.5600   0.1590  0.8737  -1.0086  1.1867
 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

By default, alternate was made the reference level (since the first letter "a" comes before "r" and "s" in the alphabet), so the corresponding dummy variable for this level has been left out.1) Therefore, the model intercept (coefficient $b_0$) is the estimated (average) log risk ratio for alternating allocation. The coefficients for factor(alloc)random (coefficient $b_1$) and factor(alloc)systematic (coefficient $b_2$) indicate how much lower/higher the estimated (average) log risk ratio is for random and systematic allocation, respectively, compared to alternating allocation. So these coefficients reflect contrasts between these two levels and the reference level. Neither coefficient is significantly different from 0 (as indicated by the corresponding p-values), but this isn't a proper test of the factor as a whole.
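
As an aside, the estimated (average) log risk ratio for each level can also be obtained directly from this parameterization with the predict() function (a sketch; the expected values follow from adding the coefficients shown above):

predict(res, newmods=rbind(c(0,0), c(1,0), c(0,1)))
# should give approximately -0.5180 (alternate), -0.9658 (random), -0.4289 (systematic)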

The results given under "Test of Moderators" provide a joint (omnibus) test of all model coefficients except for the intercept (coefficients 2:3). Careful: The numbering of the coefficients here starts at 1, corresponding to the intercept (intrcpt), with coefficients 2 and 3 corresponding to factor(alloc)random and factor(alloc)systematic. The results indicate that we cannot reject the null hypothesis $\mbox{H}_0{:}\; \beta_1 = \beta_2 = 0$, that is, the hypothesis that neither random nor systematic allocation differs from alternating allocation. In other words, the three-level factor as a whole is not significant.

We can also test one or multiple model coefficients ourselves (and contrasts between them, as we will see further below) using the anova() function. The same omnibus test as above can be obtained with:

anova(res, btt=2:3)
Test of Moderators (coefficients 2:3):
QM(df = 2) = 1.7675, p-val = 0.4132

The btt argument stands for "betas to test" and is used to specify which coefficients we want to include in the test.
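
Depending on the version of metafor installed, btt may also accept a string that is matched against the coefficient names, so we do not have to count coefficient positions ourselves; a hedged sketch:

anova(res, btt="alloc")
# should reproduce the same omnibus test: QM(df = 2) = 1.7675, p-val = 0.4132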

A different way of conducting the same test is to use the X argument, which allows us to specify one or more vectors of numbers that are multiplied with the model coefficients. In particular, we can use:

anova(res, X=rbind(c(0,1,0),c(0,0,1)))

to test the two hypotheses \begin{align} &H_0: (0 \times \beta_0) + (1 \times \beta_1) + (0 \times \beta_2) = 0 \; \Leftrightarrow \; \beta_1 = 0 \\ &H_0: (0 \times \beta_0) + (0 \times \beta_1) + (1 \times \beta_2) = 0 \; \Leftrightarrow \; \beta_2 = 0 \end{align} and the output then again includes the omnibus test of these two hypotheses:

Hypotheses:
1:     factor(alloc)random = 0
2: factor(alloc)systematic = 0
 
Results:
   estimate     se    zval   pval
1:  -0.4478 0.5158 -0.8682 0.3853
2:   0.0890 0.5600  0.1590 0.8737
 
Omnibus Test of Hypotheses:
QM(df = 2) = 1.7675, p-val = 0.4132

As discussed above, the coefficients for random ($\beta_1$) and systematic ($\beta_2$) reflect contrasts between these two levels and alternating allocation, that is, if we denote $\mu_a$, $\mu_r$, and $\mu_s$ as the (average) log risk ratio for alternating, random, and systematic allocation, respectively, then the model coefficients represent \begin{align} &\beta_0 = \mu_a \\ &\beta_1 = \mu_r - \mu_a \\ &\beta_2 = \mu_s - \mu_a. \end{align} But what about the contrast between systematic and random allocation? It turns out that we can obtain this from the model as the difference between the $\beta_2$ and $\beta_1$ coefficients. In particular, if we subtract $\beta_1$ from $\beta_2$, then $$ \beta_2 - \beta_1 = (\mu_s - \mu_a) - (\mu_r - \mu_a) = \mu_s - \mu_r $$ so this contrast reflects how different systematic allocation is compared to random allocation. Using the anova() function, we can obtain this contrast with

anova(res, X=c(0,-1,1))
Hypothesis:
1: -factor(alloc)random + factor(alloc)systematic = 0
 
Results:
   estimate     se   zval   pval
1:   0.5369 0.4364 1.2303 0.2186
 
Test of Hypothesis:
QM(df = 1) = 1.5137, p-val = 0.2186
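
As a sanity check, the contrast estimate and its standard error can also be computed by hand from the model coefficients and their variance-covariance matrix (a minimal sketch using coef() and vcov()):

b <- coef(res)                   # b[2] = -0.4478, b[3] = 0.0890
L <- c(0, -1, 1)                 # contrast vector: beta_2 - beta_1
sum(L * b)                       # estimate: approximately 0.5369
sqrt(t(L) %*% vcov(res) %*% L)   # standard error: approximately 0.4364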

The results of the omnibus test of a factor will be identical regardless of which level is chosen as the reference level. To illustrate this, let's make random allocation the reference level, which we can do using the relevel() function:

res <- rma(yi, vi, mods = ~ relevel(factor(alloc), ref="random"), data=dat)
res
Mixed-Effects Model (k = 13; tau^2 estimator: REML)
 
tau^2 (estimated amount of residual heterogeneity):     0.3615 (SE = 0.2111)
tau (square root of estimated tau^2 value):             0.6013
I^2 (residual heterogeneity / unaccounted variability): 88.77%
H^2 (unaccounted variability / sampling variability):   8.91
R^2 (amount of heterogeneity accounted for):            0.00%
 
Test for Residual Heterogeneity:
QE(df = 10) = 132.3676, p-val < .0001
 
Test of Moderators (coefficients 2:3):
QM(df = 2) = 1.7675, p-val = 0.4132
 
Model Results:
 
            estimate      se     zval    pval    ci.lb    ci.ub
intrcpt      -0.9658  0.2672  -3.6138  0.0003  -1.4896  -0.4420  ***
alternate     0.4478  0.5158   0.8682  0.3853  -0.5632   1.4588
systematic    0.5369  0.4364   1.2303  0.2186  -0.3184   1.3921
 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(I shortened the names of the coefficients in the output above to make the table under "Model Results" more readable.) Now the intercept reflects the estimated (average) log risk ratio for random allocation, while the coefficients for alternate and systematic are again contrasts of these two levels compared to random allocation. Note how the coefficient for systematic allocation is the same as what we obtained earlier using anova(res, X=c(0,-1,1)). Moreover, as we can see in the output, the result of the omnibus test of these two coefficients is identical to what we obtained earlier.

Model Without Intercept

Now we will fit the model, but remove the intercept, which can be done with:

res <- rma(yi, vi, mods = ~ 0 + factor(alloc), data=dat)
res

Alternatively, one could use mods = ~ factor(alloc) - 1 to remove the intercept. In either case, the output is then:

Mixed-Effects Model (k = 13; tau^2 estimator: REML)
 
tau^2 (estimated amount of residual heterogeneity):     0.3615 (SE = 0.2111)
tau (square root of estimated tau^2 value):             0.6013
I^2 (residual heterogeneity / unaccounted variability): 88.77%
H^2 (unaccounted variability / sampling variability):   8.91
 
Test for Residual Heterogeneity:
QE(df = 10) = 132.3676, p-val < .0001
 
Test of Moderators (coefficients 1:3):
QM(df = 3) = 15.9842, p-val = 0.0011
 
Model Results:
 
                         estimate      se     zval    pval    ci.lb    ci.ub
factor(alloc)alternate    -0.5180  0.4412  -1.1740  0.2404  -1.3827   0.3468
factor(alloc)random       -0.9658  0.2672  -3.6138  0.0003  -1.4896  -0.4420  ***
factor(alloc)systematic   -0.4289  0.3449  -1.2434  0.2137  -1.1050   0.2472
 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

When we remove the intercept from the model, all dummy variables are included in the model as predictors. Hence, in this example, each coefficient reflects the estimated (average) log risk ratio for a particular level (i.e., in the notation used earlier, $\hat{\mu}_a$, $\hat{\mu}_r$, and $\hat{\mu}_s$, respectively).2)
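
With this parameterization, it is also easy to back-transform the level estimates to the risk ratio scale, for example via predict() with an identity matrix for the moderator values (a sketch; the approximate values follow from exponentiating the coefficients above):

predict(res, newmods=diag(3), transf=exp)
# should give risk ratios of roughly 0.60 (alternate), 0.38 (random), 0.65 (systematic)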

The results given under "Test of Moderators" are now for an omnibus test of all three model coefficients. This is a joint test of the three null hypotheses \begin{align} &H_0: \mu_a = 0 \\ &H_0: \mu_r = 0 \\ &H_0: \mu_s = 0 \end{align} which we can also write more compactly as $$ H_0: \mu_a = \mu_r = \mu_s = 0. $$ This test is clearly significant ($p = .0011$), which indicates that we can reject the null hypothesis that the (average) log risk ratio is zero for all three methods of allocation.

Again, we could use the anova() function to explicitly carry out the same test with:

anova(res, X=rbind(c(1,0,0),c(0,1,0),c(0,0,1)))
Hypotheses:
1:  factor(alloc)alternate = 0
2:     factor(alloc)random = 0
3: factor(alloc)systematic = 0
 
Results:
   estimate     se    zval   pval
1:  -0.5180 0.4412 -1.1740 0.2404
2:  -0.9658 0.2672 -3.6138 0.0003 ***
3:  -0.4289 0.3449 -1.2434 0.2137
 
Omnibus Test of Hypotheses:
QM(df = 3) = 15.9842, p-val = 0.0011

It is important to realize that this does not test whether there are differences between the different forms of allocation (this is what we tested earlier in the model that included the intercept term). However, we can again use contrasts of the model coefficients to test differences between the levels. Let's test all pairwise differences (i.e., between random and alternating allocation, between systematic and alternating allocation, and between systematic and random allocation):

anova(res, X=rbind(c(-1,1,0),c(-1,0,1), c(0,-1,1)))
Hypotheses:
1:     -factor(alloc)alternate + factor(alloc)random = 0
2: -factor(alloc)alternate + factor(alloc)systematic = 0
3:    -factor(alloc)random + factor(alloc)systematic = 0
 
Results:
   estimate     se    zval   pval
1:  -0.4478 0.5158 -0.8682 0.3853
2:   0.0890 0.5600  0.1590 0.8737
3:   0.5369 0.4364  1.2303 0.2186

These are now the exact same results we obtained earlier for the model that included the intercept term.

Note that the output does not contain an omnibus test for the three contrasts because the matrix with the contrast coefficients (X above) is not of full rank (i.e., one of the three contrasts is redundant). If we only include two of the three contrasts (again, it does not matter which two), then we also get the omnibus test (rest of the output omitted):

anova(res, X=rbind(c(-1,1,0),c(-1,0,1)))
Omnibus Test of Hypotheses:
QM(df = 2) = 1.7675, p-val = 0.4132
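
The rank deficiency mentioned above can be verified directly (a quick check using base R's qr() function):

X <- rbind(c(-1,1,0), c(-1,0,1), c(0,-1,1))
qr(X)$rank   # 2, not 3: the third contrast is the difference of the first two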

Parameterization

What the example above shows is that, whether we remove the intercept or not, we are essentially fitting the same model, just using a different parameterization. The term parameterization is often encountered in the context of geometry, and this is directly relevant here, since fundamentally the process of fitting (meta-)regression models can also be conceptualized geometrically (involving projections and vector spaces). An excellent (but quite technical) reference for this perspective (on regression models in general) is Christensen (2011).

Models with Continuous Moderators

A rather different case arises if the model we are fitting only involves one or multiple continuous moderators. For example, consider the following model:

res <- rma(yi, vi, mods = ~ ablat, data=dat)
Mixed-Effects Model (k = 13; tau^2 estimator: REML)
 
tau^2 (estimated amount of residual heterogeneity):     0.0764 (SE = 0.0591)
tau (square root of estimated tau^2 value):             0.2763
I^2 (residual heterogeneity / unaccounted variability): 68.39%
H^2 (unaccounted variability / sampling variability):   3.16
R^2 (amount of heterogeneity accounted for):            75.62%
 
Test for Residual Heterogeneity:
QE(df = 11) = 30.7331, p-val = 0.0012
 
Test of Moderators (coefficient 2):
QM(df = 1) = 16.3571, p-val < .0001
 
Model Results:
 
         estimate      se     zval    pval    ci.lb    ci.ub
intrcpt    0.2515  0.2491   1.0095  0.3127  -0.2368   0.7397
ablat     -0.0291  0.0072  -4.0444  <.0001  -0.0432  -0.0150  ***
 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Here, the absolute latitude (ablat) of the study site was included as a continuous moderator (ranging from 13 to 55 degrees). The idea was to examine whether the higher prevalence of nonpathogenic environmental mycobacteria at sites closer to the equator may provide a natural immunity against tuberculosis, leading to lower treatment effects (i.e., log risk ratios close to zero). The results above support this hypothesis: The intercept, which reflects the estimated (average) log risk ratio at the equator (with a latitude of 0), is not significantly different from 0, while the coefficient/slope for ablat suggests that the estimated (average) log risk ratio becomes increasingly more negative for sites further away from the equator (recall that negative log risk ratios here reflect that the vaccine is reducing the risk of an infection and hence is working).3)
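
To illustrate, we can compute the predicted (average) risk ratio at a few latitudes with predict() (a sketch; the approximate values follow from the intercept and slope shown above):

predict(res, newmods=c(13, 33, 55), transf=exp)
# risk ratios of roughly 0.88 at 13 degrees, 0.49 at 33 degrees, and 0.26 at 55 degrees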

Now what happens if we remove the intercept?

res <- rma(yi, vi, mods = ~ 0 + ablat, data=dat)
Mixed-Effects Model (k = 13; tau^2 estimator: REML)
 
tau^2 (estimated amount of residual heterogeneity):     0.0865 (SE = 0.0592)
tau (square root of estimated tau^2 value):             0.2940
I^2 (residual heterogeneity / unaccounted variability): 78.35%
H^2 (unaccounted variability / sampling variability):   4.62
 
Test for Residual Heterogeneity:
QE(df = 12) = 48.7021, p-val < .0001
 
Model Results:
 
       estimate      se     zval    pval    ci.lb    ci.ub
ablat   -0.0226  0.0032  -7.0545  <.0001  -0.0289  -0.0163  ***
 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

When the model only includes continuous (i.e., numeric) predictors/moderators, removing the intercept does just that: it removes the intercept. Hence, the model above forces the intercept to be 0, that is, it assumes that at the equator, the (average) log risk ratio is exactly zero. This seems like a rather strong assumption to make, so I would not recommend doing so. In fact, only in rare cases is it ever appropriate to remove the intercept term from a regression model (that involves only continuous predictors/moderators).
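
A more sensible alternative in such cases is to center the moderator at a meaningful value, so that the intercept refers to that value rather than to the equator (a sketch; the centering value of 33 degrees is simply an illustrative choice, equal to the median absolute latitude in these data):

res.cen <- rma(yi, vi, mods = ~ I(ablat - 33), data=dat)
res.cen
# the slope is unchanged; the intercept now reflects the estimated (average)
# log risk ratio at an absolute latitude of 33 degrees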

References

Christensen, R. (2011). Plane answers to complex questions: The theory of linear models. New York: Springer.

Colditz, G. A., Brewer, T. F., Berkey, C. S., Wilson, M. E., Burdick, E., Fineberg, H. V., et al. (1994). Efficacy of BCG vaccine in the prevention of tuberculosis: Meta-analysis of the published literature. Journal of the American Medical Association, 271(9), 698–702.

1)
Since alloc is a character variable, the coercion to a factor via the factor() function isn't strictly necessary (since this would have happened automatically anyway), but it's better to be explicit. In some cases, a numeric variable may in fact be used to represent levels of a categorical variable, in which case use of the factor() function is crucial.
2)
Note how the coefficients for alternate and random are exactly identical to the intercepts in the two models fitted earlier where each of these levels was chosen as the reference level.
3)
We should interpret the intercept only very cautiously, since it is an extrapolation beyond the range of values observed for absolute latitude (i.e., no study was conducted exactly at the equator).