## Package News for 2021

### 2021-06-09: Version 3.0 Released on CRAN

A new version of the metafor package (version 3.0) has been published on CRAN. This version includes a lot of updates that have accumulated in the development version of the package over the past 14-15 months. Some highlights:

- The documentation has been further improved. I now make use of the mathjaxr package to nicely render equations in the HTML help pages (and in order to do this, I had to create the mathjaxr package in the first place!).
- The `selmodel()` function was added for fitting a wide variety of selection models, including the beta selection model by Citkowicz and Vevea (2017), various models described by Preston et al. (2004), and step function models (with the three-parameter selection model (3PSM) as a special case).
- As another technique related to publication/small-sample bias, the `tes()` function was added to carry out the test of 'excess significance' (Ioannidis & Trikalinos, 2007; see also Francis, 2013).
- The `regtest()` function now shows the 'limit estimate' of the (average) true effect/outcome. This is in essence what the PET/PEESE methods do (when the standard errors / sampling variances are used as predictors in a meta-regression model).
- One can now also fit so-called 'location-scale models' via the `rma()` function (using the `scale` argument). With this, one can specify predictors for the amount of heterogeneity in the outcomes (to examine if the outcomes are more/less heterogeneous under certain circumstances).
- The `regplot()` function can be used to draw bubble plots based on meta-regression models. For models involving multiple predictors, the function draws the line for the 'marginal relationship' of a predictor. Confidence/prediction interval bands can also be shown.
- Sometimes, it might be necessary to aggregate a meta-analytic dataset with multiple outcomes from the same study to the study level. An `aggregate()` method for `escalc` objects was added that can do this, while (approximately) accounting for various types of dependencies.
- When using functions that allow for parallel processing, progress bars can now also be shown, thanks to the pbapply package. Gives you an idea whether to just grab a coffee or go out for lunch while your computer is chugging along.
- 24 new datasets were added (there are now over 60 datasets included in the package). These datasets also cover advanced methodology, such as multivariate/multilevel models, network meta-analysis, phylogenetic meta-analysis, and models with a spatial correlation structure.
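A few of these additions can be sketched in a short R example. This is not from the release notes, just an illustration: it uses the `dat.bcg` dataset that ships with the package, and the particular moderator (`ablat`) and selection model settings are arbitrary choices.

```r
library(metafor)

# compute log risk ratios and fit a random-effects model
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

# step function selection model (3PSM as a special case)
selmodel(res, type="stepfun", steps=0.025)

# test of 'excess significance'
tes(res)

# location-scale model: absolute latitude as a predictor for the
# amount of heterogeneity (illustrative choice of scale variable)
rma(yi, vi, scale = ~ ablat, data=dat)

# bubble plot for a meta-regression model, with prediction interval bands
res.mod <- rma(yi, vi, mods = ~ ablat, data=dat)
regplot(res.mod, pi=TRUE)
```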

Lots of smaller tweaks/improvements were also made. I feel like so much has accumulated that this warranted a version jump to version 3.0.

### 2021-04-21: Better Degrees of Freedom Calculation

In random/mixed-effects models as can be fitted with the `rma()` function, tests and confidence intervals for the model coefficients are by default constructed based on a standard normal distribution. In general, it is better to use the Knapp-Hartung method for this purpose, which does two things: (1) the standard errors of the model coefficients are estimated in a slightly different way and (2) a t-distribution is used with $k-p$ degrees of freedom (where $k$ is the total number of estimates and $p$ the number of coefficients in the model). When conducting a simultaneous (or 'omnibus') test of multiple coefficients, an F-distribution with $m$ and $k-p$ degrees of freedom is used (for the 'numerator' and 'denominator' degrees of freedom, respectively), with $m$ denoting the number of coefficients tested. To use this method, set argument `test="knha"`.
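As a quick sketch (the dataset and `escalc()` call here are just for illustration, using the `dat.bcg` data included in the package):

```r
library(metafor)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# default: Wald-type tests based on the standard normal distribution
res.z <- rma(yi, vi, data=dat)

# Knapp-Hartung method: adjusted standard errors and a t-distribution
# with k-p degrees of freedom
res.t <- rma(yi, vi, data=dat, test="knha")
```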

The Knapp-Hartung method cannot be directly generalized to more complex models as can be fitted with the `rma.mv()` function, although we can still use t- and F-distributions for conducting tests of one or multiple model coefficients in the context of such models. This is possible by setting `test="t"`. However, this then raises the question of how the (denominator) degrees of freedom for such tests should be calculated. By default, the degrees of freedom are calculated as described above. However, this method does not reflect the complexities of models that are typically fitted with the `rma.mv()` function. For example, in multilevel models (with multiple estimates nested within studies), a predictor (or 'moderator') may be measured at the study level (i.e., it is constant for all estimates belonging to the same study) or at the level of the individual estimates (i.e., it might vary within studies). By setting argument `dfs="contain"`, a method is used for calculating the degrees of freedom that tends to provide tests with better control of the Type I error rate and confidence intervals with closer to nominal coverage rates. See the documentation of the function for further details.
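A minimal sketch of this in a multilevel setting, using the `dat.konstantopoulos2011` dataset included in the package (the nesting structure shown is just one plausible choice for these data):

```r
library(metafor)

dat <- dat.konstantopoulos2011

# multilevel model with estimates (schools) nested within districts;
# test="t" requests t-/F-tests and dfs="contain" the improved
# degrees of freedom calculation
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data=dat,
              test="t", dfs="contain")
```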