How much do Covariates Matter?

Motivation

In regression analyses, we often ask how much covariates matter for explaining the relationship between a target variable \(D\) and an outcome variable \(Y\).

For example, we might start analysing the gender wage gap with a simple regression of log(wage) on gender. But arguably, men and women differ in many socio-economic characteristics: they might have different (average) levels of education or career experience, and they might select into different, higher- or lower-paying, industries. So what fraction of the gender wage gap can be explained by these observable characteristics?

In this notebook, we will compute and decompose the gender wage gap based on a subset of the PSID data set, using a method commonly known as the “Gelbach Decomposition” (Gelbach, JoLE 2016).

We start by loading a subset of the PSID data provided by the AER R package.

import re

import pandas as pd

import pyfixest as pf

psid = pd.read_csv(
    "https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/refs/heads/master/csv/AER/PSID7682.csv"
)
psid["experience"] = pd.Categorical(psid["experience"])
psid["year"] = pd.Categorical(psid["year"])
psid.head()
|   | rownames | experience | weeks | occupation | industry | south | smsa | married | gender | union | education | ethnicity | wage | year | id |
|---|----------|------------|-------|------------|----------|-------|------|---------|--------|-------|-----------|-----------|------|------|----|
| 0 | 1 | 3 | 32 | white | no  | yes | no | yes | male | no | 9 | other | 260 | 1976 | 1 |
| 1 | 2 | 4 | 43 | white | no  | yes | no | yes | male | no | 9 | other | 305 | 1977 | 1 |
| 2 | 3 | 5 | 40 | white | no  | yes | no | yes | male | no | 9 | other | 402 | 1978 | 1 |
| 3 | 4 | 6 | 39 | white | no  | yes | no | yes | male | no | 9 | other | 402 | 1979 | 1 |
| 4 | 5 | 7 | 42 | white | yes | yes | no | yes | male | no | 9 | other | 429 | 1980 | 1 |

As a first pass, we regress log wages on gender and find that men earn on average 0.474 log points more than women.

fit_base = pf.feols("log(wage) ~ gender", data=psid, vcov="hetero")
fit_base.summary()
###

Estimation:  OLS
Dep. var.: log(wage), Fixed effects: 0
Inference:  hetero
Observations:  4165

| Coefficient    |   Estimate |   Std. Error |   t value |   Pr(>|t|) |   2.5% |   97.5% |
|:---------------|-----------:|-------------:|----------:|-----------:|-------:|--------:|
| Intercept      |      6.255 |        0.020 |   320.714 |      0.000 |  6.217 |   6.294 |
| gender[T.male] |      0.474 |        0.021 |    22.818 |      0.000 |  0.434 |   0.515 |
---
RMSE: 0.436 R2: 0.106 

To examine the impact of observables on the relationship between wages and gender, a common strategy in applied research is to incrementally add covariates to the baseline regression of log(wage) on gender. Here, we will add the following covariates, one at a time:

  • education
  • experience
  • occupation
  • industry
  • year
  • ethnicity

We can do so using pyfixest’s multiple estimation syntax; the csw0() operator adds the listed covariates cumulatively, starting from a baseline model that includes none of them:

fit_stepwise1 = pf.feols(
    "log(wage) ~ gender + csw0(education, experience, occupation, industry, year, ethnicity)",
    data=psid,
)
pf.etable(fit_stepwise1)
log(wage)

|                     | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|:--------------------|:----|:----|:----|:----|:----|:----|:----|
| gender[T.male]      | 0.474*** (0.021) | 0.474*** (0.019) | 0.425*** (0.018) | 0.444*** (0.018) | 0.428*** (0.018) | 0.432*** (0.016) | 0.410*** (0.017) |
| education           |     | 0.065*** (0.002) | 0.075*** (0.002) | 0.061*** (0.003) | 0.063*** (0.003) | 0.060*** (0.002) | 0.059*** (0.002) |
| experience dummies (T.2–T.51) | | | included | included | included | included | included |
| occupation[T.white] |     |     |     | 0.125*** (0.015) | 0.132*** (0.015) | 0.127*** (0.013) | 0.123*** (0.013) |
| industry[T.yes]     |     |     |     |     | 0.064*** (0.012) | 0.070*** (0.011) | 0.066*** (0.011) |
| year[T.1977]        |     |     |     |     |     | 0.073*** (0.019) | 0.072*** (0.019) |
| year[T.1978]        |     |     |     |     |     | 0.193*** (0.019) | 0.192*** (0.019) |
| year[T.1979]        |     |     |     |     |     | 0.284*** (0.019) | 0.282*** (0.019) |
| year[T.1980]        |     |     |     |     |     | 0.363*** (0.019) | 0.361*** (0.019) |
| year[T.1981]        |     |     |     |     |     | 0.434*** (0.019) | 0.432*** (0.019) |
| year[T.1982]        |     |     |     |     |     | 0.518*** (0.019) | 0.516*** (0.019) |
| ethnicity[T.other]  |     |     |     |     |     |     | 0.133*** (0.020) |
| Intercept           | 6.255*** (0.020) | 5.419*** (0.034) | 4.566*** (0.134) | 4.664*** (0.133) | 4.636*** (0.133) | 4.672*** (0.118) | 4.570*** (0.118) |
| Observations        | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 |
| S.E. type           | iid | iid | iid | iid | iid | iid | iid |
| R2                  | 0.106 | 0.260 | 0.376 | 0.387 | 0.391 | 0.520 | 0.525 |

(The individual coefficients on the 50 experience dummies are not reproduced here.)
Significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001. Format of coefficient cell: Coefficient (Std. Error)

Because the table is so long, it is hard to see anything; we therefore restrict the display to a few variables:

pf.etable(fit_stepwise1, keep=["gender", "ethnicity", "education"])
log(wage)

|                    | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|:-------------------|:----|:----|:----|:----|:----|:----|:----|
| gender[T.male]     | 0.474*** (0.021) | 0.474*** (0.019) | 0.425*** (0.018) | 0.444*** (0.018) | 0.428*** (0.018) | 0.432*** (0.016) | 0.410*** (0.017) |
| ethnicity[T.other] |     |     |     |     |     |     | 0.133*** (0.020) |
| education          |     | 0.065*** (0.002) | 0.075*** (0.002) | 0.061*** (0.003) | 0.063*** (0.003) | 0.060*** (0.002) | 0.059*** (0.002) |
| Observations       | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 |
| S.E. type          | iid | iid | iid | iid | iid | iid | iid |
| R2                 | 0.106 | 0.260 | 0.376 | 0.387 | 0.391 | 0.520 | 0.525 |

Significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001. Format of coefficient cell: Coefficient (Std. Error)

We see that the coefficient on gender is roughly the same in all models. Tentatively, we might already conclude that the observable characteristics in the data do not explain a large part of the gender wage gap.

But how much do differences in education matter? We have computed 6 additional models that contain education as a covariate. The obtained point estimates vary between \(0.059\) and \(0.075\). Which of these numbers should we report?

Additionally, note that while we have only computed 6 additional models with covariates, the number of possible models is much larger. If I did the math correctly, simply by additively and incrementally adding covariates, we could have computed \(57\) different models (not all of which would have included education as a control).

As it turns out, different models lead to different point estimates, and the order in which covariates are added incrementally can affect our conclusions. To illustrate this, we keep the same ordering as before, but move ethnicity to the front:

fit_stepwise2 = pf.feols(
    "log(wage) ~ gender + csw0(ethnicity, education, experience, occupation, industry, year)",
    data=psid,
)
pf.etable(fit_stepwise2, keep=["gender", "ethnicity", "education"])
log(wage)

|                    | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|:-------------------|:----|:----|:----|:----|:----|:----|:----|
| gender[T.male]     | 0.474*** (0.021) | 0.436*** (0.022) | 0.450*** (0.020) | 0.399*** (0.019) | 0.418*** (0.019) | 0.404*** (0.019) | 0.410*** (0.017) |
| ethnicity[T.other] |     | 0.227*** (0.026) | 0.141*** (0.024) | 0.158*** (0.023) | 0.151*** (0.022) | 0.146*** (0.022) | 0.133*** (0.020) |
| education          |     |     | 0.064*** (0.002) | 0.074*** (0.002) | 0.060*** (0.003) | 0.062*** (0.003) | 0.059*** (0.002) |
| Observations       | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 | 4165 |
| S.E. type          | iid | iid | iid | iid | iid | iid | iid |
| R2                 | 0.106 | 0.121 | 0.266 | 0.383 | 0.394 | 0.397 | 0.525 |

Significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001. Format of coefficient cell: Coefficient (Std. Error)

We obtain 5 new coefficients on education, which vary between 0.059 and 0.074.

So, what share of the “raw” gender wage gap can be attributed to differences in education between men and women? Should we report a statistic based on the 0.075 estimate? Or on the 0.059 estimate? Which value should we pick?

To help us with this problem, Gelbach (2016, JoLE) develops a decomposition procedure building on the omitted variable bias formula that produces a single value for the contribution of a given covariate, say education, to the gender wage gap.

Notation and Gelbach’s Algorithm

Before we dive into a code example, let us first introduce the notation and Gelbach’s algorithm. We are interested in “decomposing” the effect of a variable \(X_{1} \in \mathbb{R}\) on an outcome \(Y \in \mathbb{R}\) into a part explained by covariates \(X_{2} \in \mathbb{R}^{k_{2}}\) and an unexplained part.

Thus we can specify two regression models:

  • the short model \[ Y = X_{1} \beta_{1}^{\text{short}} + u_{1} \]

  • the long (or full) model

    \[ Y = X_{1} \beta_{1}^{\text{full}} + X_{2} \beta_{2} + e \]

By fitting the short regression, we obtain an estimate \(\hat{\beta}_{1}^{\text{short}}\), which we will call the direct effect. By estimating the long regression, we obtain the coefficients \(\hat{\beta}_{2} \in \mathbb{R}^{k_2}\) on the covariates as well as \(\hat{\beta}_{1}^{\text{full}}\), the estimate on \(X_1\) in the long regression, which we will call the full effect.

We can then compute the contribution of an individual covariate \(\hat{\delta}_{k}\) via the following algorithm:

  • Step 1: We compute the coefficients of \(k_{2}\) auxiliary regressions, collected in \(\hat{\Gamma}\), as \[ \hat{\Gamma} = (X_{1}'X_{1})^{-1} X_{1}'X_{2} \]

    In words, we regress each covariate in \(X_{2}\) on the target variable \(X_{1}\). In practice, we can do this in one line of code via scipy.linalg.lstsq() (see the sketch below).

  • Step 2: We can compute the total effect explained by the covariates, which we denote by \(\delta\), as

    \[ \hat{\delta} = \sum_{k=1}^{k_2} \hat{\Gamma}_{k} \hat{\beta}_{2,k} \]

    where \(\hat{\Gamma}_{k}\) is the coefficient from the auxiliary regression of covariate \(X_{2,k}\) on \(X_1\), and \(\hat{\beta}_{2,k}\) is the associated estimate on \(X_{2,k}\) from the full model.

    The individual contribution of covariate \(k\) is then defined as

    \[ \hat{\delta}_{k} = \hat{\Gamma}_{k} \hat{\beta}_{2,k}. \]

After having obtained \(\hat{\delta}_{k}\) for each covariate \(k\), we can easily aggregate the individual contributions into groups of interest. For example, if \(X_{2}\) contains a set of industry dummies, we can compute the part explained by “industry” by summing over all of them:

\[ \hat{\delta}_{\textit{industry}} = \sum_{k \in \textit{industry dummies}} \hat{\Gamma}_{k} \hat{\beta}_{2,k} \]
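
To make the two steps concrete, here is a minimal sketch on simulated data using numpy and scipy.linalg.lstsq(); the array names and the simulated data are illustrative assumptions, not part of the PSID example. It also verifies the omitted variable bias identity behind the decomposition: the short-model estimate equals the full-model estimate plus the total explained part, \(\hat{\beta}_{1}^{\text{short}} = \hat{\beta}_{1}^{\text{full}} + \hat{\delta}\).

import numpy as np
from scipy.linalg import lstsq

# Simulated data: X1 = [intercept, target dummy], X2 = three covariates.
rng = np.random.default_rng(42)
n = 1_000
X1 = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
X2 = rng.normal(size=(n, 3)) + 0.5 * X1[:, [1]]  # covariates correlated with the target
y = X1 @ np.array([1.0, 0.5]) + X2 @ np.array([0.2, -0.1, 0.3]) + rng.normal(size=n)

# Short model: y on X1 only; long (full) model: y on X1 and X2.
beta_short, *_ = lstsq(X1, y)
beta_full, *_ = lstsq(np.column_stack([X1, X2]), y)
beta2_hat = beta_full[X1.shape[1]:]  # coefficients on the covariates X2

# Step 1: auxiliary regressions of each covariate in X2 on X1 (one lstsq call).
Gamma_hat, *_ = lstsq(X1, X2)  # shape (k1, k2)

# Step 2: per-covariate contributions and the total explained part
# (row 1 of Gamma_hat corresponds to the target dummy).
delta_k = Gamma_hat[1, :] * beta2_hat
delta = delta_k.sum()

# Group contributions are sums of the relevant entries, e.g. delta_k[[0, 1]].sum().
# The omitted variable bias identity holds (up to floating point error):
print(np.allclose(beta_short[1], beta_full[1] + delta))  # True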

PyFixest Example

To employ Gelbach’s decomposition in pyfixest, we start with the full regression model that contains all variables of interest:

fit_full = pf.feols(
    "log(wage) ~ gender + ethnicity + education + experience + occupation + industry +year",
    data=psid,
)

After fitting the full model, we can run the decomposition by calling the decompose() method. The only required argument is the target parameter, in our case the coefficient “gender[T.male]”. Inference is conducted via a non-parametric bootstrap and can optionally be turned off.

gb = fit_full.decompose(param="gender[T.male]", digits=5)
100%|██████████| 1000/1000 [00:13<00:00, 72.16it/s]

As before, this produces a fairly large output table that reports

  • the direct effect from the short regression of log(wage) on gender,
  • the full effect of gender on log wages from the full regression with all control variables,
  • the explained effect, i.e. the difference between the direct and the full effect, and
  • a single scalar value for the individual contribution of each covariate to the overall explained effect.

For our example at hand, the additional covariates explain only a small fraction of the difference in log wages between men and women: 0.064 log points. Of this, around one third can be attributed to ethnicity, 0.00064 to years of education, and so on.
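
Note that the three columns are tied together by the decomposition identity: the direct effect equals the full effect plus the explained effect. With the values reported below, \(0.41034 + 0.06413 = 0.47447\).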

pf.make_table(gb)
|                    | direct_effect | full_effect | explained_effect |
|:-------------------|:--------------|:------------|:-----------------|
| gender[T.male]     | 0.47447 [0.45957, 0.50398] | 0.41034 [0.39909, 0.43440] | 0.06413 [0.04599, 0.10157] |
| ethnicity[T.other] |               |             | 0.02275 [0.01928, 0.02913] |
| education          |               |             | 0.00064 [-0.00669, 0.00551] |
| experience[T.2] … experience[T.51] | |           | one small contribution per level |
| occupation[T.white] |              |             | -0.01649 [-0.01603, -0.01136] |
| industry[T.yes]    |               |             | 0.01818 [0.01259, 0.02040] |
| year[T.1977] … year[T.1982] |      |             | 0.00000 (each year) |

(Bootstrap confidence intervals in brackets; the 50 individual experience-level rows and the six year-dummy rows are abbreviated here for readability.)

Because experience is a categorical variable, the table gets pretty unwieldy: we produce one estimate for each of its levels. Luckily, Gelbach’s decomposition allows us to group individual contributions into a single number. In the decompose() method, we can combine variables via the combine_covariates argument:

gb2 = fit_full.decompose(
    param="gender[T.male]",
    combine_covariates={
        "experience": re.compile("experience"),
        "occupation": re.compile("occupation"),
        "industry": re.compile("industry"),
        "year": re.compile("year"),
        "ethnicity": re.compile("ethnicity"),
    },
)
100%|██████████| 1000/1000 [00:07<00:00, 128.06it/s]

We now report a single value for “experience”, which accounts for a good chunk, roughly 60%, of the explained part of the gender wage gap.

pf.make_table(gb2)
|                | direct_effect | full_effect | explained_effect |
|:---------------|:--------------|:------------|:-----------------|
| gender[T.male] | 0.4745 [0.4558, 0.4775] | 0.4103 [0.3968, 0.4222] | 0.0635 [0.0512, 0.0784] |
| experience     |               |             | 0.0390 [0.0378, 0.0421] |
| occupation     |               |             | -0.0165 [-0.0277, -0.0166] |
| industry       |               |             | 0.0182 [0.0170, 0.0247] |
| year           |               |             | 0.0000 [-0.0013, 0.0141] |
| ethnicity      |               |             | 0.0227 [0.0170, 0.0200] |

We can aggregate even further, into individual-level (“personal”) and job-level (“job”) variables:

gb3 = fit_full.decompose(
    param="gender[T.male]",
    combine_covariates={
        "job": re.compile(r".*(occupation|industry).*"),
        "personal": re.compile(r".*(education|experience|ethnicity).*"),
        "year": re.compile("year"),
    },
)
100%|██████████| 1000/1000 [00:08<00:00, 116.51it/s]
pf.make_table(gb3)
|                | direct_effect | full_effect | explained_effect |
|:---------------|:--------------|:------------|:-----------------|
| gender[T.male] | 0.4745 [0.4511, 0.4826] | 0.4103 [0.3974, 0.4057] | 0.0641 [0.0536, 0.0852] |
| job            |               |             | 0.0017 [-0.0060, 0.0044] |
| personal       |               |             | 0.0624 [0.0504, 0.0775] |
| year           |               |             | 0.0000 [-0.0101, 0.0092] |

Literature
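
Gelbach, Jonah B. 2016. “When Do Covariates Matter? And Which Ones, and How Much?” Journal of Labor Economics 34 (2): 509–543.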