Despite the many successes of modern neural network toolkits like TensorFlow, one advantage of classical methods like linear mixed models is that they can apply different levels of regularization to different subsets of variables. For example, a customer-level factor with thousands of levels would likely benefit from more regularization than a US-state-level factor, and linear mixed models estimate those levels of regularization from the data. In a neural network context, learning multiple penalties using validation sets would be "very expensive to do," according to Geoff Hinton, co-inventor of backpropagation, in his Coursera course Neural Networks for Machine Learning from a few years ago.
Professor Hinton’s statement comes from Lecture 9f where he introduces MacKay’s “quick and dirty” method for using empirical Bayes to bypass the validation set in neural network training. The slide from the course describing the method is shown below:
In this article, we're going to implement MacKay's method in TensorFlow 2.0, but considering theory for a moment, the law of total variance gives us a reason for concern. It's the impetus for this Cross Validated question on why the variance of the predicted random effects from R's lme4 isn't the same as the estimated random effects variance matrix. Though it feels like you're seeing the actual random effects in lmer's output, you're actually seeing the predicted value of the random effect given the response, i.e., $E[b_i \mid y_i]$ for subject-specific random effect $b_i$ and data vector $y_i$.
From the Law of Total Variance,

$$\operatorname{Var}(b_i) = \operatorname{E}[\operatorname{Var}(b_i \mid y_i)] + \operatorname{Var}(\operatorname{E}[b_i \mid y_i]) \ge \operatorname{Var}(\operatorname{E}[b_i \mid y_i]),$$
which means that if we follow MacKay's recipe of estimating $\operatorname{Var}(b_i)$ from the empirical variance of the predictions, we're going to come up short in estimating the total variance of the weights. Since our goal is effective regularization rather than weight estimation, the question is whether this is good enough.
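To see the shortfall concretely, here is a small simulation of my own (an illustration, not part of MacKay's method or lme4): draw random intercepts, form the usual shrunken conditional means given noisy group means, and compare the variance of the predictions to the true random-effect variance.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_b, sigma_e, n_per_group = 5.0, 4.0, 3
n_groups = 20000

# True random intercepts and the corresponding noisy group means
b = rng.normal(0.0, sigma_b, n_groups)
y_bar = b + rng.normal(0.0, sigma_e / np.sqrt(n_per_group), n_groups)

# Conditional mean E[b | y_bar] under the normal model is a shrunken y_bar
shrink = sigma_b**2 / (sigma_b**2 + sigma_e**2 / n_per_group)
b_hat = shrink * y_bar

print(np.var(b))      # close to sigma_b**2 = 25
print(np.var(b_hat))  # noticeably smaller, roughly shrink * sigma_b**2
```

The empirical variance of the predictions undershoots the true variance by exactly the shrinkage factor, which is the gap the law of total variance predicts.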
Using lme4 on the sleepstudy data
Consider the sleepstudy example featured in R’s lme4 package:
```r
library(lme4)
fm1 <- lmer(Reaction ~ 1 + Days + (1 + Days | Subject), sleepstudy)
summary(fm1)
head(ranef(fm1)$Subject)
```
```
Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Subject  (Intercept) 612.09   24.740
          Days         35.07    5.922   0.07
 Residual             654.94   25.592
Number of obs: 180, groups:  Subject, 18

Fixed effects:
            Estimate Std. Error t value
(Intercept)  251.405      6.825  36.838
Days          10.467      1.546   6.771

> head(ranef(fm1)$Subject)
    (Intercept)       Days
308    2.258565  9.1989719
309  -40.398577 -8.6197032
310  -38.960246 -5.4488799
330   23.690498 -4.8143313
331   22.260203 -3.0698946
332    9.039526 -0.2721707
```
MacKay’s method on sleepstudy
The SleepReg class
The following examples will use the SleepReg class, an ad hoc subclass of tf.Module built specifically for maximum likelihood (equivalently, GLS) estimation of fixed effects and prediction of random effects, given the variances of the random effects and model errors. For an explanation of the TensorFlow 2.0 strategy and why inheriting from tf.Module is so important, refer to Multiple Regression in TensorFlow 2.0 using Matrix Notation.
The SleepReg class incorporates a (profiled) maximum likelihood loss of the form:

$$\text{loss} = \frac{(y - X\beta - Zb)^\top (y - X\beta - Zb)}{\sigma^2_\epsilon} + b^\top V b,$$

which is computed in the training step as:
```python
with tf.GradientTape() as gradient_tape:
    y_pred = self._get_expectation(X, Z, self.beta, self.b)
    loss = (self._get_sse(y, y_pred) / self.sigmasq_epsilon +
            self._get_neg_log_prior(self.b, V))
```
This is the sum of squared errors divided by the error variance, plus the likelihood contribution of the latent random effects in `_get_neg_log_prior` (referred to as a "prior" to reflect the empirical Bayes interpretation). The latter quantity is a weighted sum of squares of the random effects, where the weight matrix V is a block diagonal of the inverse random effects variance matrices.
```python
@tf.function
def _get_neg_log_prior(self, b, V):
    """Get the weight penalty from the full Gaussian distribution."""
    bTV = tf.matmul(tf.transpose(b), V)
    bTVb = tf.matmul(bTV, b)
    return tf.squeeze(bTVb)
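As a sanity check of that quadratic form outside TensorFlow (the numbers and names below are my own illustration, not part of SleepReg), assembling V as a block diagonal of per-subject inverse covariance matrices makes $b^\top V b$ equal the sum of per-subject weighted sums of squares:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2x2 random-effects covariance (intercept, slope) and its inverse
Sigma = np.array([[612.0, 9.6], [9.6, 35.0]])
Sigma_inv = np.linalg.inv(Sigma)

n_subjects = 4
# Stack per-subject (intercept, slope) effects into one column vector b
b_blocks = [rng.normal(size=2) for _ in range(n_subjects)]
b = np.concatenate(b_blocks).reshape(-1, 1)

# Block-diagonal V: one Sigma_inv block per subject
V = np.kron(np.eye(n_subjects), Sigma_inv)

# b' V b computed two ways: full quadratic form vs. per-subject terms
penalty_full = (b.T @ V @ b).item()
penalty_blocks = sum((bj @ Sigma_inv @ bj).item() for bj in b_blocks)

print(penalty_full, penalty_blocks)  # the two agree
```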
Reproducing lmer’s estimates in TensorFlow
The following TensorFlow 2.0 code reproduces both the random effect predictions and fixed effect estimates of lmer, but without routines such as REML to estimate the unknown variances. You'll see that the optimization routine matches lmer's output to a high degree of accuracy for both fixed effects estimates and random effects predictions.
```python
from sleepstudy import SleepReg
import numpy as np

sleep_reg = SleepReg("/mnt/c/devl/data/sleepstudy.csv")

# Replicate lme4's result
off_diag = 24.7404 * 5.9221 * 0.066
lmer_vcov = np.array([[24.7404 ** 2, off_diag],
                      [off_diag, 5.9221 ** 2]])
sleep_reg.reset_variances(lmer_vcov, 25.5918 ** 2)

sleep_reg.train()
sleep_reg.set_optimizer(adam=True)
sleep_reg.train(epochs=300)

print(sleep_reg.beta)
print(sleep_reg.get_random_effects().head())
```
```
<tf.Variable 'Variable:0' shape=(2, 1) dtype=float64, numpy=
array([[251.40510486],
       [ 10.46728596]])>
          mu         b
0   2.262934  9.198305
1 -40.399556 -8.619793
2 -25.207478  1.172853
3 -13.065620  6.613451
4   4.575970 -3.014939
```
Implementing MacKay’s method
The loss function component `_get_neg_log_prior` in SleepReg uses a block diagonal matrix, V, which is non-diagonal when there are correlations between the random effects. MacKay's proposed method instead uses the raw sum of squares of the weights, making for a very clean penalty:

$$\frac{1}{\sigma^2_w} \sum_i w_i^2$$
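As a quick numeric check (my own sketch, not from the course), a diagonal V with a single shared weight variance makes the quadratic-form penalty identical to a plain sum of squares divided by that variance:

```python
import numpy as np

sigmasq_w = 4.0
w = np.array([[1.5], [-2.0], [0.5]])  # column vector of weights

# MacKay-style penalty: diagonal V with one shared weight variance
V = np.eye(3) / sigmasq_w

quad_form = (w.T @ V @ w).item()
plain_ssq = (w ** 2).sum() / sigmasq_w

print(quad_form, plain_ssq)  # identical values
```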
While we run through MacKay's "while not yet bored" loop, we'll zero out the off-diagonal entries of V that result from non-zero covariances in the empirical variance matrix of the random effect predictions. What happens if you don't? I thought it would lead to a slightly less "quick and dirty" version of the algorithm, but the procedure actually bombs after a few iterations. You can see this yourself by commenting out the line with the `diag` function calls.
```python
sleep_reg.zero_coefficients()
sleep_reg.reset_variances(np.array([[410, 10], [10, 22]]),
                          .25 * np.var(sleep_reg.y))
sleep_reg.set_optimizer(adam=False)

for i in range(100):
    sleep_reg.train(display_beta=False)
    sigmasq_epsilon = sleep_reg.estimate_sigmasq_epsilon()
    V = sleep_reg.get_rnd_effs_variance()
    V_diag = np.diag(np.diag(V))  # comment out and watch procedure fail
    sleep_reg.reset_variances(V_diag, sigmasq_epsilon)

print(V_diag)
print(sigmasq_epsilon)
print(sleep_reg.beta)
print(sleep_reg.get_random_effects().head())
```
```
--- last V_diag
[[302.9045408    0.        ]
 [  0.          31.08902388]]
--- last sigmasq_epsilon
[670.8546961]
--- final estimate of fixed effect beta
<tf.Variable 'Variable:0' shape=(2, 1) dtype=float64, numpy=
array([[251.40510485],
       [ 10.46728596]])>
--- final random effects predictions
          mu         b
0   2.013963  9.147986
1 -32.683526 -9.633964
2 -20.255296  0.459532
3 -10.372529  6.169903
4   3.618080 -2.851007
```
As foretold by the law of total variance, the random effects variance estimates from MacKay's method are low: the variance of the random intercepts came in at 303, just under half of lmer's estimate of 612. Whether or not it's a coincidence, the empirically estimated variance of the random slopes was 31, much closer to lmer's estimate of 35. The poorer random effect predictions led to a slightly larger error variance of 671 versus lmer's 655, still relatively close.
Even with the inadequacies in variance estimation, the fixed effects estimates produced by MacKay's method are much closer to lmer's than to those of an OLS regression treating subjects as fixed factor levels. The random effect predictions themselves are shrunken too much, though still quite close for some subjects. The procedure, true to its name, is quick and dirty, but it clearly has some value. I'm curious whether there's a data-driven way to scale up the empirical weight variances; that also gets into the inherent uncertainty in the weight estimation.
Why the procedure breaks down under even a slight deviation from an independent random-coefficients model remains a mystery to me.
I have a vision of a toolkit with the power of TensorFlow but with the utility of empirical Bayes for estimating hyperparameters. Parts of that vision were explored in this article. Whether MacKay's method will find its way into my standard modeling toolkit remains to be seen, but the experiments done here have only heightened my curiosity about it.