While the two were always friendly, TensorFlow has fully embraced the Keras API in version 2.0, making it THE high-level API and further tightening its integration into the platform's core. From tutorials, videos, and press releases, the message is resounding: *use the Keras API, unless you absolutely, positively, can't*.

All signs point to the Keras API as being a world-class API, but it is a *neural networks* API. And while many statistical models can be framed as neural networks, there is another API that some prefer: the "matrix algebra API." The good news is that TensorFlow 2.0's new eager execution defaults mean that working with linear models in matrix form is easier than ever. Once you know a few caveats, it doesn't feel much different from working in NumPy. And this is great news!
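As a minimal illustration (not part of the original walkthrough), here is what "feels like NumPy" means in practice: with eager execution on by default, matrix expressions evaluate immediately, with no sessions or graph building required.

import numpy as np
import tensorflow as tf

# With eager execution (the TF 2.0 default), this evaluates right away.
A = tf.constant(np.array([[1.0, 2.0], [3.0, 4.0]]))
b = tf.constant(np.array([[1.0], [0.5]]))
print(tf.matmul(A, b).numpy())
# [[2.]
#  [5.]]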

In this post, we're going to do multiple regression using Fisher's Iris data set, regressing *Sepal Length* on *Sepal Width* and *Petal Length* (for no particular scientific reason) using TensorFlow 2.0. Yes, there is an official linear regression tutorial for TensorFlow 2.0, but it does not feature the matrix calculations (or explain the caveats) that this article will.

In matrix notation, we'll be fitting the following model:

$$\mathbf{y} = \begin{bmatrix} \mathbf{1} & \mathbf{x} & \mathbf{z} \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix} + \boldsymbol{\varepsilon}$$

Where *y* is *Sepal Length*, *x* is *Sepal Width*, *z* is *Petal Length*, and the errors $\varepsilon_i$ are *i.i.d.* $N(0, \sigma^2)$.
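For completeness (this isn't spelled out in the original, but it is exactly what the loss function and training loop below end up minimizing), the ordinary least squares estimates solve:

$$\left(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2\right) = \underset{\beta_0,\, \beta_1,\, \beta_2}{\arg\min}\ \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \beta_0 - \beta_1 x_i - \beta_2 z_i\right)^2$$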

Let’s make this regression happen in Python using the *statsmodels* module:

import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Part 1: OLS Regression on two predictors using statsmodels
iris_df = sm.datasets.get_rdataset('iris').data
iris_df.columns = [name.replace('.', '_') for name in iris_df.columns]

reg_model = smf.ols(formula='Sepal_Length ~ Sepal_Width + Petal_Length', data=iris_df)
fitted_model = reg_model.fit()
fitted_model.summary()

This gives us the (partial) output:

===================================================================
                 coef    std err          t      P>|t|
-------------------------------------------------------------------
Intercept      2.2491      0.248      9.070      0.000
Sepal_Width    0.5955      0.069      8.590      0.000
Petal_Length   0.4719      0.017     27.569      0.000
===================================================================
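If you only want the point estimates (handy for comparing against the TensorFlow results later), the fitted statsmodels results object also exposes them directly:

# Returns a pandas Series with the Intercept, Sepal_Width, and Petal_Length estimates.
fitted_model.params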

Now let’s spin up TensorFlow and convert our matrices and vectors into “tensors”:

import tensorflow as tf
import patsy

X_matrix = patsy.dmatrix('1 + Sepal_Width + Petal_Length', data=iris_df)
X = tf.constant(X_matrix)
y = tf.constant(iris_df.Sepal_Length.values.reshape((150, 1)))

We're using constant tensors for our data vectors, and everything looks pretty straightforward here, but there is a spike-filled trap that we just stepped over. **Caveat #1** is: *if you don't reshape your vector y into an actual column vector, the following code will run but lead to incorrect estimates.* Without the second dimension, y broadcasts against the column of predictions instead of lining up with it element by element, and the fit collapses into what is basically an intercept-only model.
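Here is a minimal sketch of the shape trap (illustrative, using a tiny made-up vector rather than the iris data): subtracting a (3, 1) prediction from a flat (3,) vector broadcasts into a full 3×3 matrix, which is not the residual vector you want.

y_flat = tf.constant([1.0, 2.0, 3.0])        # shape (3,): what you get without the reshape
y_col = tf.reshape(y_flat, (3, 1))           # shape (3, 1): an actual column vector
preds = tf.constant([[1.1], [1.9], [3.2]])   # shape (3, 1): like the output of tf.matmul(X, beta)

print((y_flat - preds).shape)  # (3, 3) -- broadcasting silently builds a matrix of differences
print((y_col - preds).shape)   # (3, 1) -- the elementwise residuals we actually want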

The next thing we’ll do is to create our variable tensor that will hold our regression weights. You could directly create a TensorFlow variable, but don’t. Instead, subclass from *tf.Module*:

class IrisReg(tf.Module):
    def __init__(self, starting_vector=[[0.0], [0.0], [0.0]]):
        self.beta = tf.Variable(starting_vector, dtype=tf.float64)

irisreg = IrisReg()

I don't love this, as it feels bureaucratic and I'd rather just work with a variable called "beta." But you really need to do this unless you want to roll your own gradient descent. **Caveat #2** is: *do not bypass subclassing from tf.Module, or else you will struggle with your optimizer's .apply_gradients method*. By subclassing from tf.Module, you get a *trainable_variables* property that you can treat like the parameter vector but that is also iterable.
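For context, here is a sketch (mine, not the post's) of what "rolling your own" gradient descent with a bare tf.Variable would look like; the mechanics work, but you give up the optimizer's momentum and bookkeeping.

# Hand-rolled gradient descent with a bare variable (illustrative only; no momentum).
beta = tf.Variable([[0.0], [0.0], [0.0]], dtype=tf.float64)
for _ in range(1000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(y - tf.matmul(X, beta)))
    grad = tape.gradient(loss, beta)
    beta.assign_sub(0.01 * grad)  # the manual update step that apply_gradients replaces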

The matrix math for the prediction is a little anticlimactic:

@tf.function
def predict(X, beta):
    return tf.matmul(X, beta)

and the @tf.function decorator is optional; it traces the function into a graph for a performance benefit. The OLS regression loss is so simple that it's also worth defining it explicitly (and, for full transparency, I did have some trouble with the built-in losses):

@tf.function
def get_loss(observed, predicted):
    return tf.reduce_mean(tf.square(observed - predicted))

While the loss function can easily be coded from scratch, there are too many benefits to using a built-in optimizer, like built-in momentum for gradient descent.

sgd_optimizer = tf.optimizers.SGD(learning_rate=.01, momentum=.98)

The rest of the training is presented in the following loop:

for epoch in range(1000):
    with tf.GradientTape() as gradient_tape:
        y_pred = predict(X, irisreg.trainable_variables)
        loss = get_loss(y, y_pred)
    gradient = gradient_tape.gradient(loss, irisreg.trainable_variables)
    sgd_optimizer.apply_gradients(zip(gradient, irisreg.trainable_variables))

print(irisreg.trainable_variables)

With eager execution enabled by default in TensorFlow 2.0, you have to record the "forward pass" (i.e. the prediction) on a "gradient tape" in order to get the gradients. Notice that the *trainable_variables* property is used in place of the parameter vector everywhere. You could get away with a plain variable for every step up until the optimizer's *apply_gradients* method, but mixing and matching the two caused me trouble as well.

(<tf.Variable 'Variable:0' shape=(3, 1) dtype=float64, numpy=
array([[2.24920008],
       [0.59551132],
       [0.47195184]])>,)

It's not the most sophisticated training loop, but even starting from an awful choice of starting vector, the procedure quickly converges to the OLS regression estimates. The loss function is easy to alter to create a Ridge Regression or LASSO procedure (see the sketch below). And being in the TensorFlow ecosystem means that these techniques would scale to big datasets, be easily ported to JavaScript using TensorFlow.js, and be made available to the TensorBoard debugging utilities.
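As a hedged sketch of what that alteration might look like (the function name and the penalty weight `lam` are my own, not from the post), a ridge-style loss just adds an L2 penalty on the slope coefficients to the same mean squared error:

# Illustrative ridge-style loss; swap tf.square for tf.abs in the penalty for a LASSO-style loss.
@tf.function
def get_ridge_loss(observed, predicted, beta, lam=0.1):
    mse = tf.reduce_mean(tf.square(observed - predicted))
    penalty = lam * tf.reduce_sum(tf.square(beta[1:]))  # typically the intercept is not penalized
    return mse + penalty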

It’s not just neural network enthusiasts who can gain from TensorFlow. Statisticians and other Data Scientists who prefer matrix manipulation can now really enjoy using TensorFlow thanks to the very cool eager enhancements in TensorFlow 2.0.