Linear Mixed Models in TensorFlow 2.0 via MacKay’s method

Despite the many successes of modern neural network toolkits like TensorFlow, one of the advantages of old-school tools like R’s lme4 is that a linear model can have different levels of regularization for subsets of variables. A user-level factor with thousands of levels would likely benefit from more regularization than a US state-level factor, for instance, and linear mixed models estimate those levels of regularization directly from the data. In a neural networks context, according to Geoff Hinton, learning multiple penalties using validation sets would be “very expensive to do.”

Professor Hinton’s statement comes from Lecture 9f of Neural Networks for Machine Learning, where he introduces MacKay’s “quick and dirty” method for using empirical Bayes to bypass the validation set when training neural networks. The slide from the course describing the method is shown below:

Lecture 9 slide describing MacKay’s method from Geoff Hinton’s course


In this article, we’re going to implement MacKay’s method in TensorFlow 2.0 using linear mixed models as an example. However, the law of total variance gives the classical statistician some reason for concern. It’s the impetus for this Cross Validated question on why the variance of the predicted random effects from lme4 isn’t the same as the estimated random effects variance matrix. Though it feels like you’re seeing the actual random effects in lmer’s output, you’re actually seeing the predicted value of the random effect given the response, i.e., \text{E}(b_i \vert \mathbf{y}_i) for subject-specific random effect b_i and data vector \mathbf{y}_i.


From the Law of Total Variance,

    \[\text{Var}(b_i) = \text{E}(\text{Var}(b_i \vert \mathbf{y}_i)) + \text{Var}(\text{E}(b_i \vert \mathbf{y}_i)),\]


which means that if we follow MacKay’s recipe for estimating \text{Var}(b_i), we’re going to come up short: the empirical variance of the predicted random effects targets only the second term, \text{Var}(\text{E}(b_i \vert \mathbf{y}_i)), and leaves out \text{E}(\text{Var}(b_i \vert \mathbf{y}_i)). But our goal is effective regularization rather than publication of random effects variance estimates in Nature. So let’s give it a try.

Using lme4 on the sleepstudy data

Consider the sleepstudy example featured in R’s lme4 package:

library(lme4)
fm1 <- lmer(Reaction ~ 1 + Days + (1 + Days | Subject), sleepstudy)
summary(fm1)
head(ranef(fm1)[[1]])
Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Subject  (Intercept) 612.09   24.740
          Days         35.07    5.922   0.07
 Residual             654.94   25.592
Number of obs: 180, groups:  Subject, 18

Fixed effects:
            Estimate Std. Error t value
(Intercept)  251.405      6.825  36.838
Days          10.467      1.546   6.771

> head(ranef(fm1)[[1]])
    (Intercept)       Days
308    2.258565  9.1989719
309  -40.398577 -8.6197032
310  -38.960246 -5.4488799
330   23.690498 -4.8143313
331   22.260203 -3.0698946
332    9.039526 -0.2721707

MacKay’s method on sleepstudy

The SleepReg class

The following examples will use the SleepReg class, an ad hoc subclass of tensorflow.Module built specifically for maximum likelihood (equivalently, GLS) estimation and prediction of fixed and random effects, given variances for the random effects and model errors. For an explanation of the TensorFlow 2.0 strategy and why inheriting from tf.Module is so important, refer to Multiple Regression in TensorFlow 2.0 using Matrix Notation.
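For orientation, here’s a rough sketch of how such a subclass might be set up. This is not the actual SleepReg source, which also handles data loading, training, and variance bookkeeping; the design matrices X and Z and the double-precision dtype are my assumptions, but the method names match the snippets below.

import tensorflow as tf

class SleepRegSketch(tf.Module):
    """A stripped-down sketch of the SleepReg idea, not the real class."""

    def __init__(self, X, Z, y):
        self.X = tf.constant(X, dtype=tf.float64)  # fixed effects design matrix
        self.Z = tf.constant(Z, dtype=tf.float64)  # random effects design matrix
        self.y = tf.constant(y, dtype=tf.float64)  # response as a column vector
        self.beta = tf.Variable(tf.zeros((X.shape[1], 1), dtype=tf.float64))  # fixed effects
        self.b = tf.Variable(tf.zeros((Z.shape[1], 1), dtype=tf.float64))     # random effects

    @tf.function
    def _get_expectation(self, X, Z, beta, b):
        # Linear predictor: fixed effects part plus random effects part
        return tf.matmul(X, beta) + tf.matmul(Z, b)

    @tf.function
    def _get_sse(self, y, y_pred):
        # Sum of squared errors
        return tf.reduce_sum(tf.square(y - y_pred))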

The SleepReg class incorporates a (profiled) maximum likelihood loss of the form:

with tf.GradientTape() as gradient_tape:
    y_pred = self._get_expectation(X, Z, self.beta, self.b) 
    loss = (self._get_sse(y, y_pred) / self.sigmasq_epsilon
            + self._get_neg_log_prior(self.b, V))

This involves the sum of squared errors divided by the error variance plus the likelihood contribution of the latent random effects in _get_neg_log_prior (referred to as a “prior” to reflect the empirical Bayes interpretation). The latter quantity is a weighted sum of squares of the random effects, where the weight matrix V is a block diagonal of the inverse random effects variance matrices.

@tf.function
def _get_neg_log_prior(self, b, V):
    """Get the weight pentalty from the full Gaussian distribution"""
    bTV = tf.matmul(tf.transpose(b), V)                                                              
    bTVb = tf.matmul(bTV, b)
    return tf.squeeze(bTVb)
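For concreteness, the block-diagonal V might be assembled along the following lines. This is a sketch rather than the repository code, and it assumes the random effects vector b stacks one intercept-and-slope pair per subject.

import numpy as np
from scipy.linalg import block_diag

def build_V(rnd_eff_vcov, n_subjects):
    """Block diagonal of the inverse random effects covariance matrix, one block per subject."""
    vcov_inv = np.linalg.inv(rnd_eff_vcov)
    return block_diag(*([vcov_inv] * n_subjects))

# e.g., with lme4's estimated 2x2 covariance matrix and the 18 sleepstudy subjects:
# V = build_V(lmer_vcov, 18)  # a 36 x 36 block diagonal matrix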

Reproducing lmer’s estimates in TensorFlow

The following shows TensorFlow 2.0 code capable of reproducing both the random effect predictions and fixed effect estimates of lmer, but without routines such as REML for estimating the unknown variances. You’ll see that the optimization routine matches lmer’s output (to a high degree of accuracy) for both fixed effects estimates and random effects predictions.

from sleepstudy import SleepReg
import numpy as np

sleep_reg = SleepReg("/mnt/c/devl/data/sleepstudy.csv")

# Replicate lme4's result
off_diag = 24.7404 * 5.9221 * 0.066
lmer_vcov = np.array([[24.7404 ** 2, off_diag],
                      [off_diag, 5.9221 ** 2]])

sleep_reg.reset_variances(lmer_vcov, 25.5918 ** 2)

sleep_reg.train()
sleep_reg.set_optimizer(adam=True)
sleep_reg.train(epochs=300)

print(sleep_reg.beta)
print(sleep_reg.get_random_effects().head())
<tf.Variable 'Variable:0' shape=(2, 1) dtype=float64, numpy=
array([[251.40510486],
       [ 10.46728596]])>
          mu         b
0   2.262934  9.198305
1 -40.399556 -8.619793
2 -25.207478  1.172853
3 -13.065620  6.613451
4   4.575970 -3.014939

Implementing MacKay’s method

The loss function component _get_neg_log_prior in SleepReg uses a block diagonal matrix, V, which is non-diagonal if there are correlations between the random effects. MacKay’s proposed method uses the raw sum of squares of the weights, making for a very clean equation:

Lecture 9 slide describing Bayesian weight decay from Geoff Hinton’s course
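As I read the slide, the “while not yet bored” updates amount to re-estimating the noise variance from the residuals and the weight-prior variance from the fitted weights themselves (my paraphrase, with N data points and M weights):

    \[\hat{\sigma}^2_\epsilon = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2, \qquad \hat{\sigma}^2_w = \frac{1}{M}\sum_{j=1}^{M}\hat{w}_j^2.\]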

While we go through MacKay’s “while not yet bored” loop, we’ll zero out the non-diagonals of V that result from non-zero covariances in the empirical variance matrix of the random effect predictions. What happens if you don’t? I wondered if you’d get a slightly less “quick and dirty” version of the algorithm, but the procedure actually bombs after a few iterations. You can see this yourself by commenting out the line with the diag function calls.

sleep_reg.zero_coefficients()
sleep_reg.reset_variances(np.array([[410, 10], [10, 22]]),
                         .25 * np.var(sleep_reg.y))
sleep_reg.set_optimizer(adam=False)

for i in range(100):
    sleep_reg.train(display_beta=False)
    
    sigmasq_epsilon = sleep_reg.estimate_sigmasq_epsilon()
       
    V = sleep_reg.get_rnd_effs_variance()
    V_diag = np.diag(np.diag(V)) # comment out and watch procedure fail

    sleep_reg.reset_variances(V_diag, sigmasq_epsilon)

    print(V_diag)
    print(sigmasq_epsilon)

print(sleep_reg.beta)
print(sleep_reg.get_random_effects().head())
--- last V_diag
[[302.9045408    0.        ]
 [  0.          31.08902388]]
--- last sigmasq_epsilon
[670.8546961]
--- final estimate of fixed effect beta
<tf.Variable 'Variable:0' shape=(2, 1) dtype=float64, numpy=
array([[251.40510485],
       [ 10.46728596]])>
--- final random effects predictions
          mu         b
0   2.013963  9.147986
1 -32.683526 -9.633964
2 -20.255296  0.459532
3 -10.372529  6.169903
4   3.618080 -2.851007

Discussion

As foretold by the law of total variance, the random effects variance estimates from MacKay’s method are low: the random intercept variance came in at 303, just under half of lmer’s estimate of 612. Whether or not it’s a coincidence, the empirically estimated variance of the random slopes was 31, much closer to lmer’s estimate of 35. The poorer random effect predictions led to a slightly larger error variance of 671 vs. lmer’s 655, but still relatively close.

Even with the inadequacies in variance estimation, the fixed effects estimates produced by MacKay’s method are much closer to lmer’s than an OLS regression treating subjects as fixed factor levels would be. The random effect predictions themselves are shrunken down a bit too much, though still quite close for some subjects. The procedure, true to its name, is quick and dirty, but it works.

That the procedure breaks down under even a slight deviation from independent random effects remains a mystery to me, however. Underestimates of the random effects variances seem to be forgiven (at least in terms of algorithm stability) as long as there is no link between them. Perhaps another improvement to the method, one that might even allow correlations, is to multiply the empirical variances by a scaling factor greater than one. But what would that factor be, so as not to make the method even “dirtier”?

I have a vision of a toolkit as powerful as TensorFlow that also offers the basic inferential tools and benefits of classical statistics. That vision is still far from a reality, but components of it were explored in this article. I thought it was cool how teaming stochastic gradient descent with the Adam optimizer locked in the mixed model parameter values so quickly (starting far away from the MLE estimates, Adam on its own moves very slowly). Whether or not MacKay’s method will find its way into my standard modeling toolkit is yet to be seen, but my curiosity about the method is only enhanced by these latest experiments.


Introducing “datascroller” for fast terminal data frame scrolling

Category : Python , Tools

I’m excited to announce my very first package on pypi, datascroller, a Python package for interactive terminal data scrolling. It’s available for Windows as well as *nix systems (thanks to the windows-curses package), and there are issues for outside contributors on the datascroller Github repo.

How it works

See the gif below for a glimpse of datascroller in action:

datascroller allows terminal datascrolling

During that demo, I was pressing keys to resize the terminal viewing window and to scroll from left-to-right and up-to-down within a Pandas data frame. Currently the scrolling keys are inspired by vim but later versions will offer customization options.

You can install datascroller with pip using:

pip install datascroller

Try datascroller out in IPython with the following code:

import pandas as pd
from datascroller.scroller import scroller

train = pd.read_csv(
    'https://raw.githubusercontent.com/datasets/house-prices-uk/master/data/data.csv')

scroller(train)

Why a terminal data scroller?

Scrolling through data is a fundamental part of exploratory data analysis, and we’ve all had open-source tools let us down. My first experience with industrial-grade data scrolling came from using SAS at the turn of the century. Even then, you could scroll through tens of millions of rows on your 386DX thanks to what must have been a very clever paging strategy. Say what you want about SAS, but honestly no other data viewer since has beaten it for me.

Moving to R around 2009, I had to accept the loss of SAS’s data set viewer and learn to live with the built-in viewer, or just print slices of the data frame in the console. Around 2010, I started using RStudio and was impressed with its viewer, but it still couldn’t hold a candle to SAS’s and didn’t handle very large data sets well at the time (to the best of my recollection).

In 2019, RStudio may very well have its data viewer tuned to perfection. Even so, there are still some of us who find full-blown IDEs and even notebooks bulky and not worth the hassle. Like electric sunroofs, they’re just one more thing to break; sometimes rolling down your windows is good enough. That’s why, until the day I die or go completely blind (more to come), I’ll be typing into a terminal.

The problem with working with data in a terminal is that you often don’t have access to graphical displays (without complicated setups), and you end up having to print slices of your data sets in the terminal for exploratory analysis. This slows you down! And while R’s tibble and Pandas’ DataFrame are smart enough not to overwhelm your console with output, they make you work to see the parts of the data that you really need to see.

The datascroller vision

The featured image is a play on the movie “Minority Report” and its very memorable scene with Tom Cruise’s character using the futuristic interface to sort through information. I always wanted to move around the data set like that, and I felt that the terminal would be a good place to do it. In 2014, at Google, I took my first crack at this with an internal R package I called “terminalR.” I got helpful feedback from mentors there, especially Tim Hesterberg, which I plan to incorporate into datascroller. The problem with terminalR was that you had to “drum” on the enter key while you used it (it relied on standard console input methods), which was corny. But Python offers the curses library, allowing my interactive “vision” to come true.

What’s next for datascroller?

The Python package datascroller, currently for use with Pandas data frames, will become the tool “datascroller” for general-purpose terminal data scrolling. Imagine interactive terminal scrolling of any CSV, text, or even JSON file, initiated from outside of Python. And I’m trying to convince my friend John Merfeld, who makes extensive use of low vision accessibility tools, to help me light this thing up like a Christmas tree and make datascroller itself an accessibility tool.

I have big plans for this tool.


Multiple Regression in TensorFlow 2.0 using Matrix Notation

Category : Tools

While the two were always friendly, TensorFlow has fully embraced the Keras API in version 2.0, making it THE high-level API and further tightening its integration into the platform’s core. From tutorials, videos, and press releases, the message is resounding: use the Keras API, unless you absolutely, positively, can’t.

All signs point to the Keras API as being a world class API, but it is a neural networks API. And while many statistical models can be framed as neural networks, there is another API that some prefer: the “matrix algebra API.” The good news is that TensorFlow 2.0’s new eager execution defaults mean that working with linear models in matrix form is easier than ever. Once you know a few caveats, it doesn’t feel so different than working in numpy. And this is great news!

In this post, we’re going to do multiple regression using Fisher’s Iris data set, regressing Sepal Length on Sepal Width and Petal Length (for no particular scientific reason) using TensorFlow 2.0. Yes, there is an official linear regression tutorial for TensorFlow 2.0, but it does not feature the matrix calculations (or explain the caveats) that this article will.

In matrix notation, we’ll be fitting the following model:


    \[\left[\begin{matrix} y_1 \\ y_2 \\ \vdots \\ y_{150} \end{matrix}\right] = \left[\begin{matrix} 1 & x_1 & z_1 \\ 1 & x_2 & z_2 \\ \vdots & \vdots & \vdots \\ 1 & x_{150} & z_{150} \end{matrix}\right] \left[\begin{matrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{matrix}\right] + \left[\begin{matrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_{150} \end{matrix}\right],\]


where y is Sepal Length, x is Sepal Width, z is Petal Length, and \epsilon_1, \ldots, \epsilon_{150} are i.i.d. N(0, \sigma^2).

Let’s make this regression happen in Python using the statsmodels module:

import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Part 1: OLS Regression on two predictors using statsmodels
iris_df = sm.datasets.get_rdataset('iris').data
iris_df.columns = [name.replace('.', '_') for name in iris_df.columns]
reg_model = smf.ols(formula='Sepal_Length ~ Sepal_Width + Petal_Length',
                    data=iris_df)
fitted_model = reg_model.fit()
fitted_model.summary()

This gives us the (partial) output:

===================================================================
                   coef    std err          t      P>|t|
-------------------------------------------------------------------
Intercept        2.2491      0.248      9.070      0.000
Sepal_Width      0.5955      0.069      8.590      0.000
Petal_Length     0.4719      0.017     27.569      0.000
===================================================================

Now let’s spin up TensorFlow and convert our matrices and vectors into “tensors”:

import tensorflow as tf
import patsy                                                                                                                                                            
X_matrix = patsy.dmatrix('1 + Sepal_Width + Petal_Length', data=iris_df)

X = tf.constant(X_matrix)
y = tf.constant(iris_df.Sepal_Length.values.reshape((150, 1)))

We’re using constant tensors for our data vectors, and everything looks pretty straightforward here, but there is a spike-filled trap that we just stepped over. Caveat #1: if you don’t reshape your vector y into an actual column vector, the following code will run but lead to incorrect estimates. Without the explicit second dimension, the fit collapses to basically an intercept-only model; broadcasting compromises the ordering unless y really has two dimensions.
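To make the trap concrete, here’s a quick shape check in NumPy. My understanding of what goes wrong: the subtraction inside the loss broadcasts a (150,) vector against a (150, 1) prediction into a 150 x 150 matrix.

import numpy as np

y_flat = iris_df.Sepal_Length.values    # shape (150,)
y_col = y_flat.reshape((150, 1))        # shape (150, 1)
y_pred = np.zeros((150, 1))             # stand-in for a column of predictions

print((y_col - y_pred).shape)   # (150, 1)   -- the residuals we want
print((y_flat - y_pred).shape)  # (150, 150) -- silent broadcasting disaster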

The next thing we’ll do is to create our variable tensor that will hold our regression weights. You could directly create a TensorFlow variable, but don’t. Instead, subclass from tf.Module:

class IrisReg(tf.Module):
    def __init__(self, starting_vector = [[0.0], [0.0], [0.0]]):
        self.beta = tf.Variable(starting_vector, dtype=tf.float64)

irisreg = IrisReg()

I don’t love this, as it feels bureaucratic and I’d rather just work with a variable called “beta.” But you really need to do this unless you want to roll your own gradient descent. Caveat #2: don’t bypass subclassing from tf.Module, or you will struggle with your optimizer’s .apply_gradients method. By subclassing from tf.Module, you get a trainable_variables property that you can treat like the parameter vector, but which is also iterable.

The matrix math for the prediction is a little anticlimactic:

@tf.function
def predict(X, beta):
    return tf.matmul(X, beta)

The @tf.function decorator is optional but provides a performance benefit. The OLS regression loss is so simple that it’s worth defining explicitly (and, for full transparency, I did have some trouble with the built-in losses):

@tf.function
def get_loss(observed, predicted):
    return tf.reduce_mean(tf.square(observed - predicted))

While the loss function can easily be coded from scratch, there are too many benefits to using a built-in optimizer, like built-in momentum for gradient descent.

sgd_optimizer = tf.optimizers.SGD(learning_rate=.01, momentum=.98)

The rest of the training is presented in the following loop:

for epoch in range(1000):

    with tf.GradientTape() as gradient_tape:
        y_pred = predict(X, irisreg.trainable_variables)
        loss = get_loss(y, y_pred)

    gradient = gradient_tape.gradient(loss, irisreg.trainable_variables)
    sgd_optimizer.apply_gradients(zip(gradient,
                                      irisreg.trainable_variables))

    print(irisreg.trainable_variables)

With eager execution enabled by default in TensorFlow 2.0, running “gradient tape” through the “forward pass” (i.e., prediction) is necessary to get the gradients. Notice that the trainable_variables property is used in place of the parameter vector in all situations. You could get away with a plain variable for every step up until the optimizer’s apply_gradients method, but mixing and matching the two caused trouble as well.

(<tf.Variable 'Variable:0' shape=(3, 1) dtype=float64, numpy=
array([[2.24920008],
       [0.59551132],
       [0.47195184]])>,)

It’s not the most sophisticated training loop, but starting from an awful choice of starting vector, the procedure quickly converges to the OLS regression estimates. The loss function is easy to alter to create a Ridge Regression or LASSO procedure. And being in the TensorFlow ecosystem means that these techniques would scale to big datasets, be easily ported to JavaScript using TensorFlow.js, and made available to the TensorBoard debugging utilities.
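To back up the claim about altering the loss, a ridge variant might look like the following sketch (the penalty weight of 0.1 is arbitrary, and the intercept is left unpenalized):

@tf.function
def get_ridge_loss(observed, predicted, beta, penalty=0.1):
    """Mean squared error plus an L2 penalty on the non-intercept coefficients."""
    mse = tf.reduce_mean(tf.square(observed - predicted))
    return mse + penalty * tf.reduce_sum(tf.square(beta[1:]))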

It’s not just neural network enthusiasts who can gain from TensorFlow. Statisticians and other Data Scientists who prefer matrix manipulation can now really enjoy using TensorFlow thanks to the very cool eager enhancements in TensorFlow 2.0.


Using the WordPress Template Hierarchy to Improve a Resume Plugin

Every Sole Proprietor’s website needs a resume, and given the expansiveness of the WordPress plugin ecosystem, I expected to see a half dozen to choose from. In reality, there’s only one with any amount of attention, Resume Builder from Boxy Studio.

I’m still working on the resume content, but the auto-formatted layout is not bad. The “star ratings” for skills are maybe a little corny but fun.

The plugin adds a “Resumes” section to your admin console screen, and when you add a new resume you’re treated to a very structured layout.

Don’t expect to drag and drop sections like they’re blocks in a WordPress editor. The plugin works, but doesn’t have that level of polish.

A resume that looks like a blog entry

The problem I encountered was that the resume looked like this:

Ridiculous! One copy of my mug per page is plenty. You can see from the top left corner of the image that there’s a bit of WordPress debugging going on (live on the website, of course). I just couldn’t figure out why it was displaying like a blog article.

Well, if it looks like a duck… OK, when I’m editing my resume, the URL ends with post.php?post=75&action=edit. So the resume is just a post, with ID 75 hard-coded in my case. This explains why the shortcodes have 75 in them (e.g., rb-resume id="75" section="intro"). That left me with a dilemma, because I want at least the date on the blog articles (not sure about the mug), but I don’t want it on the resume.

Working harder, I echoed the post type in single.php by adding the following after the header:

<?php echo get_post_type(); ?>

When I clicked on the link for a regular blog article (which you have to do to trigger single.php; going to the Blog link isn’t enough!), I’d see “post” echoed on the screen. But when I clicked on the resume, I saw “rb_resume.” Resume Builder uses a post type of “rb_resume.”

A Single.php for different post types

To solve the problem, I copied my single.php file into a new file called single-rb_resume.php. Since single.php calls another template part, I had to copy that file as well. And then I started hacking, and more in the machete sense than the computer programming one. Finally I was able to pull away the parts that made an individual blog post look like a blog post, and I arrived at the following:

Of course, instead of actually finishing the resume, I left satisfied with solving the templating problem; it’s still a work in progress.

The template hierarchy is a pretty scary part of WordPress. I can’t even begin to say I understand it, but I know that it’s there, and this situation has proven that exploiting the hierarchy can be a very powerful technique.


Data Science Apps in the Cloud with Heroku

Category : Infrastructure

I recently had an opportunity to work with Heroku, a platform-as-a-service for deploying and running apps, for deploying Python-based data science applications in the cloud. At first, I didn’t understand why the engagement wasn’t just using AWS, since data science related instances abound on the EC2 marketplace. What I learned, however, is that AWS can be a money pit for businesses without a dedicated IT team. It is a complex beast that requires competent professionals to tame. Heroku, on the other hand, just seems to work.

In this article, I’ll go through the basics of creating a Heroku application that at least loads popular data science dependencies in Python. In later articles I may take the example to the end, where I load the Iris data set, run a regression on it using the statsmodels package, and write the results into a database on Heroku. All of this can be run using Heroku’s very simple free scheduler.

Preliminaries

To get started, create a free Heroku account at www.heroku.com, install the Heroku CLI, and run the following commands in a bash shell (Windows 10 users are encouraged to use the Ubuntu 18.04 app):

git clone https://github.com/baogorek/HerokuExample.git
cd HerokuExample
heroku login
heroku create

After cloning in the first line, the second line changes directories to the folder that contains the Heroku application, and the third line opens a browser window to log into your Heroku account. Finally, heroku create registers the app with the service. The output of that line is:

Creating app... done, ⬢ mysterious-badlands-45487
https://mysterious-badlands-45487.herokuapp.com/ | https://git.heroku.com/mysterious-badlands-45487.git

which shows us that it is given a name, a URL, and its own git repository. If you navigate to the URL, you get a default welcome screen, but we won’t be building a web app in this article.

The git repository is interesting, because it seemed like we already had one. But this is a git remote hosted by Heroku itself, and it’s a big part of their deployment strategy. If I run a git remote -v, I can see it:

heroku  https://git.heroku.com/mysterious-badlands-45487.git (fetch)
heroku  https://git.heroku.com/mysterious-badlands-45487.git (push)
origin  https://github.com/baogorek/HerokuExample.git (fetch)
origin  https://github.com/baogorek/HerokuExample.git (push)

Even though I haven’t added anything new through git, I can deploy the app that I have by pushing to the heroku remote:

git push heroku master

Just that simple command sets off a lot of activity. Here is a selection of the output:

remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Python app detected
remote: -----> Installing python-3.6.8
...
remote:        Installing collected packages: numpy, scipy, six, python-dateutil, pytz, pandas, patsy, cython, statsmodels, psycopg2-binary   
...
remote: -----> Launching...
remote:        Released v3
remote:        https://mysterious-badlands-45487.herokuapp.com/ deployed to Heroku

Pushing to the “heroku” remote triggered the build of a Python application with data science dependencies such as numpy, scipy, pandas, and statsmodels. We see at the end that the app was “deployed.”

Testing it out

Since Heroku is based on containers, one quick way to test that our app has the data science dependencies that we think it does is to spin up a one-off dyno (an ephemeral container running our app’s build) with an interactive shell. We can do that with:

heroku run bash
python

In Python 3.6.8 within our one-off Heroku dyno, we can import a few packages just to make sure.

import statsmodels
import pandas

If you didn’t get an error, then your cloud-deployed Heroku app has these data science dependencies installed. Good!

Getting the dependencies

Exiting out of the Heroku dyno, look inside the requirements.txt file:

cat requirements.txt

You’ll see a very modest text file with the following lines:

numpy
scipy==1.2
pandas
patsy
cython
statsmodels
psycopg2-binary

I specified scipy to be exactly version 1.2 based on advice from a post on a problem I was having, but otherwise these are the minimum dependencies specified by statsmodels.

Why not conda?

There are some Heroku “buildpacks” for conda online, but many of them are years old and not well maintained. Using the requirements.txt file was a breeze, and I didn’t see a reason to struggle with getting conda to work. But it clearly is possible.

Running jobs

If loading statsmodels and pandas in a Heroku dyno didn’t send your pulse above 100, it’s not you. But we’re actually not too far away from making our Heroku app do things. One way to have your app act is to utilize the text file called Procfile (no “.txt” extension). If you look inside the Procfile for this app, it is completely blank.

Instead, I used the Heroku Scheduler add-on to run a file, like HerokuExample/herokuexample/run_glm.py. You can see how easy it is to set up by looking at the following screenshot:

Since you could potentially spin up some serious computing resources using the Heroku Scheduler add-on, you do need your credit card on file to enable it.

Running a script

Just so we see some output in this article, add the following line to the Procfile:

release: python herokuexample/say_hello.py

Add the file to git staging, commit it, and then push to the heroku remote:

git add Procfile
git commit -m "Updating Procfile"
git push heroku master

Among the output lines you will find:

remote: Verifying deploy... done.
remote: Running release command...
remote:
remote: I loaded statsmodels

Indeed it did.
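For reference, a say_hello.py consistent with that output could be as minimal as the sketch below (the actual file lives in the HerokuExample repo and may differ):

# herokuexample/say_hello.py (a sketch, not necessarily the repo's actual contents)
import statsmodels.api  # this import fails if the build didn't install the dependencies

print("I loaded statsmodels")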

Next steps

To really do something interesting without a full-blown web app, we really need a database. Fortunately, Heroku has powerful database add-ons that can complete the picture of a useful data science application that runs in the cloud. Leave a comment if you want to hear about Heroku databases in conjunction with data science apps, and I’ll add it to the queue.


Getting a business bank account for a sole prop with a DBA

Category : The Journey

After applying for a DBA, I was confused about whether or not I had actually received one, given that the WI Register of Deeds sent back my filled-out Registration of Firm Names form with a computer-generated stamp in the top right corner. Surely that was just a receipt of payment, and something called a “Certificate of Assumed Name” was coming in the mail, right?

At least, that’s the idea I got based on interactions with “Bank A” (no reason to name names), my bank of over 10 years. Banker 1 from Bank A told me that I could only open a sole proprietor in my own name. I told him that I wanted a different name on the account, “Ogorek Data Sciences,” and that I was pursuing a Doing Business As (DBA) in order to do that. He seemed unsure but told me he could help me when “the paperwork” arrived. When the Registration of Firm Names form came back stamped, I sent Banker 1 a picture of the form. He responded that he absolutely needed the Certificate of Assumed Name to open the account.

So I waited, and watched the mailbox, and waited…

After a month of waiting, I went down to the Register of Deeds in person and asked what was going on. “We don’t give out certificates!” the woman behind (what I’m pretty sure was bulletproof) glass told me. “This is all you should need to open a bank account,” and she gave me a printed Wisconsin.gov web page to prove it. Not convinced that Banker 1 knew the laws of the land, I made an appointment with Banker 2 (still of Bank A), armed with newfound confidence in my form and a printed-out government web page with the exact instructions that I had followed. These instructions were clear.

Unfortunately, Banker 2 didn’t know much more than Banker 1. Again, she wanted a Certificate of Assumed Name, and I told her that I had fulfilled the requirements for the DBA according to the Wisconsin Register of Deeds and showed her the Wisconsin.gov printout. She still didn’t buy it and sent a photocopy of my form off to business documents review, which was supposed to take 24-48 hours.

Some 60 hours later, without a verdict, I went to “Bank B.” To Bank B’s credit, in an hour I had a business bank account, but a few things still bothered me. First, this third banker (“Banker 3”!) claimed that my Registration of Firm Names form was completely unnecessary, or at least he had been opening up accounts without them. He also told me that he didn’t think I could use an EIN for a sole prop (which I got a few weeks ago), but then he looked it up and realized I could use the EIN. “Hey, I learned something new!” Glad I could help.

To be fair to the first two bankers from Bank A, they were actually right about the account having to be opened in my name, but what they didn’t tell me was that I could still deposit checks written to the DBA name and the DBA name could appear on that account’s checks. If they did know this, then I suppose some blame lies with me for insisting on the account being opened in the DBA name. But come on, why else would I be so adamant about an account name?

And, to be fair to Banker 3, it turns out that the printed-out Wisconsin.gov web page – the one the Wisconsin Register of Deeds gave me in person – was from March of 2013. Maybe registering the sole prop is no longer required to open a bank account. And at least Banker 3 knew enough to convince me that I did indeed need to open the account in my own name, but with a DBA name attached to the account. (Seeing him type “Ogorek Data Sciences” into the “DBA” field of the application form made me feel better.)

It doesn’t seem like it should have been so hard. But, I’ve got a business bank account set up with a DBA name (“Ogorek Data Sciences”) and using an EIN. Mission accomplished.


Getting Health Insurance as a Freelancer

Category : The Journey

  • The Individual Mandate is no longer in effect as of 2019
  • Clicking the sponsored health care links on Google is bad
  • Individual health insurance that protects your wealth is inexpensive

Disclaimer: This author claims no special experience or knowledge of health insurance, other than the experience of losing corporate provided health insurance and having to buy new insurance (and screwing up a little along the way).

When you talk to someone who likes the idea of leaving their corporate job, two times out of three the person will use health insurance as the reason they can’t do it. When I did it this year, I’ll admit that I was worried about it myself, and back in 2015 when I took some time off, I paid around $300 a month for Affordable Care Act (ACA) qualifying health insurance that basically did nothing for me beyond wealth protection in the case of a really big bill. And it also let me avoid the individual mandate penalty on 2015 taxes.

Fast forward to June 1st, 2019, the first date I would lose corporate coverage, I was prepared to eat the $300 (or more) per month once again. Since I was losing coverage, I went on www.HealthCare.gov to try to apply during the enrollment off season. First, I wasted a lot of time by answering questions that could lead to discounts (they won’t for any remotely comfortable income, even if earned earlier in the year). Then, over a period of five days, the site was down and couldn’t even process my application. I thought I might have to eat one month of COBRA insurance for over $600 from my last job. Then I realized that the individual mandate is gone, at least at the federal level.

This is due to the Tax Cuts and Jobs act that was passed in December 2017 and eliminated the individual mandate penalty, effective January 1, 2019. It’d be a good idea to search for any law particular to your state, but nothing stood out as too powerful to me. It’s gone.

I have nothing to say about policy, or whether this is good or bad for the country, but I won’t buy expensive health insurance as a freelancer right after losing most of my income, especially when the site to do it isn’t even working.

After realizing I could just buy insurance from anyone, I got in a hurry and searched Google for “buying health insurance online.” Thinking that the top site was some kind of Kayak-like engine for comparison shopping health insurance, I clicked it. Do not do this. I will not write down the link out of fear that I will help these sites’ rankings, but I will show you a picture of what not to click:

The first website actually looked quite clean and professional, and made it easy for me to answer the few, very reasonable questions. I clicked submit.

Within, and I’m not kidding, 5 seconds, my phone started to ring. That’s when I knew I screwed up. Over the next three weeks, I got hundreds of phone calls and dozens of voicemails and emails. Telling them to stop doesn’t work:

It was pretty bad, but in the end it was only digital communications and not impossible to ignore (if you physically turned a few things off). Still, don’t do it.

In the end, I went to the website of a name I’ve seen before, United Health One, which offers non-ACA-compliant “short-term” insurance. At first I didn’t understand what that meant, but when you’re buying the insurance you can specify different term lengths, and that seems to be about it. These policies are not ACA-compliant because they do almost nothing for you unless you’re out a lot of money (explaining the term “junk insurance”). But that’s okay for me.

What I really want is wealth protection in case something really bad happens and I owe a hospital $700,000, and thus I went for the highest maximum benefit of $2,000,000. The deductible situation is a little complicated because there are two of them. Under the first one the company will not pay anything, and in the sweet middle ground between the first deductible and second, they will pay some. I wasn’t that interested in that middle spot so I minimized the benefits there.

The UnitedHealthOne website (www.uhone.com) almost felt like one of those old-school restaurants that offer millions of menu items through the sheer combinatorics of the options, which I didn’t like so much. And it annoyed me a little bit that the company name changed from UnitedHealthOne to “Golden Rule” when it was time to buy the policy. But it was relatively easy to make the final purchase, I felt I got what I needed, and it was under $100 a month.

I’ll repeat the disclaimer: I have no idea what I’m doing. Maybe I do rack up $700,000 of medical expenses and then $400,000 of those expenses are denied by “Golden Rule.” Who knows? But I do know one thing: if you don’t click on one of those links I showed you, you can buy health insurance at a very reasonable rate without getting attacked by vultures. Sometimes, that’s all you can ask for.


Ogorek Data Sciences has an EIN

Category : The Journey

Most of the time, things are harder than they look, especially when those things involve the IRS. But I am pleasantly surprised and happy to say that getting an Employer Identification Number from www.irs.gov was amazingly easy. IRS.gov has a nice page linking you to the online form, and after filling out the questions, you get the EIN in an instant. You can use the number immediately for most purposes (e.g., opening a bank account). For certain tax purposes, you’ll have to wait about two weeks. But not bad.

Why get an EIN as a sole proprietor?

Let’s be honest, I use the name “Ogorek Data Sciences” in blog post titles mostly in tongue-in-cheek fashion. This EIN is, for the foreseeable future, the Employer Identification Number of Ben Ogorek, the employer of himself. Furthermore, this sole proprietor has a Social Security Number that would work just fine for tax purposes. So why bother with it?

On the other hand, if I’m getting the boatloads of 1099-MISC forms that I’m expecting in the near future, that’s a lot of Ben Ogorek SSNs floating around on paper documents in the world. I’d rather replace those with the EIN. And, although I’m not in need of the “corporate veil” an LLC provides, it’s nice to practice treating the business as a separate entity with its own tax ID.

Cross it off the list!


WordPress theme changed to Enigma

Category : The Journey

I’ll never forget the response to the first “business website” I made. It was around 2009 and, fresh off an inspiration high from the 4-Hour Workweek, I cobbled together a comical-looking effort to siphon money out of the global economy. Despite not knowing anything about web technologies, I decided to build the website with VB.net and the free Microsoft development tools. In retrospect, I’m surprised I even got the site to work.

While WordPress surely existed back then, I didn’t know about it. Now, during my second attempt at my own business, the plan is not only to actually create value, but also to buy into a trusted framework. I’ve been really impressed with how easy it is to get WordPress up and running, how cheap it is to host, and how much it can look like a modern, professionally designed custom website.

This morning, https://www.ogorekdatasciences.com looked like this:

After switching to the Enigma theme and doing just a bit of customization, the site looks like this:

I’m no marketer; some of the one-liners are a little goofy. But it’s a step in the right direction!


Applying for a DBA in Wisconsin

Category : The Journey

One of the choices I had to make as a data science freelancer was whether to register as a single-member LLC or proceed as a sole proprietor. A lot of people told me I should get the LLC for tax reasons, for example, to pay myself a smaller income and take the remainder of the profits as a corporate distribution at a lower tax rate. However, all my research brought me to the conclusion that the type of LLC I’d be applying for, the single-member LLC, is a “pass-through entity,” and all of the income would pass through to my own personal income anyway.

Now the limited liability feature of the LLC is real, provided I’d be able to maintain a “corporate veil,” which I wasn’t sure I was ready to do at this early stage. In July, I’m thinking about forwarding my personal mail to my business address, for instance. So much for separation.

Since I still wanted the experience of creating a business that was not just my given name, I decided to go the DBA route. The DBA acronym stands for “Doing Business As” and allows a sole proprietor like me to create a legal business sounding name without the fees and hassle of an LLC. The DBA is necessary for opening bank accounts in that name as well (apparently you could get in trouble for using a business name that was not registered).

Applying for the DBA wasn’t hard, but like most things government related, you just have to know what to do. LegalZoom will charge you a hundred and change to take the guesswork out of it, but if you just look up your state’s process, the instructions are usually pretty simple. In Wisconsin, you file a “Registration of Firm Name” form with the county’s Register of Deeds.

A week and a half ago, I sent in my notarized application requesting “Ogorek Data Sciences” as my sole proprietorship’s official name, along with the $30 fee. The name’s not Don Draper grade, but I struggle with names, and I asked myself, “do you want to waste time thinking of an awesome name, or do you want to get started?” And while I’ve always thought my last name came across as weird to people, a friend who makes fun of everything didn’t really laugh at the name, and another commented that it’s pretty easy to say once you know how to say it. It’s pretty close to the sound of the name of the Oreck XL vacuum cleaner. Done.

About five days later, I got my same application back from the Register of Deeds but with the recording area stamped, showing that my payment was accepted. At first I asked, “is this it?” After some reading I found it can take a little longer than a couple days, and I take the returned application as a confirmation that my money is good at the Register of Deeds and my application has been sent to the next level. Godspeed.

Update: There was no “next level.” That was it. The Register of Deeds just recorded the firm name (I believe without any searching or anything.) This caused me some confusion when trying to use the form to get a business bank account with a DBA.