MLCC: Logistic regression

I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Logistic Regression” module.

“Logistic regression” generates a probability (a value between 0 and 1). It’s also very efficient to train and to use for prediction.

Note the glossary defines logistic regression as a classification model, which seems odd given “regression” is right there in the name. I suspect the explanation is that the output probability can serve either purpose: “You can interpret the value between 0 and 1 in either of the following two ways: … a binary classification problem … As a value to be compared against a classification threshold …”
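
For instance, a minimal sketch of the threshold interpretation (the 0.5 cutoff is just an illustrative default, not something the course prescribes):

```python
# Minimal sketch: turning a logistic regression probability into a class label.
# The 0.5 threshold is an illustrative default; the right threshold is
# problem-dependent and should be tuned.
def classify(probability: float, threshold: float = 0.5) -> int:
    return 1 if probability >= threshold else 0

print(classify(0.8))  # 1 (positive class)
print(classify(0.3))  # 0 (negative class)
```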

The “sigmoid” function, aka “logistic” function/transform, produces a bounded value between 0 and 1.

Note the sigmoid function is just y = 1 / (1 + e^(-𝞼)), where 𝞼 is our usual linear equation. I suppose we’re transforming the linear output into a logistic form.
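
A quick sketch of that transform in Python (the bias and weight values are made up for illustration):

```python
import math

def sigmoid(z: float) -> float:
    """Logistic (sigmoid) function: y = 1 / (1 + e^(-z))."""
    return 1 / (1 + math.exp(-z))

# z is the usual linear model output, e.g. z = b + w1*x1 + w2*x2 + ...
z = 2.0 + 0.5 * 1.0  # hypothetical bias b = 2.0, weight w1 = 0.5, feature x1 = 1.0
print(sigmoid(z))    # ~0.924, always strictly between 0 and 1
```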

Regularization (notes) is important in logistic regression: “Without regularization, the asymptotic nature of logistic regression would keep driving loss towards 0 in high dimensions.” The two common remedies are L2 regularization and early stopping.
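
As a sketch of what L2 regularization looks like in practice, here’s scikit-learn’s LogisticRegression on a synthetic dataset; its C parameter is the inverse of the regularization strength:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# C is the *inverse* of regularization strength: smaller C means a stronger
# L2 penalty, which keeps the weights small instead of letting them grow
# without bound as the model drives training loss toward 0.
strong_l2 = LogisticRegression(penalty="l2", C=0.01).fit(X, y)
weak_l2 = LogisticRegression(penalty="l2", C=100.0).fit(X, y)

print(abs(strong_l2.coef_).max())  # noticeably smaller largest weight
print(abs(weak_l2.coef_).max())
```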

The “logit”, aka “log-odds”, function is the inverse of the logistic function.
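
So sigmoid and logit should round-trip; a quick check (the helper names are my own, not from the course):

```python
import math

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

def logit(p: float) -> float:
    """Log-odds: ln(p / (1 - p)), the inverse of the sigmoid."""
    return math.log(p / (1 - p))

p = 0.8
print(logit(p))           # ~1.386, the log-odds of p
print(sigmoid(logit(p)))  # ~0.8, round-trips back to the probability
```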

The loss function for logistic regression is “log loss”.
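
In other words, -Σ [y·log(y′) + (1 − y)·log(1 − y′)] summed over every (x, y) in the dataset, where y′ is the predicted probability. A naive sketch (real implementations clip predictions away from exactly 0 and 1 to avoid log(0)):

```python
import math

def log_loss(labels, predictions):
    """Log loss: -sum(y*log(y') + (1 - y)*log(1 - y')) over all examples."""
    return -sum(
        y * math.log(p) + (1 - y) * math.log(1 - p)
        for y, p in zip(labels, predictions)
    )

# Confident, correct predictions give low loss; confident, wrong ones blow up.
print(log_loss([1, 0], [0.9, 0.1]))  # ~0.21
print(log_loss([1, 0], [0.1, 0.9]))  # ~4.61
```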
