# MLCC: Classification

I am working through Google's Machine Learning Crash Course. The notes in this post cover the “Classification” module.

New metrics for evaluating classification performance:

• Accuracy
• Precision
• Recall
• ROC
• AUC

## Accuracy

“Accuracy” simply measures the percentage of correct predictions.

It fails on class-imbalance, aka “skewed class”, problems, though. Neptune AI states it bluntly: “You shouldn't use accuracy on imbalanced problems.” Heuristic: is the percent accuracy greater than the imbalance itself? For example, if a population is 99% disease-free, an accuracy of 99% requires no intelligence. This is called the “accuracy paradox”. Precision and recall are better suited to class-imbalance problems.
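
A quick sketch of that paradox (my own toy numbers, not the course's): a classifier that always predicts “disease-free” on a 99% disease-free population scores 99% accuracy while catching nobody.

```
import numpy as np

# 1,000 people, 1% of whom actually have the disease
labels = np.zeros(1000, dtype=int)
labels[:10] = 1

# A "model" with no intelligence: predict disease-free for everyone
predictions = np.zeros(1000, dtype=int)

accuracy = (predictions == labels).mean()
recall = predictions[labels == 1].mean()  # true positives / actual positives

print(f"accuracy: {accuracy:.2%}")  # 99.00%
print(f"recall:   {recall:.2%}")    # 0.00% -- it never finds a sick person
```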

Tip: calculate odds independently if possible to compare with accuracy.

## Confusion matrix

A “confusion matrix”, aka “classification matrix”, quantifies predicted vs actual outcomes, which is useful for evaluating model performance.

A false positive is a “type one” error. A false negative is a “type two” error. When the cost of error is high, type two must be minimized. In other words, when the cost of error is high, maximize recall.

## Precision and recall

Andrew Ng's “Lecture 11.4 — Machine Learning System Design | Trading Off Precision And Recall” provides a helpful phrasing:

• Precision = true positive / predicted positive
• Recall = true positive / actual positive

Regarding the accuracy paradox, if a model simply predicts negative all the time (eg because 99% of email isn't spam), it will fail recall and precision because it never has a true positive.

Wikipedia makes a point: “It is trivial to achieve recall of 100% by returning all documents in response to any query.”

Precision and recall are important, and in tension. Classification depends on a “threshold”. Increasing the threshold increases precision, but decreases recall. Wikipedia uses surgery for a brain tumor to illustrate: a conservative approach increases the risk of false negative; an aggressive approach increases the risk of false positive. Plotting the “precision-recall curve” can also help demonstrate the relationship, as demonstrated by Andrew Ng.
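
A minimal sketch of that tension, using made-up scores and scikit-learn's metrics: raising the threshold here pushes precision up and recall down.

```
import numpy as np
from sklearn.metrics import precision_score, recall_score

labels = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
scores = np.array([0.1, 0.3, 0.45, 0.5, 0.55, 0.6, 0.7, 0.75, 0.8, 0.9])

for threshold in (0.4, 0.7):
    predictions = (scores >= threshold).astype(int)
    p = precision_score(labels, predictions)
    r = recall_score(labels, predictions)
    print(f"threshold={threshold}: precision={p:.2f} recall={r:.2f}")
```

At the lower threshold this toy model catches every positive but with more false positives; at the higher threshold it is more precise but misses some positives.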

## ROC and AUC

The “ROC curve” helps identify the best threshold.

“AUC” compares ROCs, helping identify the best model.

StatQuest's “ROC and AUC, Clearly Explained!” states precision is a better metric than the false positive rate for class imbalance problems because it doesn't take true negatives into account.

Keras gives us AUC for a model, but what's the corresponding threshold? The crash course clarifies: “AUC is classification-threshold-invariant. It measures the quality of the model's predictions irrespective of what classification threshold is chosen.” Ok, then why use anything but AUC? Neptune AI summarizes: “… use it when you care equally about positive and negative classes.”
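
As a concrete (toy) example of that threshold-invariance, scikit-learn's roc_curve returns one (FPR, TPR) point per candidate threshold, while roc_auc_score collapses the whole curve into a single number:

```
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.2, 0.3, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9])

# One (false positive rate, true positive rate) point per threshold
fpr, tpr, thresholds = roc_curve(labels, scores)
for t, f, r in zip(thresholds, fpr, tpr):
    print(f"threshold={t:.2f} fpr={f:.2f} tpr={r:.2f}")

# A single, threshold-invariant summary of the curve
print("AUC:", roc_auc_score(labels, scores))
```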

## Prediction bias

Seems like this is another way of quantifying model performance. If we know a probability of occurrence and the model produces a significantly different probability, that indicates something's amiss.

The formal definition is: average predicted occurrence – average actual occurrence. There's a helpful note that a model simply returning the average occurrence would have zero prediction bias, but would still be a bad model.

The crash course gives a few causes for bias. StatQuest's “Machine Learning Fundamentals: Bias and Variance” adds another: the inability of a ML algorithm to capture the true relationship between features and labels, eg linear regression trying to capture a curved relationship.

Fix prediction bias in the model, rather than adjusting the model output.

Interesting clarification that predicted values are a probability range, but actual values are discrete, so we need to segment values and average them to make a comparison.
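
A minimal sketch of both ideas (toy numbers of my own): overall prediction bias is just a difference of averages, and bucketing the predictions lets us compare averages even though the raw values aren't directly comparable.

```
import numpy as np

predictions = np.array([0.9, 0.8, 0.2, 0.1, 0.4, 0.7, 0.3, 0.6])  # probabilities
labels      = np.array([1,   1,   0,   0,   1,   1,   0,   0  ])  # discrete outcomes

# Prediction bias = average predicted occurrence - average actual occurrence
print("prediction bias:", predictions.mean() - labels.mean())

# Bucket by predicted probability, then compare averages within each bucket
buckets = np.digitize(predictions, bins=[0.5])  # 0: below 0.5, 1: at or above
for b in np.unique(buckets):
    mask = buckets == b
    print(f"bucket {b}: avg predicted={predictions[mask].mean():.2f} "
          f"avg actual={labels[mask].mean():.2f}")
```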

# MLCC: Regularization

I am working through Google's Machine Learning Crash Course. The notes in this post cover the “Regularization” module.

An earlier module focused on generalization (notes). A “generalization curve” visualizes generalization by showing loss for training data vs loss for validation data.

When training loss is less than validation loss, we're “overfitting” to the training data, reducing generalization.

“Regularization” is the process of preventing overfitting. The TensorFlow docs also discuss regularization.

“Empirical risk minimization” refers to loss reduction using tools like gradient descent (notes).

“Structural risk minimization” refers to regularization by minimizing the complexity of the model.

The “L2 regularization” formula quantifies complexity as the sum of the squares of the feature weights.

“Lambda”, aka the “regularization rate”, governs the amount of regularization applied. Increasing lambda strengthens regularization, resulting in a steeper histogram of weights, for example. A tool called Vizier can help optimize lambda.
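
A minimal sketch of the L2 penalty and lambda (my own toy weights, not the course's): complexity is the sum of squared weights, and lambda scales how much of that penalty gets added to the data loss.

```
import numpy as np

weights = np.array([0.2, -0.5, 1.3, 0.05])

l2_complexity = np.sum(weights ** 2)   # L2 regularization term
lambda_ = 0.1                          # regularization rate

data_loss = 0.8                        # placeholder for loss(data|model)
total_loss = data_loss + lambda_ * l2_complexity

print(f"complexity={l2_complexity:.4f} total loss={total_loss:.4f}")
```

Increasing `lambda_` makes large weights more expensive, which is what pushes the weight histogram toward zero.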

Helpful phrasing from StatQuest's “Machine Learning Fundamentals: Bias and Variance”: regularization is one technique for finding a balance between a simple model (that may have high bias) and a complex model (that may have high variability).

## Exercise 1

The answer for task 1 in the first exercise notes the “relative weight” of lines from FEATURES to OUTPUT in the playground. What is “relative weight”? 🤔 Later, the second exercise mentions “The relative thickness of each line running from FEATURES to OUTPUT represents the learned weight for that feature or feature cross. You can find the exact weight values by hovering over each line.” So, “relative weight” in this context just refers to the weight of one line relative to another, rather than a novel concept.

The answer for task 1 states: “The lines emanating from X1 and X2 are much thicker than those coming from the feature crosses. So, the feature crosses are contributing far less to the model than the normal (uncrossed) features.” Task 2 states “If we use a model that is too complicated, such as one with too many crosses …” Later, we learn “If model complexity is a function of weights …” Is complexity a function of crosses or weights? 🤔 I guess the idea is that the additional complexity of the crosses was driving up the weight of the uncrossed features, irrespective of regularization. Running the playground with and without the cross supports this, eg a weight of ~1.5 with losses of 0.131 and 0.033 (with the cross) vs ~0.9 with losses of 0.096 and 0.039 (without). Running with the cross and 0.3 regularization results in ~0.3, 0.092 and 0.059. Running with just 0.3 regularization results in ~0.3, 0.093 and 0.061. So it would seem there are at least a couple of orthogonal components to “complexity”.

## Exercise 2

An answer in the playground mentions: “While test loss decreases, training loss actually increases. This is expected, because you've added another term to the loss function to penalize complexity.” 🤔 I think this is referring to the literal addition of the complexity term in the calculation to find a weight (`minimize(loss(data|model) + complexity(model))`).

# MLCC: Feature crosses

I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Feature Crosses” section.

“Feature cross”, “feature cross product” and “synthetic feature” are synonymous. A feature cross is the cross product of two features. The nonlinearity sub-section states “The term cross comes from cross product.” Thinking of it as a Cartesian product, which the glossary mentions, helps me grok what's going on, and why it's helpful for the example problem where examples are clustered by quarter (to consider x-y pairs), and especially for the exercise involving latitude and longitude pairs.
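
A minimal sketch of that Cartesian-product intuition (toy bins, not the course's data): crossing two binned features yields one synthetic feature per (lat, long) combination, and each example lands in exactly one of them.

```
import itertools

lat_bins = ["lat_0", "lat_1", "lat_2"]
long_bins = ["long_0", "long_1"]

# The crossed vocabulary: one synthetic feature per (lat, long) combination
crossed = [f"{a}_x_{b}" for a, b in itertools.product(lat_bins, long_bins)]
print(crossed)  # 3 * 2 = 6 crossed features

# One example falls into exactly one crossed bucket -> a one-hot encoding
example = ("lat_1", "long_0")
one_hot = [1 if c == f"{example[0]}_x_{example[1]}" else 0 for c in crossed]
print(one_hot)
```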

The video states “Linear learners use linear models”. What is a “linear model”? Given “model” is synonymous with “equation” or “function”, a “linear model” is a linear equation. For example, Brilliant's wiki states: “A linear model is an equation …” What is a “linear learner”? The video might just be stating a fact: something that learns using a linear model is a “linear learner”. For example, Amazon SageMaker's Linear Learner docs state “The algorithm learns a linear function”.

A “linear problem” describes a relationship that can be expressed using a straight line (to divide the input data). “Nonlinear problems” cannot be expressed this way.

While trying to figure out why the exercise used an indicator_column, I found some nice TensorFlow tutorials, eg for feature crosses. In retrospect, I see the indicator_column docs state simply “Represents multi-hot representation of given categorical column.”

# MLCC: Representation

I am working through Google's Machine Learning Crash Course. The notes in this post cover the “Representation” section.

> feature engineering is another topic which doesn't seem to merit any review papers or books, or even chapters in books, but it is absolutely vital to ML success. […] Much of the success of machine learning is actually success in engineering features that a learner can understand.
>
> Scott Locklin, in “Neglected machine learning ideas”, via Machine Learning Mastery's feature engineering overview

I've heard 80% of data science is cleaning. This section introduces a nuance: cleaning includes a step mapping raw data into a format that's appropriate and efficient for inputting into a model. The “scrubbing” sub-section actually seems like the only thing that fits what I previously thought of as “cleaning”, eg removing human errors, addressing incomplete data, etc.

The whole section has good recommendations I can see serving as an ongoing reference. For example:

• Good feature values should appear more than 5 or so times in a data set … avoid unique IDs
• Keep data pure by not encoding exceptional states into a feature's value type, eg an integer feature where -1 means undefined, aka “magic” values. Instead, use boolean flags for exceptional states.

The “Z score” scales values as follows: `scaled = (value - mean) / stdev`. Math is Fun has a good explanation of how to derive the standard deviation, but Pandas also provides it trivially in the output from `describe`.
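
A quick sketch with pandas (toy values; note pandas' `std` is the sample standard deviation by default):

```
import pandas as pd

values = pd.Series([3.0, 7.0, 7.0, 19.0])
print(values.describe())  # includes the mean and std the formula needs

scaled = (values - values.mean()) / values.std()
print(scaled)
```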

“Binning” seems similar to *-hot encoding in that we're enabling weights for each value, although the former concerns continuous values and the latter concerns discrete values. The feature cross video supports this by referring to both in the same context.
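
A minimal sketch of binning with pandas (toy latitudes, hypothetical bin edges): each bucket becomes its own column, and therefore its own weight, just like a one-hot encoded categorical feature.

```
import pandas as pd

latitude = pd.Series([32.5, 34.1, 37.8, 40.2, 41.9])

# Cut the continuous values into three buckets, then one-hot encode the buckets
buckets = pd.cut(latitude, bins=[32, 35, 38, 42])
print(pd.get_dummies(buckets))
```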

Histograms and stats, like those output by `describe`, can help detect bad data.

# MLCC: Generalization

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [1] through [3].

The fundamental tension of machine learning is between fitting our data well, but also fitting the data as simply as possible.

A reasonable guideline: “The less complex an ML model, the more likely that a good empirical result is not just due to the peculiarities of the sample.”

[2] recommends a best practice: divide labeled examples into “training” and “test” sets.

Never train on test data! 100% accuracy can be a symptom of that.

[3] goes further: divide labeled examples into three sets: “training”, “validation” and “test”. Simply testing against a “test” set risks overfitting to that set. Instead, iterate against the validation set, and then double-check using the test set.

A continuing impression is that TensorFlow builds in a lot of the best practices described in this crash course. For example, splitting out a validation set and testing against it is a first-class argument to the Model.fit method.
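
For example, a minimal sketch (my own toy data, not the course exercise) of handing Keras a validation split; `fit` holds out the last 20% of the arrays and reports `val_loss` alongside the training loss each epoch:

```
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 1)
y = 3 * x[:, 0] + 1 + 0.1 * np.random.randn(1000)

model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

history = model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)
print(history.history["val_loss"])
```

Note the split is taken from the end of the arrays before shuffling, which is exactly how a sorted column (like the latitude bug in the exercise below) can make the validation set non-random.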

The exercise associated with [3] is interesting. First, testing against a validation set caught a bug! Second, the bug was a default sort on the latitude column; the validation set was not a random sample.

# MLCC: First Steps with TensorFlow

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [2].

[2] introduces Colab, NumPy, Pandas and TensorFlow.

Colab is like a hosted Jupyter notebook and provides an easy way to play with Python ML libraries, among other things.

NumPy provides performant and user-friendly collections and operations for linear algebra.

Pandas provides tools for working with “dataframes”, which are like spreadsheets in memory.

I like building on my understanding. In this context, I want to learn Colab and NumPy by using them to work with the cricket chirp data introduced in [1].

[1] used cricket chirps per minute per temperature as an example, but didn't provide raw data. Dolbear's Law provides an equation we can use to generate data: `TC = 10 + (N60 - 40) / 7`, which rearranges to `N60 = 7 * TC - 30`.

Colab and NumPy provide an easy way to use this equation:

```
import numpy as np

# Starts by generating temps, since chirps are dependent on temp.
# Starts at 5 because Dolbear's formula results in a negative value below 5 degrees.
temps = np.arange(5, 36)

# Adds noise to avoid an obviously linear relationship.
# Copies the approach from "NumPy UltraQuick Tutorial" linked from [2].
# Sets low of -5, which limits the minimum chirps to zero.
noise = np.random.randint(low=-5, high=5, size=temps.size)
chirps = 7 * temps - 30 + noise

# Prints CSVs, since Google Sheets knows how to split CSVs on paste.
print(','.join([str(i) for i in temps]))
print(','.join([str(i) for i in chirps]))
```

Example chirps per minute:

`7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215`

Note this generates synthetic data for chirps per minute, but then I'll use them to predict temperature, ie chirps is the feature and temperature is the label.

Copy the temps and chirps CSVs. In Sheets, Edit > paste special > paste comma-separated text (CSV) as columns.

To improve readability, cut the pasted content and Edit > paste special > paste transposed to convert row data to column data.

Select âScatter chartâ for the chart type. Under Customize > Series, check the trendline box. Select âEquationâ for the label to get the regression equation. Check the R2 box.

We can also use the SLOPE and INTERCEPT methods to calculate the equation.

Slope, intercept and R2, respectively, given the example chirps per minute from above:

• 0.144
• 4.323
• 0.999

Unfortunately, Sheets doesn't have MSE, which I learned about in [1]. That leads me to wonder: “What's the relationship between R2 and MSE?” Per [3], we're better off with MSE.

## Digression into SciKit

[2] introduces Pandas after NumPy, but continuing the theme of building on understanding, I'd like to perform a linear regression in Colab, rather than copy-pasting into Sheets. I'll follow [4] and [5] and defer Pandas until I need it for TensorFlow.

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

actual_temps = np.arange(5, 36)
chirps = np.array([7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215])

model = LinearRegression()
model.fit(chirps[:, np.newaxis], actual_temps)

predicted_temps = model.predict(chirps[:, np.newaxis])

plt.scatter(chirps, actual_temps)
plt.plot(chirps, predicted_temps)

# Starts the y-axis at zero, even though the data starts at 5
plt.ylim(bottom=0)

print('Slope: %.3f' % model.coef_[0])
print('Intercept: %.3f' % model.intercept_)
print('MSE: %.3f' % mean_squared_error(actual_temps, predicted_temps))
print('R2: %.3f' % r2_score(actual_temps, predicted_temps))
```

Slope, intercept, MSE and R2, respectively:

• 0.144
• 4.323
• 0.085
• 0.999

Note SciKit can calculate MSE and R2. Perhaps in line with [3], note MSE is non-zero, but R2 is close to 100% 🤔

As expected, Sheets is great for common stuff, but Colab/Jupyter shines for arbitrary calculation.

## TensorFlow

Coincidentally, TensorFlow's fifth birthday was just a couple of days ago 🥳

Continuing the theme of building on experience, I'm using the cricket chirp data for the synthetic data exercise:

```
my_feature = [float(i) for i in [7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215]]
my_label   = [float(i) for i in range(5, 36)]
```

The following settings enabled the cricket chirp data to converge with an RMSE ~ 0.8, which seems like a sweet spot of accuracy vs training time:

• Learning rate: 0.01
• Epochs: 50
• Batch size: 1

Decreasing the learning rate (eg 0.001) and increasing the epochs (eg 500) converges with an RMSE ~0.5, but takes forever. Increasing the batch size increases the choppiness of the error tail.

The summary at the bottom of the synthetic data exercise seems generally useful:

• “Training loss should steadily decrease, steeply at first, and then more slowly until the slope of the curve reaches or approaches zero.
• If the training loss does not converge, train for more epochs.
• If the training loss decreases too slowly, increase the learning rate. Note that setting the learning rate too high may also prevent training loss from converging.
• If the training loss varies wildly (that is, the training loss jumps around), decrease the learning rate.
• Lowering the learning rate while increasing the number of epochs or the batch size is often a good combination.
• Setting the batch size to a very small batch number can also cause instability. First, try large batch size values. Then, decrease the batch size until you see degradation.
• For real-world datasets consisting of a very large number of examples, the entire dataset might not fit into memory. In such cases, you’ll need to reduce the batch size to enable a batch to fit into memory.”

For the real data, there's a note about the “max” being anomalous relative to the different percentiles, which makes sense, but is a little abstract. The plot does a good job showing outliers.

Interesting that the RMSE for the real data is ~100, rather than the zero I was going for with the synthetic data. I guess the point is that we're trying to minimize loss, rather than eliminate it.

[2] uses California housing data, but we can browse other datasets at https://datasetsearch.research.google.com/.

Great tip to use `corr` to see which features correlate with a label, as an alternative to trial-and-error feature selection.

# MLCC: Reducing loss

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [2].

Earlier, I explored simplistic linear regression, largely based on [1]. The next section of the crash course ([2]) dives into “gradient descent” (GD), which raises the question “What's wrong with the linear regression we just learned?” In short, the technique we just learned, Ordinary Least Squares (OLS), does not scale.

[3] clarifies linear regression can take a few forms depending on input and processing constraints. Among these forms, OLS concerns one or more inputs where “all of the data must be available and you must have enough memory to fit the data and perform matrix operations” and uses least squares to find the best line. GD concerns “a very large dataset either in the number of rows or the number of columns that may not fit into memory.” As described by [4], OLS doesn't scale. GD scales by finding a “numerical approximation … by iterative method”.

[2] introduces GD by descending a parabola, but it's unclear how we transitioned from talking about straight lines in [1] to parabolas. The distinction is that we're now focusing on loss functions. (To be fair, in retrospect, the title is “Reducing loss” 🤦‍♂️) [2] asserts “For the kind of regression problems we've been examining, the resulting plot of loss vs. w1 will always be convex”, ie a parabola. OLS takes all the data and computes an optimal line, but GD iteratively generates lines and determines whether one is optimal by comparing its loss to the previous iteration's.

[1] introduced the idea of quantifying the accuracy of a regression by calculating the loss. For example, it mentioned Mean Squared Error as a common loss function. [5] clarifies that Mean Squared Error is a quadratic (convex) function. This provides helpful context for [2]'s definition of “gradient” as the derivative of the loss function.

I like the summary statement from [5]:

The goal of any Machine Learning Algorithm is to minimize the Cost Function

[5] uses the interactive exercise from [2]. It's reassuring to see convergence.

[4] presents a good example of a team trying to find the highest peak in a mountainous area by parachuting randomly over the range and reporting their local max daily. I can see how that would scale well for a large data set. Reminds me of MapReduce.

This example is a bit counter-intuitive, though, in that GD is trying to find a minimum (loss) rather than a maximum. It'd be better phrased as trying to find the deepest valley. Anyway, it states “Our aim is to reach the minima which is the valley bottom. So our gradient should be negative always … So if at our initial weights, the slope is negative, we are in the right direction”, which explains the “descent” in “gradient descent”.

[4] (like [2]) describes three forms of GD:

1. Batch
2. Stochastic
3. Mini Batch

[2] defines “a batch” as “the total number of examples you use to calculate the gradient in a single iteration.” Presumably, it's referring to Batch GD when it says “So far, we've assumed that the batch has been the entire data set.”

[2] describes Stochastic as picking one example at random for each iteration, which would take forever and may operate on redundant data, which is common in large data sets.

[2] states Mini Batch “reduces the amount of noise in SGD but is still more efficient than full-batch” because it uses batches of 10-1000 random examples, and that Mini Batch is what's used in practice.

When do we stop iterating? [2] states “you iterate until overall loss stops changing or at least changes extremely slowly. When that happens, we say that the model has converged.”

To summarize:

1. Initialize with arbitrary weights
2. Generate a model
3. Sample (labeled) examples
4. Input sample into the model
5. Calculate the loss
6. Compare the new loss with the previous loss
7. If loss is decreasing
    1. Add the step value to the weight
    2. Repeat from step 2
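
A minimal sketch of this loop for a one-feature linear model (my own toy data), using full-batch GD and the MSE loss whose derivative supplies the gradient:

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0                       # relationship the model should recover

w, b = 0.0, 0.0                         # 1. arbitrary initial weights
learning_rate = 0.01

for step in range(2000):
    y_pred = w * x + b                  # 2-4. the model, applied to the examples
    loss = np.mean((y_pred - y) ** 2)   # 5. MSE loss

    # Gradient of the loss with respect to w and b
    grad_w = np.mean(2 * (y_pred - y) * x)
    grad_b = np.mean(2 * (y_pred - y))

    # 6-7. step the weights in the direction that reduces the loss
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b, loss)                       # converges near w=2, b=1
```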

# MLCC: Linear regression

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [1] and [2].

A lot of ML quickstarts dive right into jargon like model, feature, y', L2, etc, which makes it hard for me to learn the basics - “what are we doing and why?”

The crash course also presents some jargon, but at least explains each concept and links to a glossary, which makes it easier to learn.

After a few days of poking around, one piece of jargon seems irreducible: linear regression. In other words, this is the kind of basic ML concept I've been looking for. This is where I'd start if I were helping someone learn ML.

I probably learned about linear regression in the one statistics class I took in college, but have forgotten it after years of string parsing.

The glossary entry for linear regression describes it as “Using the raw output (y') of a linear model as the actual prediction in a regression model”, which is still too dense for me.

The linear regression module of the crash course is closer to my level:

Linear regression is a method for finding the straight line … that best fits a set of points.

The crash course provides a good example of a line fitting points describing cricket chirps per minute per temperature.

The “linear” in “linear regression” refers to this straight line, as in linear equation. The “regression” refers to “regression to the mean”, which is a statistical observation unfortunately unrelated to statistical methods like the least squares technique described below, as explained humorously by John Seymour.

Math is Fun describes a technique called “least squares regression” for finding such a line. Google's glossary also has an entry for least squares regression, which gives me confidence that I'm bridging my level (Math is Fun) with the novel concept of ML.

Helpful tip from StatQuest's “Machine Learning Fundamentals: Bias and Variance”: differences are squared so that negative distances don't cancel out positive distances.

Math is Fun's article on linear equations and the crash course's video on linear regression reminded me of the slope-intercept form of a linear equation I learned about way back when: `y = mx + b`.

The crash course even describes this equation as a “model”: “By convention in machine learning, you'll write the equation for a model slightly differently …”
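
A minimal sketch of the least squares calculation behind such a line, using the closed-form slope and intercept (toy points of my own):

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.1, 5.9, 8.2])

# Least squares: the slope m minimizes the sum of squared vertical distances
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - m * x.mean()

print(f"y' = {m:.2f}x + {b:.2f}")   # the "model" is just this equation
```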

All this helps me understand in the most basic sense:

• A “model” is just an equation
• “Training” and “learning” are just performing a regression calculation to generate an equation
• Performing these calculations regularly and on large data sets is tedious and error prone, so we use a computer, hence “machine learning”
• “Prediction” and “inference” are just plugging x values into the equation