# MLCC: Neural Networks

I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Neural Networks" module.

## Does "deep learning" imply neural networks?

The introductory video refers to "deep neural networks", so I'm wondering what the relationship is between deep learning and neural networks.

"To give you some context, modern Convolutional Networks contain on orders of 100 million parameters and are usually made up of approximately 10-20 layers (hence deep learning)" – https://cs231n.github.io/neural-networks-1/

"Deep Learning is simply a subset of the architectures (or templates) that employs 'neural networks'" – https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df (TDS)

"Deep learning" in Google's glossary links to "deep model": "A type of neural network containing multiple hidden layers."

"However, until 2006 we didn't know how to train neural networks to surpass more traditional approaches, except for a few specialized problems. What changed in 2006 was the discovery of techniques for learning in so-called deep neural networks." – http://neuralnetworksanddeeplearning.com/about.html

Towards Data Science's "Intuitive Deep Learning Part 1a: Introduction to Neural Networks" clarifies that "deep learning" is a subset of machine learning. I guess they're both "learning". I like the comparison of an algorithm to a recipe: in this context, ML optimizes a recipe, and deep learning is a subset of optimization techniques.

## When to use neural networks?

Small data with linear relationships → LSR

Large data with linear relationships → gradient descent

Large data with simple, nonlinear relationships → feature crosses

Large data with complex, nonlinear relationships → NN

"Neural nets will give us a way to learn nonlinear models without the use of explicit feature crosses" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises

"Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data" – http://neuralnetworksanddeeplearning.com/index.html

NN "have the flexibility to model many complicated relationships between input and output" – https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df

"That's not to say that neural networks aren't good at solving simpler problems. They are. But so are many other algorithms. The complexity, resource-intensiveness and lack of interpretability in neural networks is sometimes a necessary evil, but it's only warranted when simpler methods are inapplicable" – https://www.quora.com/What-kinds-of-machine-learning-problems-are-neural-networks-particularly-good-at-solving

## Why are there multiple layers?

"each layer is effectively learning a more complex, higher-level function over the raw inputs" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy

"A single-layer neural network can only be used to represent linearly separable functions … Most problems that we are interested in solving are not linearly separable." – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

The universal approximation theorem states that a single hidden layer, given enough neurons, is sufficient to approximate any continuous function – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

"How many hidden layers? Well if your data is linearly separable (which you often know by the time you begin coding a NN) then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

"One hidden layer is sufficient for the large majority of problems." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

"Even for those functions that can be learned via a sufficiently large one-hidden-layer MLP, it can be more efficient to learn it with two (or more) hidden layers" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

"Multi-layer" implies at least one hidden layer: "It has an input layer that connects to the input variables, one or more hidden layers" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

Chris Olah's "Neural Networks, Manifolds, and Topology", linked from the crash course, visualizes how data sets intersecting in n dimensions may be disjoint in n + 1 dimensions, which enables a linear solution. Other than that, though, Olah's article was over my head. Articles like TDS are more my speed.

## Why are some layers called "hidden"?

"The interior layers are sometimes called "hidden layers" because they are not directly observable from the system's inputs and outputs." – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

## How many layers do I need?

Task 4 in the exercise recommends playing around with the hyperparameters to get a certain loss, but the combinatorial complexity makes me wonder if there's an intuitive way to think about the role of layers and neurons. 🤔

"Regardless of the heuristics you might encounter, all answers will come back to the need for careful experimentation to see what works best for your specific dataset" – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/

"In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers." – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

"3 neurons are enough because the XOR function can be expressed as a combination of 3 half-planes (ReLU activation)" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises Seems narrowing the problem space to ReLU enables some deterministic optimization.
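To make the idea concrete, here's a small sketch of my own (not the course's three-half-plane construction) showing that ReLU units can express XOR for binary inputs, using two hidden units summed by a linear output:

```
import numpy as np

def relu(z):
    return np.maximum(0, z)

def xor(x1, x2):
    # Each hidden unit carves out one half-plane: x1 > x2, or x2 > x1.
    return relu(x1 - x2) + relu(x2 - x1)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))  # 0, 1, 1, 0
```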

"The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers due to the vanishing gradient problem" – https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/

"use as big of a neural network as your computational budget allows, and use other regularization techniques to control overfitting" – https://cs231n.github.io/neural-networks-1/#arch

"a model with 1 neuron in the first hidden layer cannot learn a good model no matter how deep it is. This is because the output of the first layer only varies along one dimension (usually a diagonal line), which isn't enough to model this data set well" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises

"A single layer with more than 3 neurons has more redundancy, and thus is more likely to converge to a good model" – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises

Two hidden layers with eight neurons in the first and two in the second performed well (~0.15 loss) on repeated runs.

Heuristics from spiral solution video:

1. Tune number of layers and nodes. Max neurons in the first layer, tapering down a couple layers to the output is a reasonable start. Each neuron takes time to train, though, so reduce total neurons if training is too slow. This is reinforced by the practice exercise, which started with two layers of 20 and 12 neurons, and then tried to reduce the number of neurons while keeping loss stable.
2. Reduce the learning rate to smooth loss curve
3. Add regularization to further smooth loss curve
4. Feature engineering helps with noisy data
5. Try different activation functions. Ultimately, tanh had the best fit
6. Iterate from 1

Even after all this, tuning hyperparameters still seems combinatorially complex.

## Activation functions

A neural net consists of layers. Nodes in the bottom layer are linear equations. Nodes in a "hidden" layer transform a linear node into a nonlinear node using an "activation function". The crash course states "any mathematical function can serve as an activation function".

A sigmoid is an example of an activation function. I remember from the module on logistic regression (notes) that we used a sigmoid to transform a linear equation into a probability.

## Why is it called a "neuron"?

The glossary definition for "neuron" is pretty good: 1) "taking in multiple input values and generating one output value", and 2) "The neuron calculates the output value by applying an activation function." Aside: this reminds me of lambda architecture. I appreciate TDS clarifying neurons "often take some linear combination of the inputs", like w1x1 + w2x2 + w3x3. I suppose this is what the glossary means by "a weighted sum of input values".
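A minimal sketch of that glossary definition, with inputs, weights, and bias made up for illustration:

```
import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.array([1.0, 2.0, 3.0])     # inputs x1, x2, x3
w = np.array([0.5, -0.25, 0.25])  # weights w1, w2, w3
b = 0.5                           # bias

# The neuron: a weighted sum (w1x1 + w2x2 + w3x3 + b)
# passed through an activation function.
output = relu(np.dot(w, x) + b)
print(output)  # 1.25
```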

TDS references a single image from the biological motivations section of Stanford's CS231n, but I find both the images from that section useful for comparison.

I like TDS' definition of a "layer": "a 'neural network' is simply made out of layers of neurons, connected in a way that the input of one layer of neurons is the output of the previous layer of neurons". In that context, the hidden layer diagrams from the crash course make sense.

# Norvig’s summary of ML for software engineers

Peter Norvig summarized the value of ML from a software engineering perspective in his “Introduction to Machine Learning” for Google’s Machine Learning Crash Course:

First, it gives you a tool to reduce the time you spend programming … Second, it will allow you to customize your products, making them better for specific groups of people … And third, machine learning lets you solve problems that you, as a programmer, have no idea how to do by hand.

From my perspective, the first two can be rephrased as:

1. Models add a new dimension to code reuse
2. For a class of problems, training models scales better than hand-writing code

There’s also a fourth point linked from the bottom of the intro:

Rule #1: Don't be afraid to launch a product without machine learning

That fourth point reminds me of the "build" vs "grow" domains – until we've built a product that lots of people find useful, statistics-based growth tools, like large-scale AB testing, can be relatively high-cost, low-value. We might even say such optimizations only make sense once we have more users than can be efficiently contacted directly. Put another way, if we only have one user, and she says she only wants to see articles about sports, we don't need ML to predict her interests.

I think about these four points a lot, almost like a koan. They provide a helpful anchor as I try to distill a large amount of theory into tools I can apply to the problems I’m familiar with.

# MLCC: Regularization for sparsity

I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Regularization for Sparsity" module.

Best practice: if you're overfitting, you want to regularize.

"Convex Optimization" by Boyd and Vandenberghe, linked from multiple glossary entries, touches on many of the points made by the crash course:

• "A problem is sparse if each constraint function depends on only a small number of the variables"
• "Like least-squares or linear programming, there are very effective algorithms that can reliably and efficiently solve even large convex problems", which would explain why gradient descent is a tool we use
• Regularization is when "extra terms are added to the cost function"
• "If the problem is sparse, or has some other exploitable structure, we can often solve problems with tens or hundreds of thousands of variables and constraints", so it would seem performance is another motivation for regularization

Ideally, we could perform L0 regularization, but that's non-convex, and so, NP-hard (slide 7). (I like Math is Fun's NP-complete page 🙂) As noted wrt gradient descent, we need a convex loss curve to optimize. L1 approximates L0 and is easy to compute.

Quora provides a couple of intuitive explanations for L1 and L2 norms: "L2 norm there yields Euclidean distance … The L1 norm gives rise to what can be referred to as the 'taxi-cab' distance"

Rorasa's blog states "Norm may come in many forms and many names, including these popular name: Euclidean distance, Mean-squared Error, etc … Because the lack of l0-norm's mathematical representation, l0-minimisation is regarded by computer scientist as an NP-hard problem, simply says that it's too complex and almost impossible to solve. In many case, l0-minimisation problem is relaxed to be higher-order norm problem such as l1-minimisation and l2-minimisation."

The glossary summarizes:

• L1 regularization "penalizes weights in proportion to the sum of the absolute values of the weights. In models relying on sparse features, L1 regularization helps drive the weights of irrelevant or barely relevant features to exactly 0"
• L2 regularization "penalizes weights in proportion to the sum of the squares of the weights. L2 regularization helps drive outlier weights (those with high positive or low negative values) closer to 0 but not quite to 0"
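A small experiment of my own (scikit-learn's Lasso is an L1-penalized linear model, Ridge an L2-penalized one) makes the "exactly 0" vs "closer to 0" distinction concrete. The data is synthetic: the third feature is pure noise.

```
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# The label depends only on the first two features; the third is irrelevant.
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty
ridge = Ridge(alpha=0.1).fit(X, y)  # L2 penalty

print(lasso.coef_)  # irrelevant feature's weight driven to exactly 0
print(ridge.coef_)  # irrelevant feature's weight small but nonzero
```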

# MLCC: Logistic regression

I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Logistic Regression" module.

"Logistic regression" generates a probability (a value between 0 and 1). It's also very efficient.

Note the glossary defines logistic regression as a classification model, which is weird since it has "regression" in the name. I suspect this is explained by "You can interpret the value between 0 and 1 in either of the following two ways: … a binary classification problem … As a value to be compared against a classification threshold …"

The "sigmoid" function, aka "logistic" function/transform, produces a bounded value between 0 and 1.

Note the sigmoid function is just `y = 1 / (1 + e^(-z))`, where `z` is our usual linear equation. I suppose we're transforming the linear output into a logistic form.
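A one-liner makes the bounded output easy to see (function name is mine):

```
import numpy as np

def sigmoid(z):
    # Squashes the linear output z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))    # 0.5
print(sigmoid(-10))  # close to 0
print(sigmoid(10))   # close to 1
```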

Regularization (notes) is important in logistic regression. "Without regularization, the asymptotic nature of logistic regression would keep driving loss towards 0 in high dimensions", esp L2 regularization and stopping early.

The "logit", aka "log-odds", function is the inverse of the logistic function.

The loss function for logistic regression is "log loss".
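For reference, a hand-rolled log loss; the clipping is my addition, to avoid log(0):

```
import numpy as np

def log_loss(y_true, p):
    # Clip predictions away from 0 and 1 to avoid log(0).
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 0])
confident = np.array([0.9, 0.1, 0.9, 0.1])
hedged = np.array([0.6, 0.4, 0.6, 0.4])
print(log_loss(y_true, confident))  # lower loss for confident, correct predictions
print(log_loss(y_true, hedged))
```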

# MLCC: Classification

I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Classification" module.

New metrics for evaluating classification performance:

• Accuracy
• Precision
• Recall
• ROC
• AUC

## Accuracy

“Accuracy” simply measures percentage of correct predictions.

It fails on class-imbalance, aka "skewed class", problems, though. Neptune AI states it bluntly: "You shouldn't use accuracy on imbalanced problems." Heuristic: is the percent accuracy > the imbalance? For example, if a population is 99% disease-free, a model that always predicts "disease-free" scores 99% accuracy with no intelligence. This is called the "accuracy paradox". Precision and recall are better suited to class-imbalance problems.

Tip: calculate odds independently if possible to compare with accuracy.
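The disease-free example is easy to simulate (the numbers here are mine):

```
import numpy as np

# 1% of the population has the disease.
labels = np.zeros(1000)
labels[:10] = 1

# A "model" that always predicts disease-free.
predictions = np.zeros(1000)

accuracy = (predictions == labels).mean()
print(accuracy)  # 0.99 -- high accuracy, zero intelligence
```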

## Confusion matrix

A "confusion matrix", aka "classification matrix", quantifies predicted vs actual outcomes, which is useful for evaluating model performance.

A false positive is a "type one" error. A false negative is a "type two" error. When the cost of a false negative is high, type two errors must be minimized. In other words, when missing a positive is costly, maximize recall.

## Precision and recall

Andrew Ng's "Lecture 11.4 — Machine Learning System Design | Trading Off Precision And Recall" provides a helpful phrasing:

• Precision = true positive / predicted positive
• Recall = true positive / actual positive
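Those two ratios fall straight out of the confusion matrix counts (the counts below are made up):

```
def precision(tp, fp):
    # Of everything predicted positive, how much was actually positive?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything actually positive, how much did we predict positive?
    return tp / (tp + fn)

tp, fp, fn = 8, 2, 4
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # ~0.67
```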

Regarding the accuracy paradox, if a model simply predicts negative all the time (eg because 99% of email isn't spam), it will fail recall and precision because it never has a true positive.

Wikipedia makes a point: "It is trivial to achieve recall of 100% by returning all documents in response to any query"

Precision and recall are important, and in tension. Classification depends on a "threshold". Increasing the threshold increases precision, but decreases recall. Wikipedia uses surgery for a brain tumor to illustrate: a conservative approach increases the risk of false negatives; an aggressive approach increases the risk of false positives. Plotting the "precision-recall curve" can also help demonstrate the relationship, as demonstrated by Andrew Ng.

## ROC and AUC

The “ROC curve” helps identify the best threshold.

“AUC” compares ROCs, helping identify the best model.

StatQuest's "ROC and AUC, Clearly Explained!" states precision is a better metric than the false positive rate for class imbalance problems because it doesn't take true negatives into account.

Keras gives us AUC for a model, but what's the corresponding threshold? The crash course clarifies: "AUC is classification-threshold-invariant. It measures the quality of the model's predictions irrespective of what classification threshold is chosen." Ok, then why use anything but AUC? Neptune AI summarizes: "… use it when you care equally about positive and negative classes."
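scikit-learn computes both the curve and the AUC; this toy example is adapted from the scikit-learn docs:

```
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])  # model's predicted probabilities

# One (fpr, tpr) point per candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(roc_auc_score(y_true, scores))  # 0.75
```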

## Prediction bias

Seems like this is another way of quantifying model performance. If we know a probability of occurrence and the model produces a significantly different probability, that indicates something's amiss.

The formal definition is: average predicted occurrence – average actual occurrence. There's a helpful note that a model simply returning the average occurrence would have zero prediction bias, but would still be a bad model.
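The formal definition in code (sample values are mine):

```
import numpy as np

predictions = np.array([0.9, 0.8, 0.2, 0.1])  # predicted probabilities
actuals = np.array([1, 1, 0, 0])              # actual labels

# average predicted occurrence - average actual occurrence
prediction_bias = predictions.mean() - actuals.mean()
print(prediction_bias)  # ~0.0, though zero bias alone doesn't prove a good model
```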

The crash course gives a few causes for bias. StatQuest's "Machine Learning Fundamentals: Bias and Variance" adds another: the inability of an ML algorithm to capture the true relationship between features and labels, eg linear regression trying to capture a curved relationship.

Fix prediction bias in the model, rather than adjusting the model output.

Interesting clarification that predicted values are a probability range, but actual values are discrete, so we need to segment values and average them to make a comparison.

# MLCC: Regularization

I am working through Google's Machine Learning Crash Course. The notes in this post cover the "Regularization" module.

An earlier module focused on generalization (notes). A "generalization curve" visualizes generalization by showing loss for training data vs loss for validation data.

When training loss is less than validation loss, we're "overfitting" to the training data, reducing generalization.

"Regularization" is the process of preventing overfitting. The TensorFlow docs also discuss regularization.

"Empirical risk minimization" refers to loss reduction using tools like gradient descent (notes).

"Structural risk minimization" refers to regularization by minimizing the complexity of the model.

The "L2 regularization" formula quantifies complexity as the sum of the squares of the feature weights.

"Lambda", aka "regularization rate", governs the amount of regularization applied. Increasing lambda strengthens regularization, resulting in a steeper histogram of weights, for example. A tool called Vizier can help optimize lambda.
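The L2 term is just lambda times the sum of squared weights; for example (weights and lambda made up):

```
import numpy as np

weights = np.array([0.5, -2.0, 1.5])  # example feature weights
lam = 0.1                             # regularization rate (lambda)

l2_penalty = lam * np.sum(weights ** 2)
print(l2_penalty)  # 0.1 * (0.25 + 4.0 + 2.25) = 0.65
```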

Helpful phrasing from StatQuest's "Machine Learning Fundamentals: Bias and Variance": regularization is one technique for finding a balance between a simple model (that may have high bias) and a complex model (that may have high variability).

## Exercise 1

The answer for task 1 in the first exercise notes the "relative weight" of lines from FEATURE to OUTPUT in the playground. What is "relative weight"? 🤔 Later, the second exercise mentions "The relative thickness of each line running from FEATURES to OUTPUT represents the learned weight for that feature or feature cross. You can find the exact weight values by hovering over each line." So, "relative weight" in this context just refers to the weight of one line relative to another, rather than a novel concept.

The answer for task 1 states: "The lines emanating from X1 and X2 are much thicker than those coming from the feature crosses. So, the feature crosses are contributing far less to the model than the normal (uncrossed) features." Task 2 states "If we use a model that is too complicated, such as one with too many crosses …" Later, we learn "If model complexity is a function of weights …" Is complexity a function of crosses or weights? 🤔 I guess the idea is that the additional complexity of the crosses was driving up the weight of the uncrossed features, irrespective of regularization. Running the playground with and without the cross supports this, eg ~1.5, 0.131 and 0.033, respectively, vs ~0.9 with losses 0.096 and 0.039. Running with the cross and 0.3 regularization results in ~0.3, 0.092 and 0.059. Running with just 0.3 regularization results in ~0.3, 0.093 and 0.061. So it would seem there are at least a couple of orthogonal components to "complexity".

## Exercise 2

An answer in the playground mentions: "While test loss decreases, training loss actually increases. This is expected, because you've added another term to the loss function to penalize complexity." 🤔 I think this is referring to the literal addition of the complexity term in the calculation to find a weight (`minimize(loss(data|model)) + complexity(model)`).

# MLCC: Feature crosses

I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Feature Crosses” section.

"Feature cross", "feature cross product" and "synthetic feature" are synonymous. A feature cross is the cross product of two features. The nonlinearity sub-section states "The term cross comes from cross product." Thinking of it as a Cartesian product, which the glossary mentions, helps me grok what's going on, and why it's helpful for the example problem where examples are clustered by quarter (to consider x-y pairs), and esp the exercise involving latitude and longitude pairs.

The video states "Linear learners use linear models". What is a "linear model"? Given "model" is synonymous with "equation" or "function", a "linear model" is a linear equation. For example, Brilliant's wiki states: "A linear model is an equation …" What is a "linear learner"? The video might just be stating a fact: something that learns using a linear model is a "linear learner". For example, Amazon SageMaker's Linear Learner docs state "The algorithm learns a linear function".

A "linear problem" describes a relationship that can be expressed using a straight line (to divide the input data). "Nonlinear problems" cannot be expressed this way.

While trying to figure out why the exercise used an indicator_column, I found some nice TensorFlow tutorials, eg for feature crosses. In retrospect, I see the indicator_column docs state simply â€śRepresents multi-hot representation of given categorical column.â€ť

# MLCC: Representation

I am working through Google’s Machine Learning Crash Course. The notes in this post cover the â€śRepresentationâ€ť section.

feature engineering is another topic which doesn't seem to merit any review papers or books, or even chapters in books, but it is absolutely vital to ML success. […] Much of the success of machine learning is actually success in engineering features that a learner can understand.

Scott Locklin, in "Neglected machine learning ideas", via Machine Learning Mastery's feature engineering overview

I've heard 80% of data science is cleaning. This section introduces a nuance: cleaning includes a step mapping raw data into a format that's appropriate and efficient for inputting into a model. The "scrubbing" sub-section actually seems like the only thing that fits what I previously thought of as "cleaning", eg removing human errors, addressing incomplete data, etc.

The whole section has good recommendations I can see serving as an ongoing reference. For example:

• Good feature values should appear more than 5 or so times in a data set … avoid unique IDs
• Keep data pure by not encoding exceptional states into a feature's value type, eg an integer feature where -1 means undefined, aka "magic" values. Instead, use boolean flags for exceptional states.

The "Z score" scales values as follows: `scaled = (value - mean) / stdev`. Math is Fun has a good explanation for how to derive the standard deviation, but Pandas also provides it trivially in the output from `describe`.
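Pandas makes the scaling a one-liner; note `std` defaults to the sample standard deviation (ddof=1):

```
import pandas as pd

values = pd.Series([10.0, 20.0, 30.0])
scaled = (values - values.mean()) / values.std()
print(scaled.tolist())  # [-1.0, 0.0, 1.0]
```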

"Binning" seems similar to *-hot encoding in that we're enabling weights for each value, although the former concerns continuous values and the latter concerns discrete values. The feature cross video supports this by referring to both in the same context.
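A sketch of both using pandas (the bin edges and latitudes below are made up):

```
import pandas as pd

# Binning: continuous latitudes -> discrete buckets.
lat = pd.Series([32.5, 34.1, 37.8, 40.2])
binned = pd.cut(lat, bins=[32, 34, 38, 42], labels=['south', 'mid', 'north'])

# One-hot encoding: each bucket gets its own column (and thus its own weight).
one_hot = pd.get_dummies(binned)
print(one_hot)
```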

Histograms and stats, like those output by `describe`, can help detect bad data.

# MLCC: Generalization

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [1] through [3].

The fundamental tension of machine learning is between fitting our data well, but also fitting the data as simply as possible.

A reasonable guideline: "The less complex an ML model, the more likely that a good empirical result is not just due to the peculiarities of the sample."

[2] recommends a best-practice: divide labeled examples into "training" and "test" sets.

Never train on test data! 100% accuracy can be a symptom of that.

[3] goes further: divide labeled examples into three sets: "training", "validation" and "test". Simply testing against a "test" set risks overfitting to that set. Instead, iterate against the validation set, and then double-check using the test set.

A continuing impression is that TensorFlow builds in a lot of the best-practices described in this crash course. For example, splitting out a validation set and testing against it is a first-class argument to the Model.fit method.

The exercise associated with [3] is interesting. First, testing against a validation set caught a bug! Second, the bug was a default sort on the latitude column; the validation set was not a random sample.

# MLCC: First Steps with TensorFlow

I am working through Google’s Machine Learning Crash Course. The notes in this post cover [2].

[2] introduces Colab, NumPy, Pandas and TensorFlow.

Colab is like a hosted Jupyter notebook and provides an easy way to play with Python ML libraries, among other things.

NumPy provides performant and user-friendly collections and operations for linear algebra.

Pandas provides tools for working with "dataframes", which are like spreadsheets in memory.

## Digression into Google Sheets

I like building on my understanding. In this context, I want to learn Colab and NumPy by using them to work with the cricket chirp data introduced in [1].

[1] used cricket chirps per minute per temperature as an example, but didn't provide raw data. Dolbear's Law provides an equation we can use to generate data: TC = 10 + (N60 - 40) / 7 → N60 = 7 * TC - 30

Colab and NumPy provide an easy way to use this equation:

```
import numpy as np

# Starts by generating temps, since chirps are dependent on temp.
# Starts at 5 because Dolbear's formula results in a negative value below 5 degrees.
temps = np.arange(5, 36)

# Adds noise to avoid an obviously linear relationship.
# Copies the approach from "NumPy UltraQuick Tutorial" linked from [2].
# Sets low of -5, which limits the minimum chirps to zero.
# Uses temps.size (31) so the noise array aligns with temps.
noise = np.random.randint(low=-5, high=5, size=temps.size)
chirps = 7 * temps - 30 + noise

# Prints CSVs, since Google Sheets knows how to split CSVs on paste.
print(','.join([str(i) for i in temps]))
print(','.join([str(i) for i in chirps]))
```

Example chirps per minute:

`7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215`

Note this generates synthetic data for chirps per minute, but then I'll use them to predict temperature, ie chirps is the feature and temperature is the label.

Copy the temps and chirps CSVs. In Sheets, Edit > paste special > paste comma-separated text (CSV) as columns.

To improve readability, cut the pasted content and Edit > paste special > paste transposed to convert row data to column data.

Add column headers, select everything and then Insert > Chart.

Select "Scatter chart" for the chart type. Under Customize > Series, check the trendline box. Select "Equation" for the label to get the regression equation. Check the R2 box.

We can also use the SLOPE and INTERCEPT methods to calculate the equation.

Slope, intercept and R2, respectively, given the example chirps per minute from above:

• 0.144
• 4.323
• 0.999

Unfortunately, Sheets doesn't have MSE, which I learned about in [1], which leads me to wonder, "What's the relationship between R2 and MSE?" Per [3], we're better off with MSE.

## Digression into SciKit

[2] introduces Pandas after NumPy, but continuing the theme of building on understanding, I'd like to perform a linear regression in Colab, rather than copy-pasting into Sheets. I'll follow [4] and [5] and defer Pandas until I need it for TensorFlow.

```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

actual_temps = np.arange(5, 36)
chirps = np.array([7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215])

model = LinearRegression()
model.fit(chirps[:, np.newaxis], actual_temps)

predicted_temps = model.predict(chirps[:, np.newaxis])

plt.scatter(chirps, actual_temps)
plt.plot(chirps, predicted_temps)

# Starts the y-axis at zero, even though the data starts at 5
plt.ylim(0)

print('Slope: %.3f' % model.coef_[0])
print('Intercept: %.3f' % model.intercept_)
print('MSE: %.3f' % mean_squared_error(actual_temps, predicted_temps))
print('R2: %.3f' % r2_score(actual_temps, predicted_temps))
```

Slope, intercept, MSE and R2, respectively:

• 0.144
• 4.323
• 0.085
• 0.999

Note SciKit can calculate MSE and R2. Perhaps in line with [3], note MSE is non-zero, but R2 is close to 100% 🤔

As expected, Sheets is great for common stuff, but Colab/Jupyter shines for arbitrary calculation.

## TensorFlow

Coincidentally, TensorFlow's fifth birthday was just a couple days ago 🥳

Continuing the theme of building on experience, I'm using the cricket chirp data for the synthetic exercise:

```
my_feature = [float(i) for i in [7,13,15,27,31,38,45,57,57,67,76,85,89,94,100,109,116,120,131,134,144,149,158,165,170,176,187,189,197,208,215]]
my_label   = [float(i) for i in range(5, 36)]
```

The following settings enabled the cricket chirp data to converge with an RMSE ~ 0.8, which seems like a sweet spot of accuracy vs training time:

• Learning rate: 0.01
• Epochs: 50
• Batch size: 1

Decreasing the learning rate (eg 0.001) and increasing the epochs (eg 500) converges with an RMSE ~0.5, but takes forever. Increasing the batch size increases the choppiness of the error tail.

The summary at the bottom of the synthetic data exercise seems generally useful:

• “Training loss should steadily decrease, steeply at first, and then more slowly until the slope of the curve reaches or approaches zero.
• If the training loss does not converge, train for more epochs.
• If the training loss decreases too slowly, increase the learning rate. Note that setting the learning rate too high may also prevent training loss from converging.
• If the training loss varies wildly (that is, the training loss jumps around), decrease the learning rate.
• Lowering the learning rate while increasing the number of epochs or the batch size is often a good combination.
• Setting the batch size to a very small batch number can also cause instability. First, try large batch size values. Then, decrease the batch size until you see degradation.
• For real-world datasets consisting of a very large number of examples, the entire dataset might not fit into memory. In such cases, you’ll need to reduce the batch size to enable a batch to fit into memory.”

For the real data, there's a note about the "max" being anomalous relative to the different percentiles, which makes sense, but is a little abstract. The plot does a good job showing outliers.

Interesting that the RMSE for the real data is ~100, rather than the zero I was going for with the synthetic data. I guess the point is that we're trying to minimize loss, rather than eliminate it.

[2] uses California housing data, but we can browse other datasets at https://datasetsearch.research.google.com/.

Great tip to use corr to see which features correlate with a label, as an alternative to trial and error hyperparameter tuning.
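With a dataframe loaded as in [2], that tip looks something like the following. The rows here are a made-up stand-in for the California housing data; `median_house_value` is the label column in that dataset:

```
import pandas as pd

# Stand-in rows for the California housing data used in [2].
df = pd.DataFrame({
    'median_income': [8.3, 7.2, 5.6, 3.8, 2.0],
    'total_rooms': [880, 7099, 1467, 1274, 1627],
    'median_house_value': [452600, 358500, 352100, 241400, 226700],
})

# Correlation of each column with the label, strongest first.
print(df.corr()['median_house_value'].sort_values(ascending=False))
```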