Introducing MLOps by Mark Treveil et al. provides a thorough but relatively non-technical, enterprise-level introduction to MLOps. Being at a big company and new to ML, I found this book helpful for developing a big picture of how to build and maintain ML infrastructure.
The Data Pipeline Pocket Reference by James Densmore is a practical overview of pipeline concepts and terminology. It demonstrates most concepts using framework-agnostic Python scripts. It also provides a good introduction to MLOps by recommending popular solutions to common problems, like Apache Airflow for orchestration. I’d recommend it to anyone ramping up on MLOps.
I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Neural Networks” module.
Does “deep learning” imply neural networks?
The introductory video refers to “deep neural networks”, so I’m wondering what the relationship is between deep learning and neural networks.
“To give you some context, modern Convolutional Networks contain on orders of 100 million parameters and are usually made up of approximately 10-20 layers (hence deep learning)” – https://cs231n.github.io/neural-networks-1/
“Deep Learning is simply a subset of the architectures (or templates) that employs ‘neural networks’” – https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df (TDS)
“Deep learning” in Google’s glossary links to “deep model”: “A type of neural network containing multiple hidden layers.”
“However, until 2006 we didn’t know how to train neural networks to surpass more traditional approaches, except for a few specialized problems. What changed in 2006 was the discovery of techniques for learning in so-called deep neural networks.” – http://neuralnetworksanddeeplearning.com/about.html
Towards Data Science’s “Intuitive Deep Learning Part 1a: Introduction to Neural Networks” clarifies that “deep learning” is a subset of machine learning. I guess they’re both “learning”. I like the comparison of an algorithm to a recipe, and in this context, ML optimizes a recipe; deep learning is one family of optimization techniques.
When to use neural networks?
Small data with linear relationships → LSR
Large data with linear relationships → gradient descent
Large data with simple, nonlinear relationships → feature crosses
Large data with complex, nonlinear relationships → NN
“Neural nets will give us a way to learn nonlinear models without the use of explicit feature crosses” – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
“Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data” – http://neuralnetworksanddeeplearning.com/index.html
NN “have the flexibility to model many complicated relationships between input and output”- https://towardsdatascience.com/intuitive-deep-learning-part-1a-introduction-to-neural-networks-aaeb3a1500df
“That’s not to say that neural networks aren’t good at solving simpler problems. They are. But so are many other algorithms. The complexity, resource-intensiveness and lack of interpretability in neural networks is sometimes a necessary evil, but it’s only warranted when simpler methods are inapplicable” – https://www.quora.com/What-kinds-of-machine-learning-problems-are-neural-networks-particularly-good-at-solving
Why are there multiple layers?
“each layer is effectively learning a more complex, higher-level function over the raw inputs” – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy
“A single-layer neural network can only be used to represent linearly separable functions … Most problems that we are interested in solving are not linearly separable.” – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
The universal approximation theorem states that a network with a single hidden layer (given enough neurons) can approximate any continuous function – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
“How many hidden layers? Well if your data is linearly separable (which you often know by the time you begin coding a NN) then you don’t need any hidden layers at all. Of course, you don’t need an NN to resolve your data either, but it will still do the job.” – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
“One hidden layer is sufficient for the large majority of problems.” – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
“Even for those functions that can be learned via a sufficiently large one-hidden-layer MLP, it can be more efficient to learn it with two (or more) hidden layers” – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
“Multi-layer” implies at least one hidden layer: “It has an input layer that connects to the input variables, one or more hidden layers” – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
Chris Olah’s “Neural Networks, Manifolds and Topology”, linked from the crash course, visualizes how data sets intersecting in n dimensions may be disjoint in n + 1 dimensions, which enables a linear solution. Other than that, though, Olah’s article was over my head. Articles like TDS are more my speed.
Why are some layers called “hidden”?
“The interior layers are sometimes called “hidden layers” because they are not directly observable from the systems inputs and outputs.” – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
How many layers do I need?
Task 4 in the exercise recommends playing around with the hyperparameters to get a certain loss, but the combinatorial complexity makes me wonder if there’s an intuitive way to think about the role of layers and neurons. 🤔
“Regardless of the heuristics you might encounter, all answers will come back to the need for careful experimentation to see what works best for your specific dataset” – https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
“In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.” – https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
“3 neurons are enough because the XOR function can be expressed as a combination of 3 half-planes (ReLU activation)” – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises Seems narrowing the problem space to ReLU enables some deterministic optimization.
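To make the half-plane idea concrete, here’s a tiny hand-built construction (my own illustration using 2 ReLU units, not necessarily the 3-neuron combination the playground learns):

```python
# XOR expressed as a sum of ReLU half-planes:
# XOR(x1, x2) = relu(x1 - x2) + relu(x2 - x1), i.e. |x1 - x2| for binary inputs.
def relu(z):
    return max(0, z)

def xor(x1, x2):
    return relu(x1 - x2) + relu(x2 - x1)

print([xor(0, 0), xor(0, 1), xor(1, 0), xor(1, 1)])  # [0, 1, 1, 0]
```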
“The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers due to the vanishing gradient problem” – https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
“use as big of a neural network as your computational budget allows, and use other regularization techniques to control overfitting” – https://cs231n.github.io/neural-networks-1/#arch
“a model with 1 neuron in the first hidden layer cannot learn a good model no matter how deep it is. This is because the output of the first layer only varies along one dimension (usually a diagonal line), which isn’t enough to model this data set well” – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
“A single layer with more than 3 neurons has more redundancy, and thus is more likely to converge to a good model” – https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/playground-exercises
Two hidden layers with eight neurons in the first and two in the second performed well (~0.15 loss) on repeated runs.
Heuristics from spiral solution video:
- Tune number of layers and nodes. Max neurons in the first layer, tapering down a couple layers to the output is a reasonable start. Each neuron takes time to train, though, so reduce total neurons if training is too slow. This is reinforced by the practice exercise, which started with two layers of 20 and 12 neurons, and then tried to reduce the number of neurons while keeping loss stable.
- Reduce the learning rate to smooth the loss curve
- Add regularization to further smooth the loss curve
- Feature engineering helps with noisy data
- Try different activation functions. Ultimately, tanh had the best fit
- Iterate from 1
Even after all this, tuning hyperparameters still seems combinatorially complex.
A neural net consists of layers. Nodes in the bottom (input) layer compute linear equations. Nodes in a “hidden” layer transform a linear node’s output into a nonlinear one using an “activation function”. The crash course states “any mathematical function can serve as an activation function”.
A sigmoid is an example of an activation function. I remember from the module on logistic regression (notes) that we used a sigmoid to transform a linear equation into a probability.
Why is it called a “neuron”?
The glossary definition for “neuron” is pretty good: 1) “taking in multiple input values and generating one output value”, and 2) ”The neuron calculates the output value by applying an activation function.” Aside: this reminds me of lambda architecture. I appreciate TDS clarifying neurons “often take some linear combination of the inputs”, like w1x1 + w2x2 + w3x3. I suppose this is what the glossary means by “a weighted sum of input values”.
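A minimal sketch of that glossary definition (names are mine; ReLU chosen arbitrarily as the activation function):

```python
# A single neuron: a weighted sum of inputs (w1*x1 + w2*x2 + ... + bias)
# passed through an activation function.
def relu(z):
    return max(0.0, z)

def neuron(inputs, weights, bias, activation=relu):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # relu(0.5 - 0.5 + 0.1) = 0.1
```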
TDS references a single image from the biological motivations section of Stanford’s CS231n, but I find both the images from that section useful for comparison.
I like TDS’s definition of a “layer”: “a ‘neural network’ is simply made out of layers of neurons, connected in a way that the input of one layer of neuron is the output of the previous layer of neurons”. In that context, the hidden layer diagrams from the crash course make sense.
Peter Norvig summarized the value of ML from a software engineering perspective in his “Introduction to Machine Learning” for Google’s Machine Learning Crash Course:
First, it gives you a tool to reduce the time you spend programming … Second, it will allow you to customize your products, making them better for specific groups of people … And third, machine learning lets you solve problems that you, as a programmer, have no idea how to do by hand.
From my perspective, the first two can be rephrased as:
- Models add a new dimension to code reuse
- For a class of problems, training models scales better than hand-writing code
There’s also a fourth point linked from the bottom of the intro:
Rule #1: Don’t be afraid to launch a product without machine learning
That fourth point reminds me of the “build” vs “grow” domains – until we’ve built a product that lots of people find useful, statistics-based growth tools, like large-scale AB testing, can be relatively high-cost, low-value. We might even say such optimizations only make sense once we have more users than can be efficiently contacted directly. Put another way, if we only have one user, and she says she only wants to see articles about sports, we don’t need ML to predict her interests.
I think about these four points a lot, almost like a koan. They provide a helpful anchor as I try to distill a large amount of theory into tools I can apply to the problems I’m familiar with.
Craft Beer and Brewing’s article “Beautiful Future: How Deschutes Uses Artificial Intelligence & Machine Learning to Brew Better Beer” describes an intuitive application of ML. Deschutes brewery wanted to more accurately predict when a given fermentation process would complete. The problem statement is simple:
Produce the same amount of beer in less time, while maintaining or improving the quality of the beer along the way, and you’ll have more resources for the intentional play that leads to new beers that drinkers love.
I like the explicit recognition that reducing toil frees time for more valuable activities. This is reiterated later:
Most beer consumers aren’t concerned with how efficiently or cost-effectively a brewery makes their beer—they want high-quality beer, and they want new and exciting beers.
Fermentation sounds like a relatively simple curve to plot. It’s easy to imagine manually monitoring something like sugar content vs time, and then using that data to train a model.
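As a toy sketch of that idea (all numbers made up; Deschutes’ actual model is surely more sophisticated), we could fit an exponential decay to gravity readings and solve for the finish time:

```python
import math

# Hypothetical readings: remaining gravity above the target final gravity,
# sampled over the first three days of fermentation.
hours = [0, 24, 48, 72]
gravity_above_final = [40.0, 20.0, 10.0, 5.0]

# For exponential decay g = g0 * exp(-k*t), ln(g) is linear in t,
# so fit a line to (t, ln(g)) by least squares.
xs, ys = hours, [math.log(g) for g in gravity_above_final]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
k = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
g0 = math.exp(ybar + k * xbar)

# Predict when gravity drops below a chosen threshold.
target = 2.0
t_done = math.log(g0 / target) / k
print(round(t_done))  # ~104 hours for this made-up data
```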
Brewers now trust automation to act on the predictions:
Today, cellar operators at Deschutes have such a high level of confidence in the algorithm that they typically allow the software to trigger next steps in the brewing process.
The automation is also easy to imagine. Deschutes’ Brewery Pi project targets Raspberry Pi, which I can see being used to drive hardware to adjust temperature, add nutrients, drain a fermentation vessel, etc. I really like how Deschutes made the code open-source 🍻
The NY Times has a series of articles exploring machine learning and artificial intelligence.
I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Regularization for Sparsity” module.
Best-practice: if you’re overfitting, you want to regularize.
- “A problem is sparse if each constraint function depends on only a small number of the variables”
- “Like least-squares or linear programming, there are very effective algorithms that can reliably and efficiently solve even large convex problems”, which would explain why gradient descent is a tool we use
- Regularization is when “extra terms are added to the cost function”
- “If the problem is sparse, or has some other exploitable structure, we can often solve problems with tens or hundreds of thousands of variables and constraint”, so it would seem performance is another motivation for regularization
Ideally, we could perform L0 regularization, but that’s non-convex, and so, NP-hard (slide 7). (I like Math is Fun’s NP-complete page 🙂) As noted wrt gradient descent, we need a convex loss curve to optimize. L1 approximates L0 and is easy to compute.
Quora provides a couple intuitive explanations for L1 and L2 norms: “L2 norm there yields Euclidean distance … The L1 norm gives rise to what can be referred to as the “taxi-cab” distance”
Rorasa’s blog states “Norm may come in many forms and many names, including these popular name: Euclidean distance, Mean-squared Error, etc … Because the lack of l0-norm’s mathematical representation, l0-minimisation is regarded by computer scientist as an NP-hard problem, simply says that it’s too complex and almost impossible to solve. In many case, l0-minimisation problem is relaxed to be higher-order norm problem such as l1-minimisation and l2-minimisation.”
The glossary summarizes:
- L1 regularization “penalizes weights in proportion to the sum of the absolute values of the weights. In models relying on sparse features, L1 regularization helps drive the weights of irrelevant or barely relevant features to exactly 0”
- L2 regularization “penalizes weights in proportion to the sum of the squares of the weights. L2 regularization helps drive outlier weights (those with high positive or low negative values) closer to 0 but not quite to 0”
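A tiny sketch of why that difference falls out of the math (my own illustration; only the penalty gradients are applied to a single weight, with a made-up learning rate and lambda):

```python
# L1's gradient is lam*sign(w), a constant-size step toward zero,
# so the weight reaches exactly 0. L2's gradient is 2*lam*w, a
# proportional shrink, so the weight approaches 0 but never lands on it.
def l1_step(w, lr=0.1, lam=1.0):
    step = lr * lam
    if abs(w) <= step:  # clamp so we don't overshoot past zero
        return 0.0
    return w - step * (1 if w > 0 else -1)

def l2_step(w, lr=0.1, lam=1.0):
    return w - lr * 2 * lam * w  # multiplicative shrinkage: w *= 0.8

w1 = w2 = 0.5
for _ in range(10):
    w1 = l1_step(w1)
    w2 = l2_step(w2)

print(w1)  # exactly 0.0
print(w2)  # small but nonzero (0.5 * 0.8**10 ≈ 0.054)
```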
I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Logistic Regression” module.
“Logistic regression” generates a probability (a value between 0 and 1). It’s also very efficient.
Note the glossary defines logistic regression as a classification model, which is weird since it has “regression” in the name. I suspect this is explained by “You can interpret the value between 0 and 1 in either of the following two ways: … a binary classification problem … As a value to be compared against a classification threshold …”
Note the sigmoid function is just y = 1 / (1 + e^(-𝞼)), where 𝞼 is our usual linear equation. I suppose we’re transforming the linear output into a logistic form.
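In code (a minimal sketch):

```python
import math

# The sigmoid squashes the linear output z = wx + b into (0, 1),
# so it can be read as a probability.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))   # 0.5
print(sigmoid(5.0))   # ~0.993 (large positive z -> near 1)
print(sigmoid(-5.0))  # ~0.007 (large negative z -> near 0)
```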
Regularization (notes) is important in logistic regression. “Without regularization, the asymptotic nature of logistic regression would keep driving loss towards 0 in high dimensions”; L2 regularization and early stopping are the main remedies.
The loss function for logistic regression is “log loss”.
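A minimal sketch of log loss for a single binary example:

```python
import math

# Log loss: -(y*log(p) + (1-y)*log(1-p)). Confidently wrong predictions
# are penalized heavily; confidently correct ones cost little.
def log_loss(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(log_loss(1, 0.9), 4))  # small loss: confident and correct
print(round(log_loss(1, 0.1), 4))  # large loss: confident and wrong
```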
I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Classification” module.
New metrics for evaluating classification performance:
“Accuracy” simply measures percentage of correct predictions.
It fails on class-imbalance, aka “skewed class”, problems, though. Neptune AI states it bluntly: “You shouldn’t use accuracy on imbalanced problems.” Heuristic: is the percent accuracy > the imbalance? For example, if a population is 99% disease-free, an accuracy of 99% requires no intelligence. This is called the “accuracy paradox”. Precision and recall are better suited to class-imbalance problems.
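A toy illustration of the paradox (made-up 1%-positive data):

```python
# A model that always predicts "negative" on a 99%-negative dataset
# scores 99% accuracy yet catches zero positives.
labels = [1] * 10 + [0] * 990  # 1% positive
preds = [0] * 1000             # always predict negative

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
true_pos = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
recall = true_pos / sum(labels)

print(accuracy)  # 0.99
print(recall)    # 0.0
```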
Tip: when possible, calculate the odds of each class independently as a baseline to compare with accuracy.
A “confusion matrix”, aka “classification matrix”, quantifies predicted vs actual outcomes, which is useful for evaluating model performance.
A false positive is a “type one” error. A false negative is a “type two” error. When the cost of a false negative is high, type two errors must be minimized. In other words, maximize recall.
Precision and recall
Andrew Ng’s “Lecture 11.4 — Machine Learning System Design | Trading Off Precision And Recall” provides a helpful phrasing:
- Precision = true positive / predicted positive
- Recall = true positive / actual positive
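In code, with hypothetical confusion-matrix counts:

```python
# Ng's phrasing, applied to made-up counts:
# precision = true positive / predicted positive (tp + fp)
# recall    = true positive / actual positive    (tp + fn)
tp, fp, fn = 30, 10, 20

precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(precision)  # 0.75
print(recall)     # 0.6
```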
Regarding the accuracy paradox, if a model simply predicts negative all the time (eg because 99% of email isn’t spam), it will fail recall and precision because it never has a true positive.
Wikipedia makes a point: “It is trivial to achieve recall of 100% by returning all documents in response to any query”
Precision and recall are important, and in tension. Classification depends on a “threshold”. Increasing the threshold increases precision, but decreases recall. Wikipedia uses surgery for a brain tumor to illustrate: a conservative approach increases the risk of false negative; an aggressive approach increases risk of false positive. Plotting the “precision-recall curve” can also help demonstrate the relationship, as demonstrated by Andrew Ng.
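A toy sketch of the tension (scores and labels are made up): raising the threshold tends to increase precision and decrease recall.

```python
# Sweep the classification threshold over hypothetical model scores.
scores = [0.1, 0.4, 0.45, 0.6, 0.8, 0.9]
labels = [0, 0, 1, 0, 1, 1]

def precision_recall(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    precision = tp / max(sum(preds), 1)  # avoid 0/0 when nothing is predicted positive
    recall = tp / sum(labels)
    return precision, recall

for t in (0.3, 0.5, 0.7):
    print(t, precision_recall(t))
# precision climbs 0.6 -> 0.67 -> 1.0 while recall falls 1.0 -> 0.67 -> 0.67
```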
Wikipedia has a nice visualization differentiating precision and recall:
ROC and AUC
The “ROC curve” helps identify the best threshold.
“AUC” compares ROCs, helping identify the best model.
StatQuest’s “ROC and AUC, Clearly Explained!” states precision is a better metric than the false positive rate for class imbalance problems because it doesn’t take true negatives into account.
Keras gives us AUC for a model, but what’s the corresponding threshold? The crash course clarifies: “AUC is classification-threshold-invariant. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen.” Ok, then why use anything but AUC? Neptune AI summarizes: “… use it when you care equally about positive and negative classes.”
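One way to see the threshold-invariance is the rank-based view of AUC: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A toy sketch (scores made up):

```python
# AUC via pairwise comparisons: no threshold appears anywhere.
scores = [0.1, 0.4, 0.35, 0.8]
labels = [0, 0, 1, 1]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
pairs = [(p, n) for p in pos for n in neg]
auc = sum((p > n) + 0.5 * (p == n) for p, n in pairs) / len(pairs)

print(auc)  # 0.75: one positive is out-ranked by one negative
```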
Prediction bias

Seems like prediction bias is another way of quantifying model performance. If we know the actual probability of occurrence and the model produces a significantly different probability, that indicates something’s amiss.
The formal definition is: average predicted occurrence – average actual occurrence. There’s a helpful note that a model simply returning the average occurrence would have zero prediction bias, but would still be a bad model.
The crash course gives a few causes for bias. StatQuest’s “Machine Learning Fundamentals: Bias and Variance” adds another: the inability of a ML algorithm to capture the true relationship between features and labels, eg linear regression trying to capture a curved relationship.
Fix prediction bias in the model, rather than adjusting the model output.
Interesting clarification that predicted values are a probability range, but actual values are discrete, so we need to segment values and average them to make a comparison.
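A sketch of that segment-and-average idea (made-up data, two arbitrary buckets): within each bucket, prediction bias is the average prediction minus the average label.

```python
# Group examples by predicted probability, then compare average
# prediction vs. average label within each bucket.
preds  = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
labels = [0,   0,   1,    1,   1,   0]

buckets = {}
for p, y in zip(preds, labels):
    key = "low" if p < 0.5 else "high"
    buckets.setdefault(key, []).append((p, y))

bias = {}
for key, pairs in buckets.items():
    avg_pred = sum(p for p, _ in pairs) / len(pairs)
    avg_label = sum(y for _, y in pairs) / len(pairs)
    bias[key] = avg_pred - avg_label

print(bias)  # the low bucket under-predicts, the high bucket over-predicts here
```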
I am working through Google’s Machine Learning Crash Course. The notes in this post cover the “Regularization” module.
When training loss is less than validation loss, we’re “overfitting” to the training data, reducing generalization.
“Structural risk minimization” refers to regularization by minimizing the complexity of the model.
The “L2 regularization” formula quantifies complexity as the sum of the squares of the feature weights.
“Lambda” aka “regularization rate” governs the amount of regularization applied. Increasing lambda strengthens regularization, resulting in a steeper histogram of weights, for example. A tool called Vizier can help optimize lambda.
Helpful phrasing from StatQuest’s “Machine Learning Fundamentals: Bias and Variance”: regularization is one technique for finding a balance between a simple model (that may have high bias) and a complex model (that may have high variability).
The answer for task 1 in the first exercise, notes the “relative weight” of lines from FEATURE to OUTPUT in the playground. What is “relative weight”? 🤔 Later, the second exercise mentions “The relative thickness of each line running from FEATURES to OUTPUT represents the learned weight for that feature or feature cross. You can find the exact weight values by hovering over each line.” So, “relative weight” in this context is just referring to the weight of one line relative to another, rather than a novel concept.
The answer for task 1 states: “The lines emanating from X1 and X2 are much thicker than those coming from the feature crosses. So, the feature crosses are contributing far less to the model than the normal (uncrossed) features.” Task 2 states “If we use a model that is too complicated, such as one with too many crosses …” Later, we learn “If model complexity is a function of weights …” Is complexity a function of crosses or weights? 🤔 I guess the idea is that the additional complexity of the crosses was driving up the weight of the uncrossed features, irrespective of regularization. Running the playground with and without the cross supports this, eg ~1.5, 0.131 and 0.033, respectively, vs ~0.9 with losses 0.096 and 0.039. Running with the cross and 0.3 regularization results in ~0.3, 0.092 and 0.059. Running with just 0.3 regularization results in ~0.3, 0.093 and 0.061. So it would seem there are at least a couple, orthogonal components to “complexity”.
An answer in the playground mentions: “While test loss decreases, training loss actually increases. This is expected, because you’ve added another term to the loss function to penalize complexity.” 🤔 I think this is referring to the literal addition of the complexity term in the training objective: minimize(loss(data|model) + complexity(model)).
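A minimal sketch of that objective with L2 complexity (function names and numbers are mine):

```python
# Regularized objective: minimize loss(data|model) + lambda * complexity(model),
# where L2 complexity is the sum of squared weights.
def l2_complexity(weights):
    return sum(w ** 2 for w in weights)

def regularized_loss(data_loss, weights, lam):
    return data_loss + lam * l2_complexity(weights)

weights = [0.5, -2.0, 1.0]
print(l2_complexity(weights))               # 0.25 + 4.0 + 1.0 = 5.25
print(regularized_loss(0.3, weights, 0.01)) # 0.3 + 0.01 * 5.25 = 0.3525
```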