First, it gives you a tool to reduce the time you spend programming … Second, it will allow you to customize your products, making them better for specific groups of people … And third, machine learning lets you solve problems that you, as a programmer, have no idea how to do by hand.
From my perspective, the first two can be rephrased as:
Models add a new dimension to code reuse
For a class of problems, training models scales better than hand-writing code
There’s also a fourth point linked from the bottom of the intro:
Rule #1: Don’t be afraid to launch a product without machine learning
That fourth point reminds me of the “build” vs “grow” domains – until we’ve built a product that lots of people find useful, statistics-based growth tools, like large-scale A/B testing, can be relatively high-cost, low-value. We might even say such optimizations only make sense once we have more users than can be efficiently contacted directly. Put another way, if we only have one user, and she says she only wants to see articles about sports, we don’t need ML to predict her interests.
I think about these four points a lot, almost like a koan. They provide a helpful anchor as I try to distill a large amount of theory into tools I can apply to the problems I’m familiar with.
Produce the same amount of beer in less time, while maintaining or improving the quality of the beer along the way, and you’ll have more resources for the intentional play that leads to new beers that drinkers love.
I like the explicit recognition that reducing toil frees time for more valuable activities. This is reiterated later:
Most beer consumers aren’t concerned with how efficiently or cost-effectively a brewery makes their beer—they want high-quality beer, and they want new and exciting beers.
Fermentation sounds like a relatively simple curve to plot. It’s easy to imagine manually monitoring something like sugar content vs time, and then using that data to train a model.
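To make that concrete, here’s a rough sketch with made-up readings and an assumed exponential-decay shape for sugar consumption. This is just an illustration of fitting a fermentation curve, not Deschutes’ actual model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fermentation readings: hours elapsed vs. remaining sugar (degrees Plato).
hours = np.array([0, 12, 24, 36, 48, 72, 96, 120], dtype=float)
plato = np.array([12.0, 11.5, 10.2, 8.1, 5.9, 3.2, 2.1, 1.8])

def decay(t, start, rate, floor):
    """Simple exponential decay toward a terminal gravity."""
    return floor + (start - floor) * np.exp(-rate * t)

# Fit the curve to the observed readings.
params, _ = curve_fit(decay, hours, plato, p0=[12.0, 0.02, 1.5])

# Use the fitted curve to predict readings further out, e.g. every 12 hours for a week.
print(decay(np.arange(0, 168, 12), *params))
```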
Brewers now trust automation to act on the predictions:
Today, cellar operators at Deschutes have such a high level of confidence in the algorithm that they typically allow the software to trigger next steps in the brewing process.
The automation is also easy to imagine. Deschutes’ Brewery Pi project targets Raspberry Pi, which I can see being used to drive hardware to adjust temperature, add nutrients, drain a fermentation vessel, etc. I really like how Deschutes made the code open-source 🍻
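Here’s the kind of glue code I’m imagining, using RPi.GPIO with a hypothetical pin assignment (this is a sketch, not anything from the actual Brewery Pi code, and it only runs on a Raspberry Pi):

```python
import RPi.GPIO as GPIO  # available on a Raspberry Pi

# Hypothetical pin assignment for a glycol cooling valve.
GLYCOL_VALVE_PIN = 17

GPIO.setmode(GPIO.BCM)
GPIO.setup(GLYCOL_VALVE_PIN, GPIO.OUT)

def set_cooling(enabled: bool) -> None:
    """Open or close the glycol valve to nudge fermentation temperature."""
    GPIO.output(GLYCOL_VALVE_PIN, GPIO.HIGH if enabled else GPIO.LOW)

# e.g. if the model predicts the tank is running warm:
set_cooling(True)
```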
“A problem is sparse if each constraint function depends on only a small number of the variables”
“Like least-squares or linear programming, there are very effective algorithms that can reliably and efficiently solve even large convex problems”, which would explain why gradient descent is a tool we use
Regularization is when “extra terms are added to the cost function”
“If the problem is sparse, or has some other exploitable structure, we can often solve problems with tens or hundreds of thousands of variables and constraints”, so it would seem performance is another motivation for regularization
Rorasa’s blog states “Norm may come in many forms and many names, including these popular name: Euclidean distance, Mean-squared Error, etc … Because the lack of l0-norm’s mathematical representation, l0-minimisation is regarded by computer scientist as an NP-hard problem, simply says that it’s too complex and almost impossible to solve. In many case, l0-minimisation problem is relaxed to be higher-order norm problem such as l1-minimisation and l2-minimisation.”
L1 regularization “penalizes weights in proportion to the sum of the absolute values of the weights. In models relying on sparse features, L1 regularization helps drive the weights of irrelevant or barely relevant features to exactly 0”
L2 regularization “penalizes weights in proportion to the sum of the squares of the weights. L2 regularization helps drive outlier weights (those with high positive or low negative values) closer to 0 but not quite to 0”
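To keep the norms and penalties straight, here’s a small sketch of what each one computes, using a made-up weight vector and regularization rate:

```python
import numpy as np

# Hypothetical weight vector and regularization rate, purely for illustration.
w = np.array([0.0, 0.0, 0.3, -1.2, 0.05])
lam = 0.01  # regularization rate ("lambda")

l0 = np.count_nonzero(w)                # "l0-norm": how many weights are non-zero
l1_penalty = lam * np.sum(np.abs(w))    # L1: sum of absolute values
l2_penalty = lam * np.sum(w ** 2)       # L2: sum of squares

# A regularized cost is then the data loss plus one of these penalties, e.g.
# total_loss = mse(y_true, y_pred) + l2_penalty
print(l0, l1_penalty, l2_penalty)
```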
Note the glossary defines logistic regression as a classification model, which is weird since it has “regression” in the name. I suspect this is explained by “You can interpret the value between 0 and 1 in either of the following two ways: … a binary classification problem … As a value to be compared against a classification threshold …”
The “sigmoid” function, aka “logistic” function/transform, produces a bounded value between 0 and 1.
Note the sigmoid function is just y = 1 / (1 + e^(-𝞼)), where 𝞼 is our usual linear equation. I suppose we’re transforming the linear output into a logistic form.
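A quick sketch of that transform:

```python
import numpy as np

def sigmoid(z):
    """Squash a linear output z = w·x + b into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A strongly negative linear output maps near 0, a strongly positive one near 1.
print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # ~[0.018, 0.5, 0.982]
```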
Regularization (notes) is important in logistic regression, especially L2 regularization and early stopping: “Without regularization, the asymptotic nature of logistic regression would keep driving loss towards 0 in high dimensions”.
New metrics for evaluating classification performance:
“Accuracy” simply measures percentage of correct predictions.
It fails on class-imbalance, aka “skewed class”, problems, though. Neptune AI states it bluntly: “You shouldn’t use accuracy on imbalanced problems.” Heuristic: is the percent accuracy > the imbalance? For example, if a population is 99% disease-free, an accuracy of 99% requires no intelligence. This is called the “accuracy paradox”. Precision and recall are better suited to class-imbalance problems.
Tip: if possible, calculate the base rate (the odds of each class) independently, so you have something to compare accuracy against.
A “confusion matrix”, aka “classification matrix”, quantifies predicted vs actual outcomes, which is useful for evaluating model performance.
Regarding the accuracy paradox, if a model simply predicts negative all the time (eg because 99% of email isn’t spam), it will fail recall and precision because it never has a true positive.
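A small sketch of the paradox, with made-up labels where roughly 1% of email is spam:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

# Hypothetical labels where only ~1% of emails are spam (the positive class).
rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)

# A "model" that always predicts not-spam.
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))                    # ~0.99, despite learning nothing
print(confusion_matrix(y_true, y_pred))                  # no true positives at all
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(recall_score(y_true, y_pred))                      # 0.0
```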
Wikipedia makes a point: “It is trivial to achieve recall of 100% by returning all documents in response to any query”
Precision and recall are important, and in tension. Classification depends on a “threshold”. Increasing the threshold increases precision, but decreases recall. Wikipedia uses surgery for a brain tumor to illustrate: a conservative approach increases the risk of false negative; an aggressive approach increases risk of false positive. Plotting the “precision-recall curve” can also help demonstrate the relationship, as demonstrated by Andrew Ng.
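A quick sketch of the tension, with made-up scores and labels: as the threshold rises here, precision holds or improves while recall falls:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and predicted probabilities.
y_true = np.array([0, 0, 1, 1, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.55])

for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_prob >= threshold).astype(int)
    print(threshold,
          precision_score(y_true, y_pred, zero_division=0),
          recall_score(y_true, y_pred))
# 0.3 -> precision ~0.83, recall 1.0
# 0.5 -> precision 1.0,   recall 0.8
# 0.7 -> precision 1.0,   recall 0.4
```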
Keras gives us AUC for a model, but what’s the corresponding threshold? The crash course clarifies: “AUC is classification-threshold-invariant. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen.” Ok, then why use anything but AUC? Neptune AI summarizes: “… use it when you care equally about positive and negative classes.”
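The threshold-invariance shows up in the API too: sklearn’s roc_auc_score, for example, takes the raw scores directly, with no threshold argument at all (same made-up data as above):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.55])

print(roc_auc_score(y_true, y_prob))  # ~0.93 for these scores; no threshold chosen anywhere
```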
Prediction bias seems like another way of quantifying model performance. If we know a probability of occurrence and the model produces a significantly different probability, that indicates something’s amiss.
The formal definition is: average predicted occurrence – average actual occurrence. There’s a helpful note that a model simply returning the average occurrence would have zero prediction bias, but would still be a bad model.
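In code, that’s just a difference of means (made-up numbers):

```python
import numpy as np

# Hypothetical predicted probabilities vs. actual labels.
predictions = np.array([0.2, 0.7, 0.1, 0.9, 0.4])
labels      = np.array([0,   1,   0,   1,   1])

prediction_bias = predictions.mean() - labels.mean()
# Close to 0 is good, but zero bias alone doesn't prove a good model:
# always predicting the average occurrence also yields zero bias.
print(prediction_bias)
```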
Cal Newport, who wrote a book I like called Deep Work, recently wrote an article “The Rise and Fall of Getting Things Done” on the decline in popularity of productivity tools, largely due to their inefficacy dealing with a steadily increasing onslaught of said things.
I’ve tried GTD and various todo apps and can relate. Inbox zero still has value for me, though I’ve had more success with regular, aggressive purging than meticulous categorization. The former has also been an effective strategy in general, eg binary prioritization.
I’ve found team autonomy to be an effective goal, and was interested to learn about a history of the term in the workplace:
[Peter] Drucker argued that autonomy would be the central feature of the new corporate world
Newport makes a distinction that autonomy doesn’t mean isolation:
Productivity, we must recognize, can never be entirely personal
I agree. Prolonged isolation is an anti-pattern, but I still think there’s value in reserving time for focus work, focusing on the things we can change, and ensuring folks have what they need to make the changes they’re tasked with.
Newport has a great insight regarding overload being caused by a lack of awareness into others’ time:
Because so much of our effort in the office now unfolds in rapid exchanges of digital messages, it’s convenient to allow our in-boxes to become an informal repository for everything we need to get done. This strategy, however, obscures many of the worst aspects of overload culture. When I don’t know how much is currently on your plate, it’s easy for me to add one more thing […] Consider instead a system that externalizes work [emphasis added]. Following the lead of software developers, we might use virtual task boards, where every task is represented by a card that specifies who is doing the work, and is pinned under a column indicating its status. With a quick glance, you can now ascertain everything going on within your team and ask meaningful questions about how much work any one person should tackle at a time
The article repeatedly reminded me of agile, which is eventually alluded to: “Following the lead of software developers … What if you began each morning with a status meeting in which your team confronts its task board? …”
I like the idea of constraining input to status meetings. It’s a challenge in practice, but worth exploring in principle.
The “L2 regularization” formula quantifies complexity as the sum of the squares of the feature weights.
“Lambda” aka “regularization rate” governs the amount of regularization applied. Increasing lambda strengthens regularization, resulting, for example, in a histogram of weights more sharply peaked around zero. A tool called Vizier can help optimize lambda.
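As a sketch of where lambda plugs in outside the playground, e.g. with a Keras L2 regularizer (illustrative values, not from the crash course):

```python
import tensorflow as tf

lam = 0.01  # the regularization rate ("lambda")

# Adds lam * sum(w**2) for this layer's weights to the training loss.
layer = tf.keras.layers.Dense(
    1,
    activation="sigmoid",
    kernel_regularizer=tf.keras.regularizers.l2(lam),
)
```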
The answer for task 1 in the first exercise notes the “relative weight” of lines from FEATURE to OUTPUT in the playground. What is “relative weight”? 🤔 Later, the second exercise mentions “The relative thickness of each line running from FEATURES to OUTPUT represents the learned weight for that feature or feature cross. You can find the exact weight values by hovering over each line.” So, “relative weight” in this context is just referring to the weight of one line relative to another, rather than a novel concept.
The answer for task 1 states: “The lines emanating from X1 and X2 are much thicker than those coming from the feature crosses. So, the feature crosses are contributing far less to the model than the normal (uncrossed) features.” Task 2 states “If we use a model that is too complicated, such as one with too many crosses …” Later, we learn “If model complexity is a function of weights …” Is complexity a function of crosses or weights? 🤔 I guess the idea is that the additional complexity of the crosses was driving up the weight of the uncrossed features, irrespective of regularization. Running the playground with and without the cross supports this, e.g. ~1.5, 0.131 and 0.033, respectively, vs ~0.9 with losses 0.096 and 0.039. Running with the cross and 0.3 regularization results in ~0.3, 0.092 and 0.059. Running with just 0.3 regularization results in ~0.3, 0.093 and 0.061. So it would seem there are at least a couple of orthogonal components to “complexity”.
An answer in the playground mentions: “While test loss decreases, training loss actually increases. This is expected, because you’ve added another term to the loss function to penalize complexity.” 🤔 I think this is referring to the literal addition of the complexity term in the expression being minimized to find the weights: minimize(loss(data|model) + complexity(model)).