May 13, 2026
Learning Machine Learning Was Easier Than I Thought
I used to think machine learning was an impossible black box until I finally learned the math behind it.
*As a forenote: I am nowhere CLOSE to being a machine learning expert. I am still new and have a lot to learn. This post is meant to inspire those afraid to start ML. I am completely aware that the math does actually get really hard really quickly; I just haven't gotten there yet.*
I spent years thinking machine learning was this impossibly complicated black box. Every time I tried to learn it through Kaggle competitions or tutorials on libraries like scikit-learn, it felt miserable.
The field seemed buried under layers of intimidating math and jargon, and the idea of applying models without understanding what they were actually doing always bothered me.
That was until I finally sucked it up and said:
“Time to do some hard math.”
To my surprise, the math was not hard (see forenote).
When it comes down to it, a perceptron is simple, logistic regression is simple, and even neural networks, at their core, are surprisingly intuitive.
Of course, the deeper you go, the harder the material becomes. I am absolutely not some math prodigy. The hard math is definitely there, but the barrier to entry is much lower than I once thought.
The buzzwords I used to hear but never really understood—things like gradient descent, loss functions, and weights—turned out to be surprisingly approachable once I actually learned the concepts behind them.
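To give a flavor of just how approachable these ideas are, here is a minimal sketch of gradient descent, written by me for illustration (the variable names and the tiny one-point dataset are my own, not anything from a library): we repeatedly nudge a weight downhill along the gradient of a squared-error loss.

```javascript
// Fit y = w * x to one data point by gradient descent on squared error.
const x = 2, y = 10;       // a single training example
const learningRate = 0.1;
let w = 0;                 // initial weight

for (let i = 0; i < 100; i++) {
  const prediction = w * x;
  const error = prediction - y;    // loss = error^2
  const gradient = 2 * error * x;  // d(loss)/dw
  w -= learningRate * gradient;    // step downhill
}
// w converges toward y / x = 5
```

That loop is, at its core, the same mechanism that trains far bigger models: compute a loss, take its gradient with respect to the weights, step the other way.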
Perceptron Example
A perceptron is probably the simplest model that still feels like “machine learning.” At prediction time, it just checks whether the weighted sum of the inputs is positive or negative.
predict(x) {
  return Perceptron.dotProduct(this.weights, x) >= 0 ? 1 : -1;
}
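The snippet above assumes a `dotProduct` helper and already-trained weights. For completeness, here is a self-contained sketch of the classic perceptron learning rule (the helper, the AND-gate data, and the training loop are my own illustration, not necessarily the post's implementation):

```javascript
// Self-contained perceptron sketch using the mistake-driven update rule.
const dotProduct = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);
const predict = (weights, x) => (dotProduct(weights, x) >= 0 ? 1 : -1);

// AND-gate data; the last input is a bias term fixed at 1.
const data = [
  { x: [0, 0, 1], y: -1 },
  { x: [0, 1, 1], y: -1 },
  { x: [1, 0, 1], y: -1 },
  { x: [1, 1, 1], y: 1 },
];

let weights = [0, 0, 0];
for (let epoch = 0; epoch < 20; epoch++) {
  for (const { x, y } of data) {
    if (predict(weights, x) !== y) {
      // On a mistake, nudge the weights toward the correct label.
      weights = weights.map((w, i) => w + y * x[i]);
    }
  }
}
```

The whole training algorithm is "if you got it wrong, add or subtract the input" — which is exactly why the perceptron is such a friendly first model.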
Logistic Regression Example
Logistic regression is extremely similar, except instead of outputting a hard positive or negative immediately, it passes the result through a sigmoid function to produce a probability.
predict(x) {
  return LogisticRegression.sigmoid(
    LogisticRegression.dotProduct(this.weights, x)
  ) >= 0.5 ? 1 : 0;
}
Softmax Regression Example
Softmax regression extends the same idea to multiple classes. Instead of choosing between two outputs, it calculates probabilities for several possible classes and picks the most likely one.
predict(x) {
  const z = this.weights.map(w =>
    SoftmaxRegression.dotProduct(w, x)
  );
  const p = SoftmaxRegression.softmax(z);
  return this.classes[p.indexOf(Math.max(...p))];
}
I think a lot of beginners are introduced to machine learning backward.
People immediately recommend learning tools like NumPy, pandas, or scikit-learn. Those libraries matter, but they are not the interesting part. The interesting part is understanding why any of this works in the first place.
As a beginner myself, learning to spam prebuilt models isn't what got me interested. It was the fundamentals underneath that did.
The code becomes much less intimidating when you know what the model is actually doing under the hood.
Also, shout out to StatQuest for carrying me through statistics and machine learning.