Machine Learning for humans

Over the course of a few months, I’ve been researching the concepts and principles of machine learning: neurons, backpropagation, classification, cross entropy, all of it. To a lot of people, including me a few months ago, all of this seems daunting and insanely complex.

Why is it so complicated?

Machine learning is just another kind of programming. A cool one, admittedly, but there’s no rational reason to treat it as some mystical beast that only the chosen ones can tame.

Since I started researching machine learning, a lot of my programmer friends have been asking me how I understand all this crazy stuff. In actuality, the concepts are relatively simple; they’re just named incredibly unintuitively. Here’s an example:

Say you walked up to me one random day and asked me what I was working on.

I could look up from my laptop, sneer, and say, “Oh, I’m training a classification model with two input neurons, a hidden activation layer, and a single output neuron, using backpropagation.”

You might think that’s some fancy shit.

But in reality, that sentence, minus the pretentious terminology, is “Oh, I’m teaching a program to take two lists of input, sort them, and guess what kind of thing they are. If it’s wrong, it figures out where it went wrong and tries again.”

And thanks to how much machine learning has advanced, I’m probably doing all of this in about 20 lines of code.

Think I’m exaggerating?

In recent years, there’s been a massive boom in machine learning technology.

Thanks to tools like TensorFlow and Keras (which we’ll be using in most of our sample code and tutorials), the public now has easy and free access to simple interfaces for building effective and usable neural networks.
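To give you a sense of what that looks like in practice, here’s a minimal sketch of the model from the sentence above, written with Keras. The dataset (random points labelled by whether they fall inside the unit circle) and the layer sizes are made up purely for illustration, not taken from any particular tutorial.

```python
import numpy as np
from tensorflow import keras

# Toy data, purely for illustration: two numbers per sample,
# labelled 1 if the point falls inside the unit circle, else 0.
X = np.random.rand(200, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

model = keras.Sequential([
    keras.Input(shape=(2,)),                      # two input neurons
    keras.layers.Dense(4, activation="relu"),     # a hidden activation layer
    keras.layers.Dense(1, activation="sigmoid"),  # a single output neuron
])

# Cross entropy loss + gradient descent is the "figures out where it
# went wrong, and tries again" part.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)

# Ask the trained model about a new point.
print(model.predict(np.array([[0.1, 0.2]])))
```

That’s the whole thing: build the layers, pick a loss, call fit.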

It’s no longer an academic niche; it’s a common branch of modern programming. Not only that, it’s something you can teach yourself to do with only a standard background in Python.

Interested? Check out our first decoded tutorial: Neurons.