Lesson 01 - Neurons
\[\textrm{Ann shows L'earnest that his} \\ \textrm{ neurons aren't so impressive.}\]

- Neural networks don’t have neurons
- We say ‘inputs’ and ‘outputs’ — not neurons
- Neural networks don’t need vectors
Neural networks don’t have neurons
People in the machine learning industry love to throw around the word neuron.
And why shouldn’t they? It’s a great word. It brings to mind discoveries in neuroscience and futuristic AI that can think and feel. However, its use in machine learning is redundant. Neural network documentation makes more sense if you remove the word neuron entirely.
- For example:
- “I’m feeding in two 3-dimensional input neurons”
- Can easily become:
- “I’m feeding in two 3-dimensional inputs”
(This applies to general-purpose machine learning — not to models of actual neurons)
Neural networks don’t need vectors
\[\textrm{A line can measure spookiness,} \\ \textrm{ but one must master the square} \\ \textrm{ to list spookiness and freshness.}\]

Now let’s look at the dimensions of the inputs.
Many entry-level machine learning tutorials never explain what they mean by an N-dimensional vector space (looking at you, tensorflow “beginner” tutorial). It’s simple. As in the spooky-fresh comic above, think of a vector space as a shape. An N-dimensional vector space is just a line for N=1, a square for N=2, a cube for N=3, or a hypercube for N>3. It’s best never to speak of hypercubes. Let’s focus on lines and squares.
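A quick sketch in plain Python (the variable names are mine, just for illustration) shows that an “N-dimensional point” is nothing more than a list of N numbers:

```python
point_on_line = [4]         # N=1: one number picks a spot on a line
point_in_square = [4, 12]   # N=2: two numbers pick a spot in a square
point_in_cube = [4, 12, 0]  # N=3: three numbers pick a spot in a cube

# The "dimension" is just the length of the list.
assert len(point_on_line) == 1
assert len(point_in_square) == 2
assert len(point_in_cube) == 3
```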
Vectors explained with memes
A vector is a list of directions in a space.

- In the case of a line (1-dimensional space), a vector like `[4]` is offset from `[0]` by:
  - four steps along the line from zero.
- For a square (2-dimensional space), a vector like `[4,12]` is offset from `[0,0]` by:
  - four steps along the first edge.
  - twelve steps along the second edge.
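The “steps along each edge” idea fits in a few lines of plain Python (the `walk` helper is hypothetical, not from any library):

```python
def walk(origin, vector):
    """Take the steps a vector lists: one step count per edge of the space."""
    return [o + steps for o, steps in zip(origin, vector)]

# Line (1-dimensional): four steps along the line from zero.
assert walk([0], [4]) == [4]

# Square (2-dimensional): four steps along the first edge,
# twelve steps along the second edge.
assert walk([0, 0], [4, 12]) == [4, 12]
```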
But really, a vector is just a list of numbers.
Each input vector should just be called an input list of numbers, an input list, or simply a list. Each number in the input list just describes an input. You add a dimension just by adding one number to the input list. An N-dimensional input vector is just a list of N numbers.
For the 1-dimensional line, the input list has one number like `[0]`. For the 2-dimensional square, the input list has two numbers like `[0,0]`.
Here’s a table of both numbers and both lists for each input in the comic.
| Icon | spooky | fresh | [spooky] | [spooky, fresh] |
|---|---|---|---|---|
|  | 4 | 12 | [4] | [4,12] |
|  | 0 | 3 | [0] | [0,3] |
|  | 0 | 17 | [0] | [0,17] |
The lists giving only `[spooky]` will not answer questions about freshness. You need an input list giving `[spooky, fresh]` to teach a network to find the freshest. To know anything about freshness, each input list must have a number for freshness.
- If each input list has two numbers like this: `[spooky, fresh] = [0,0]`,
- and two of the input lists are `[0,3]` and `[0,17]`,
- then a network can learn that the second input is much fresher than the first.
In general, you need to list one number in the input list for each thing you know about your data, like spookiness, freshness, or even spiciness.
Let’s just call vectors lists
In the first panel, L’earnest describes his “3-dimensional input neuron vector”.

From what we learned about vectors, the inputs are lists of 3 numbers that give all we know about each input. When running a network, a “3-dimensional vector” would just be a list with three numbers like this:
input = [1,2,3]
If these are spooky-fresh-spicy inputs, the above input list could mean spookiness=1, freshness=2, and spiciness=3.

Or, since the meme has spookiness=4, freshness=12, and spiciness=0,
input = [4,12,0]
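Nothing about the list is mysterious: each position is just an agreed-upon meaning. A plain dict (my formulation, not a convention from any library) spells out the same information:

```python
input_list = [4, 12, 0]  # the meme's spooky-fresh-spicy input

# Pair each position with the thing it measures.
named = dict(zip(["spooky", "fresh", "spicy"], input_list))
assert named == {"spooky": 4, "fresh": 12, "spicy": 0}
```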
There is no need to use a word like dimensional to talk about the inputs you feed into your neural network.
Helpful Translations:
- L’earnest would say:
- “I’m feeding in two 3-dimensional input neurons.”
- He should have said:
- “I’m feeding in two inputs of 3-number lists.”
- L’earnest would say:
- “I need another dimension to represent my input vector.”
- He should have said:
- “I need to list another number for each input.”