Going back…like way back. First implemented in a computing machine in 1958, the Perceptron was one of the earliest implementations of a machine-learning algorithm.

<aside> 💡 Check out the Google Colab implementation from raw numpy.

</aside>

Implemented perceptron.

Quick Overview

Super quick history (Cal State Long Beach):

1943

Warren McCulloch and Walter Pitts introduced the ‘neuron’ as a computational unit.

They were coming from a neuroscience view - their model of a neuron included inputs, weights, and an activation function.

Screenshot 2023-10-09 123205.png
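The unit sketched above can be written in a few lines of numpy. This is a minimal illustration of the idea (weighted inputs passed through a step activation), not McCulloch and Pitts' original formulation - the weights and threshold here are arbitrary values chosen for the example:

```python
import numpy as np

def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: weighted sum of inputs, step activation."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

# With weights (1, 1) and threshold 2, the unit fires only when
# both inputs are active - AND-like behavior.
print(mcp_neuron(np.array([1, 1]), np.array([1, 1]), threshold=2))  # 1 (fires)
print(mcp_neuron(np.array([1, 0]), np.array([1, 1]), threshold=2))  # 0 (does not fire)
```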

1949

Donald Hebb pioneered the concept that ‘neurons that fire together wire together’ → as neurons fire with each other, they strengthen their connection.

This led to a model of adaptive learning - i.e., ‘updating weights’.

1957 - 1958

Frank Rosenblatt developed the Perceptron algorithm and implemented it with the Mark 1 Perceptron machine - the first implementation of the algorithm in a computing machine.

Ok - so what does it actually do?

Given a set of data that is linearly separable (the classes can be perfectly divided by a line, or more generally a hyperplane), the Perceptron is guaranteed, with enough time-steps, to find a linear separator that correctly classifies all the datapoints.

Below is an output from the associated Colab implementation - as you can see, it successfully found a linear separator that classifies the data.

PerceptronClassifier_Matplot.png
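As a quick, self-contained illustration of the convergence claim above, here is a minimal sketch of the update rule. This is a toy example with made-up data and is separate from the Colab notebook's implementation:

```python
import numpy as np

# Toy linearly separable data: points above the line y = x are labeled +1,
# points below it are labeled -1.
X = np.array([[0.0, 1.0], [1.0, 2.0], [1.0, 0.0], [2.0, 1.0]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)  # weights
b = 0.0          # bias

# Loop until every point is classified correctly - guaranteed to
# terminate because the data is linearly separable.
converged = False
while not converged:
    converged = True
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
            w += yi * xi                   # nudge the separator toward xi
            b += yi
            converged = False

print(w, b)  # a separating line: sign(w·x + b) now matches y on every point
```

Note that the update is just "add the misclassified point (scaled by its label) to the weights" - the whole algorithm is a loop around that one step.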

Implementation of Code

Feel free to check out the Google Colab implementation.

Below is a commented implementation of the Perceptron algorithm. It follows a few basic steps, so we won’t go into too much detail:

  1. Take in data, labels, and the number of episodes you want to iterate for