Perceptrons and Neural Nets

This project compares the effects of parallel programming on relatively low-resource computers, such as personal computers, and demonstrates the capacity of perceptrons and neural nets for parallelization.

Three levels of parallelization were tested: running sets of perceptrons in parallel, parallelizing the individual calculations within each perceptron, and pure sequential processing.

Furthermore, various widths and depths of neural net were tested.
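To make the width/depth sweep concrete, here is a minimal sketch of how a net of a given width and depth could be constructed for testing. The representation (each perceptron as a `(weights, bias)` pair, random initialization, and the `make_net` helper itself) is an assumption for illustration, not the project's actual code:

```python
import random

def make_net(width, depth, n_inputs):
    """Build a fully connected net of `depth` layers, each with `width`
    perceptrons. A perceptron is stored as a (weights, bias) pair.
    Weights are random; this only illustrates the shapes being swept."""
    net = []
    fan_in = n_inputs
    for _ in range(depth):
        layer = [([random.uniform(-1, 1) for _ in range(fan_in)],
                  random.uniform(-1, 1))
                 for _ in range(width)]
        net.append(layer)
        fan_in = width  # each layer feeds the next
    return net

net = make_net(width=3, depth=2, n_inputs=4)
```

Varying `width` and `depth` over a grid of values would then produce the family of nets whose timings are compared.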

It was hypothesized that running sets of perceptrons in parallel would be fastest. Parallelizing individual calculations appeared to add overhead for little to no gain, while pure sequential processing would cause unnecessary waiting when perceptrons were already in a state ready to be processed.

Requirements:

Parallel perceptron:
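A minimal sketch of the per-perceptron parallel level, evaluating every perceptron in a layer concurrently. The step activation, the `(weights, bias)` representation, and the use of `concurrent.futures.ThreadPoolExecutor` are assumptions for illustration, not the project's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def perceptron(weights, bias, inputs):
    """Single perceptron: weighted sum of inputs plus bias,
    followed by a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def layer_parallel(perceptrons, inputs, workers=4):
    """Evaluate each perceptron in the layer as its own task."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(perceptron, w, b, inputs)
                   for w, b in perceptrons]
        return [f.result() for f in futures]

layer = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.5)]
outputs = layer_parallel(layer, [1.0, 1.0])
```

Note that in CPython the GIL limits thread-level speedup for pure-Python arithmetic; a process pool (or a vectorized library) would be needed to see real gains on a multicore personal computer.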

Sequential perceptron:
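For comparison, a sketch of the sequential baseline: the same perceptron evaluated one at a time in a plain loop, with no pool or task overhead. As above, the step activation and `(weights, bias)` representation are assumptions:

```python
def perceptron(weights, bias, inputs):
    """Single perceptron: weighted sum of inputs plus bias,
    followed by a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def layer_sequential(perceptrons, inputs):
    """Evaluate each perceptron in the layer in turn."""
    return [perceptron(w, b, inputs) for w, b in perceptrons]

layer = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.5)]
outputs = layer_sequential(layer, [1.0, 1.0])
```

Timing this version against the parallel one on the same layer is what exposes the overhead-versus-waiting trade-off described above.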