This directory contains a set of M-files, runnable in both Matlab and Octave. It is a self-contained version of our demonstration implementation of a simple multilayer perceptron (a fully connected feed-forward artificial neural network) with steepest-descent training and two faster heuristic learning algorithms. It contains the bare minimum of features for students to examine and learn how an MLP can be implemented.

There are also lots of wordy comments in these files, because they are intended as course material; just delete the comments if they make the code harder for you, personally, to understand :).

The computations are based on the layer-wise matrix formulation, which is documented at least in the article "Robust Formulations for Training Multilayer Perceptrons" by Tommi Kärkkäinen and Erkki Heikkola, published in Neural Computation, April 1, 2004. I apply the matrix computations to the whole data matrix at once (because I can :)), but for real-life purposes you should probably compute on a single input vector at a time to reduce the memory requirements. Maybe. At least if you have masses of data.

This has been used as material on the courses "Data Mining" and "Computer Vision" held at the Department of Mathematical Information Technology, University of Jyväskylä.

Handle with a healthy amount of suspicion, and drop a note if you find bugs or thinkos; they should be rooted out so that future students get as clear a grasp of the topic as possible.

This code is derived from our research code, published under the MIT license. Let us use the same license for this as well. No warranties or support promises.

Author: Paavo Nieminen
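
P.S. To give a rough idea of what the layer-wise matrix formulation means in practice, below is a minimal sketch of a forward pass over a whole data matrix at once. This is an illustration only, not the actual implementation in these M-files; the function name, variable names, and the choice of tanh hidden units with a linear output layer are made up for the example.

  % Minimal sketch of an MLP forward pass in layer-wise matrix form.
  % X is d-by-N: each column is one input vector, so the whole data
  % set is pushed through every layer as a single matrix product.
  % W is a cell array of weight matrices; a constant row of ones is
  % appended before each layer to realize the bias terms.
  function Y = mlp_forward_sketch(W, X)
    Y = X;
    for l = 1:numel(W)
      Z = W{l} * [Y; ones(1, size(Y, 2))];  % affine map with bias row
      if l < numel(W)
        Y = tanh(Z);                        % hidden-layer activation
      else
        Y = Z;                              % linear output layer
      end
    end
  end

  % Example use: a 2-4-1 network applied to 100 inputs at once.
  % W1 = randn(4, 3); W2 = randn(1, 5);
  % Y = mlp_forward_sketch({W1, W2}, randn(2, 100));

Computing on a single input vector at a time, as suggested above for large data sets, just means calling the same code with an X that has one column.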