This directory contains a set of M-files that run in both Matlab and Octave. They are a demonstration implementation of a simple multilayer perceptron (a fully connected feed-forward artificial neural network) with steepest-descent training, plus two faster heuristic learning algorithms. I wrote plenty of comments because this is course material; just delete them if they make the code harder to read :).

The computations are based on the layer-wise matrix formulation documented, for example, in the article "Robust Formulations for Training Multilayer Perceptrons" by Tommi Kärkkäinen and Erkki Heikkola, published in Neural Computation, April 2004. I apply the matrix computations to the whole data matrix at once (because I can :)), but for real-life purposes, at least when you have masses of data, you should probably process one input vector at a time to reduce the memory requirements.

This is course material for the "Data mining" course held in 2008 at the Department of Mathematical Information Technology, University of Jyväskylä. I fixed a couple of bugs compared to the version shown on the "Computer vision" course a couple of weeks earlier, which can be regarded as proof that bugs may exist in this version as well :). Handle with a healthy amount of suspicion!

This code is derived from our research code, to be published later under the MIT license. As I am the sole author, I suppose it is in my power to license this version for any use. No warranty. I'd be happy to hear if you're using this, so please drop me an email if you do, or if you find bugs. Thanks.

Author: Paavo Nieminen, 2008-11-22
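To illustrate the batch-oriented, layer-wise matrix idea mentioned above (one matrix product per layer, applied to the whole data matrix at once), here is a minimal sketch in Python/NumPy rather than Matlab/Octave, so it can be read stand-alone. The function names, the tanh hidden layer, the linear output layer, and the learning rate are my own illustrative choices, not taken from the M-files:

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    # X is (N, d): every row is one input vector, so a single
    # matrix product handles the whole data set at once.
    H = np.tanh(X @ W1 + b1)   # (N, h) hidden activations
    Y = H @ W2 + b2            # (N, m) linear output layer
    return H, Y

def steepest_descent_step(X, T, W1, b1, W2, b2, lr=0.1):
    # One steepest-descent step on the mean squared error
    # over the whole batch, with gradients via backpropagation.
    N = X.shape[0]
    H, Y = forward(X, W1, b1, W2, b2)
    E = Y - T                          # (N, m) residuals
    gW2 = H.T @ E / N
    gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H**2)     # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dH / N
    gb1 = dH.mean(axis=0)
    return W1 - lr*gW1, b1 - lr*gb1, W2 - lr*gW2, b2 - lr*gb2
```

The memory trade-off noted above shows up directly: `H` is an N-by-h matrix, so with masses of data you would loop over rows (or small chunks) instead of forming it all at once.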