Neural Networks and Dynamical Systems


---

Evolution of Neural Controllers

This project addresses the problem of creating neural controllers for robots.

The first part of this project involved developing controllers for simple robots performing simple tasks: pole balancing and Luxo locomotion. In the second part we move to more complex robots performing the more complex task of pursuit and evasion. Finally, we show that controllers evolved in simulation can transfer to complex robots, such as the AIBO.
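As a minimal sketch of the kind of experiment the first part describes (the project's own controllers and simulators are not reproduced here), the following evolves the four weights of a one-layer neural controller by hillclimbing against the standard cart-pole balancing simulation:

```python
import math, random

def simulate(weights, steps=500):
    """Run the classic cart-pole simulation; return steps survived."""
    x = x_dot = theta_dot = 0.0
    theta = 0.05  # start with a slight pole tilt
    for t in range(steps):
        # One-layer controller: sign of weighted state picks push direction.
        a = sum(w * s for w, s in zip(weights, (x, x_dot, theta, theta_dot)))
        force = 10.0 if a > 0 else -10.0
        # Standard cart-pole constants: cart 1.0 kg, pole 0.1 kg,
        # half-length 0.5 m, gravity 9.8, Euler step 0.02 s.
        costh, sinth = math.cos(theta), math.sin(theta)
        temp = (force + 0.05 * theta_dot ** 2 * sinth) / 1.1
        theta_acc = (9.8 * sinth - costh * temp) / (
            0.5 * (4.0 / 3.0 - 0.1 * costh ** 2 / 1.1))
        x_acc = temp - 0.05 * theta_acc * costh / 1.1
        x, x_dot = x + 0.02 * x_dot, x_dot + 0.02 * x_acc
        theta, theta_dot = theta + 0.02 * theta_dot, theta_dot + 0.02 * theta_acc
        if abs(theta) > 0.5 or abs(x) > 2.4:
            return t  # pole fell over or cart ran off the track
    return steps

def hillclimb(generations=200, seed=0):
    """Evolve the four controller weights by simple hillclimbing."""
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(4)]
    best_fit = simulate(best)
    for _ in range(generations):
        child = [w + rng.gauss(0.0, 0.2) for w in best]
        fit = simulate(child)
        if fit >= best_fit:
            best, best_fit = child, fit
    return best, best_fit
```

Fitness is simply time-to-failure, so any mutant that balances at least as long as its parent replaces it.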


Exact Representations from Feed-Forward Neural Networks

Neural networks have been hailed as the paradigm of choice for problems that require "human-like" perception. Despite their ample success, their main criticism remains their opaqueness: a network may perform its function perfectly, responding correctly to every input it is given, yet its internal workings remain a black box, leaving the user with no knowledge of what is happening inside.

We are interested in different ways to tease neural networks open, to analyze what they are representing and how they are "thinking". In this context we present a novel algorithm to extract internal representations from feed-forward, perceptron-type neural networks. This representation encapsulates a network's function in terms of polytopic decision regions. It is exact, fully describing the network's function; concise, not an incremental collection of approximations; and direct, mapping the network's input straight to its output.

These decision regions can be viewed directly in lower dimensions, or analyzed using bounding rectangles and slicing in higher dimensions. We can even use the algorithm to watch networks during learning, seeing their decision regions and weights change simultaneously.
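The algorithm itself is described in the paper; as an illustration of the underlying idea for hard-threshold units, each hidden activation pattern carves out a polytope in input space (an intersection of half-spaces, one per hidden unit) on which the network's output is constant. A toy XOR network, with weights chosen by hand here, makes this concrete:

```python
from itertools import product

# A hand-built threshold network computing XOR; weights are illustrative.
# A hidden unit fires iff w.x + b > 0; the output unit likewise, applied
# to the hidden activations.
W_hidden = [([1.0, 1.0], -0.5),   # fires when x1 + x2 > 0.5 ("OR")
            ([1.0, 1.0], -1.5)]   # fires when x1 + x2 > 1.5 ("AND")
W_out = ([1.0, -1.0], -0.5)       # OR and not AND -> XOR

def output_for_pattern(pattern):
    """Network output on any input whose hidden units fire as `pattern`."""
    w, b = W_out
    return int(sum(wi * p for wi, p in zip(w, pattern)) + b > 0)

# Each hidden activation pattern corresponds to a polytope (conjunction of
# linear inequalities), and the output is constant on each polytope, so the
# listing below is an exact, direct map from input region to output.
for pattern in product([0, 1], repeat=len(W_hidden)):
    ineqs = [f"{w[0]}*x1 + {w[1]}*x2 + {b} {'>' if p else '<='} 0"
             for (w, b), p in zip(W_hidden, pattern)]
    print(pattern, "->", " and ".join(ineqs),
          "| output =", output_for_pattern(pattern))
```

For smooth sigmoid units the regions are no longer bounded by flat hyperplanes, which is where the full algorithm's polytopic machinery comes in.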


Fractal Neural Networks

We are working on giving neural networks the ability to store and manipulate complex data structures of arbitrary size. Current research focuses on the possibility of representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).

This interactive program allows you to experiment with various neural network weights to see the fractal patterns that the network produces at different pixel resolutions.
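The program's own weight-to-map encoding is not reproduced here; as a sketch of the underlying idea, treat each row of a weight matrix as a 2-D affine contraction and render the IFS attractor with the chaos game. The example weights below happen to give the Sierpinski triangle:

```python
import random

# Hypothetical weight rows, each read as an affine map (a, b, c, d, e, f)
# meaning (x, y) -> (a*x + b*y + e, c*x + d*y + f). Contractive maps
# guarantee a fractal attractor; these three give the Sierpinski triangle.
weights = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.5),
]

def render(weights, size=64, iters=20000, seed=1):
    """Chaos game: iterate randomly chosen maps, plot the orbit."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    x = y = 0.0
    for i in range(iters):
        a, b, c, d, e, f = rng.choice(weights)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:  # skip the transient before the orbit reaches the attractor
            px, py = int(x * (size - 1)), int(y * (size - 1))
            if 0 <= px < size and 0 <= py < size:
                grid[py][px] = 1
    return grid
```

Rendering the same weights at a larger `size` reveals self-similar detail at finer pixel resolutions, which is what the interactive program lets you explore.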


Fractal Neural Networks: The Blind Watchmaker

This interactive program allows you to evolve a fractal pattern to your liking, using a "Blind Watchmaker" paradigm.


Mind's Eye Project

Following our realization that the dynamics of recurrent neural networks generate fractal, IFS-like images, we have reduced the model down to a twelve-weight network. We use hillclimbing to find networks whose dynamics are interesting.
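The project's own network and rendering scheme are not reproduced here; as a hypothetical sketch of the general recipe, iterate a small tanh recurrent network from each pixel's coordinates and color the pixel by how long the state takes to settle. Hillclimbing would then search over such weight settings for images judged interesting:

```python
import math

# Hypothetical two-unit recurrent net (2x2 weight matrix plus two biases);
# the real model uses twelve weights found by hillclimbing.
W = [[1.8, -0.9], [0.9, 1.2]]
B = [0.1, -0.3]

def step(s):
    """One update of the recurrent state."""
    return [math.tanh(W[i][0] * s[0] + W[i][1] * s[1] + B[i]) for i in (0, 1)]

def settle_time(s, max_iters=100, eps=1e-4):
    """Iterate until the state stops moving; the count colors the pixel."""
    for t in range(max_iters):
        nxt = step(s)
        if abs(nxt[0] - s[0]) + abs(nxt[1] - s[1]) < eps:
            return t
        s = nxt
    return max_iters  # never settled: a cycle or chaotic orbit

def render(size=32):
    """Seed the state with each pixel's coordinates in [-1, 1]^2."""
    return [[settle_time([2 * px / (size - 1) - 1, 2 * py / (size - 1) - 1])
             for px in range(size)] for py in range(size)]
```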
Here is a gallery of images.

RAAM: Recursive Auto-Associative Memory

RAAM networks learn to encode variable-sized symbolic structures, such as trees, into fixed-width distributed representations: an encoder compresses child representations into a single parent code, a decoder reconstructs the children from that code, and both are trained together through auto-association. Current research connects RAAM to the fractal work above, representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).
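As a structural sketch of a RAAM encoder (the weights here are random and untrained, so this shows only the shape of the computation, not a working memory), a binary tree of any depth compresses to one fixed-width vector by recursively merging each node's two children:

```python
import math, random

WIDTH = 4  # fixed code width; a real RAAM trains these weights
rng = random.Random(0)
# One layer mapping [left; right; bias] (2*WIDTH + 1 values) to WIDTH values.
W_enc = [[rng.gauss(0.0, 0.5) for _ in range(2 * WIDTH + 1)]
         for _ in range(WIDTH)]

def leaf(i):
    """Pre-assigned one-hot code for terminal symbol i."""
    return [1.0 if j == i else 0.0 for j in range(WIDTH)]

def encode(node):
    """Compress a tree (nested tuples of leaves) to one WIDTH-vector."""
    if isinstance(node, tuple):
        left, right = encode(node[0]), encode(node[1])
        x = left + right + [1.0]  # concatenated children plus bias input
        return [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                for row in W_enc]
    return node  # a leaf is already a WIDTH-vector

tree = ((leaf(0), leaf(1)), leaf(2))  # the tree ((A B) C)
code = encode(tree)  # one WIDTH-vector for the entire tree
```

Training by auto-association would adjust the encoder, together with a matching decoder, until decoding reproduces the children; the fractal work above studies what the iterated decoder's attractor can represent.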


The GNARL Project

The GNARL Project combines research in recurrent neural networks and evolutionary methods of machine learning. The Project takes its name from the GNARL (GeNeralized Acquisition of Recurrent Links) engine [Angeline, Saunders, Pollack 1994], which is the central tool used to carry out our experiments.

With regard to neural networks, the Project investigates the dynamics and capabilities of recurrent neural networks, focusing primarily on temporally-oriented tasks. With regard to evolutionary methods, the Project continues to expand and enhance the capabilities of the GNARL system.
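GNARL's actual operators are defined in [Angeline, Saunders, Pollack 1994]; as a sketch of the general idea (the class and operator set below are assumptions, not the engine's own), structural mutation lets both the weights and the topology of a recurrent network vary under selection:

```python
import random

class Net:
    """A genome: input/output node counts, hidden nodes, weighted links."""
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.hidden = set()
        self.links = {}   # (src, dst) -> weight; (dst, src) may also exist
        self.next_id = n_in + n_out

def mutate(net, rng, weight_sigma=0.5):
    """Return a mutated copy: perturb weights, or edit the topology."""
    child = Net(net.n_in, net.n_out)
    child.hidden = set(net.hidden)
    child.links = dict(net.links)
    child.next_id = net.next_id
    op = rng.choice(["weights", "add_node", "add_link", "del_link"])
    if op == "weights":
        for k in child.links:
            child.links[k] += rng.gauss(0.0, weight_sigma)
    elif op == "add_node":
        child.hidden.add(child.next_id)
        child.next_id += 1
    elif op == "add_link":
        nodes = (list(range(net.n_in)) + list(child.hidden) +
                 list(range(net.n_in, net.n_in + net.n_out)))
        src, dst = rng.choice(nodes), rng.choice(nodes)
        child.links[(src, dst)] = rng.gauss(0.0, 1.0)  # recurrence allowed
    elif op == "del_link" and child.links:
        del child.links[rng.choice(list(child.links))]
    return child
```

Because links may point backwards or to their own node, recurrent structure emerges from the same mutations that grow feed-forward structure, with selection deciding which topologies survive.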

---


Comments?
demoweb@demo.cs.brandeis.edu