
Research Projects


---

Exact Representations from Feed-Forward Neural Networks

Neural networks have been hailed as the paradigm of choice for problems that require "human-like" perception. Despite their ample success, their main criticism remains their opaqueness. A network may perform its function perfectly, responding correctly to every input it is given, yet its internal workings can still be construed as a black box, leaving the user without knowledge of what is happening inside.

We are interested in different ways to tease neural networks open, to analyze what they are representing and how they are "thinking". In this context we present a novel algorithm to extract internal representations from feed-forward, perceptron-type neural networks. The representation encapsulates a network's function in terms of polytopic decision regions. It is exact, fully describing the network's function; concise, not an incremental collection of approximations; and direct, mapping the network's input straight to its output.

These decision regions can be viewed directly in lower dimensions, or analyzed in higher dimensions using bounding rectangles and slicing. We can even use the algorithm to watch networks during learning, seeing their decision regions and weights change simultaneously.
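As a toy illustration of the idea (not the extraction algorithm itself; the network and all weights below are invented), one can enumerate the activation patterns of a small threshold network and read off the half-plane inequalities that bound each polytopic decision region:

```python
# Toy sketch: each hidden threshold unit carves the plane with a line, so every
# activation pattern corresponds to a polytope (an intersection of half-planes)
# on which the network's output is constant. Weights are invented for illustration.
import itertools

# Hidden layer: unit fires when w . x + b >= 0.
hidden = [((1.0, 0.0), -0.5),   # fires when x >= 0.5
          ((0.0, 1.0), -0.5)]   # fires when y >= 0.5

out_w, out_b = (1.0, 1.0), -1.5  # output unit: AND of the two hidden units

def regions():
    """Map each hidden activation pattern to (bounding inequalities, output)."""
    result = {}
    for pattern in itertools.product([0, 1], repeat=len(hidden)):
        # The region is the intersection of half-planes: w.x + b >= 0 where
        # the unit is on, w.x + b < 0 where it is off.
        ineqs = [("w.x+b >= 0" if on else "w.x+b < 0", w, b)
                 for on, (w, b) in zip(pattern, hidden)]
        y = 1 if sum(o * ow for o, ow in zip(pattern, out_w)) + out_b >= 0 else 0
        result[pattern] = (ineqs, y)
    return result

for pattern, (ineqs, y) in regions().items():
    print(pattern, "->", y)   # only the (1, 1) region outputs 1
```

The four patterns tile the input plane into four rectangles; the mapping from input to output never passes through the hidden units again, which is the sense in which such a representation is direct.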


Computer Evolution of Buildable Objects

We believe that not just the software, but also the physical body of a robot could be the result of an evolutionary process.

A step in this direction is the evolution of buildable LEGO structures, designed by the computer through a combination of genetic algorithms and physical simulation.

Examples of evolved structures include the LEGO bridge, which is supported at one end only and reaches across to the other; a crane; and a table.

See also our project on the evolution of machines.


Internet Community of Evolving Learners

The CEL (Community of Evolving Learners) project is an Internet-based system where students engage in two-player, competitive, educational games. These games are straightforward and provide practice at basic skills (e.g. spelling, typing, arithmetic, geography).

The CEL project has many goals. First and foremost, we hope to demonstrate that the Internet can be used to enable a virtual learning community, where learners in disparate locations can come together and help each other advance. Second, the computer allows us to monitor students' activities and build a comprehensive model of a student's abilities. We can use student models to select individualized problems for users, based on their past interactions with the system. Third, student models can be used computationally to determine appropriate pairings of players, allowing us to control participants' win rates.
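One simple way a student model could drive pairing, sketched for illustration only (the ratings and the Elo-style formula here are our stand-ins, not the CEL system's actual model): predict each candidate pairing's win probability and choose the opponent closest to a target rate.

```python
# Illustrative sketch: win-rate-controlled pairing from Elo-style ratings.
# All numbers are invented; a real student model would be richer than one scalar.
def win_prob(r_a, r_b):
    """Elo-style probability that player A beats player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def pick_opponent(student, pool, target=0.5):
    """Choose the opponent giving the student a win rate nearest the target."""
    return min(pool, key=lambda opp: abs(win_prob(student, opp) - target))

# A 1200-rated student paired against a pool, aiming for roughly even odds:
print(pick_opponent(1200, [900, 1150, 1400], target=0.5))  # -> 1150
```

Raising or lowering `target` is the control knob: a struggling student can be quietly given a higher expected win rate without anyone being told.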

It is recognized that the use of competition in education is controversial. However, if participants are anonymous and success rates are controlled, many of the more contentious issues may be outweighed by the powerful motivational effects that competition can provide.


Coevolving the "Ideal" Trainer

Coevolution provides a framework for implementing search heuristics that are more elaborate than those driving the exploration of the state space in canonical evolutionary systems. However, some drawbacks must also be overcome to ensure continued progress over the long term. This project introduces the concept of coevolutionary learning and presents a search procedure that successfully addresses the underlying impediments to coevolutionary search. The algorithm is presented in the context of evolving cellular automata rules for a classification task. This work resulted in a significant improvement over the previously known best rules for that task.
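To make the classification task concrete: in density classification, a one-dimensional cellular automaton should settle to all ones exactly when the initial lattice is majority-ones. The local majority rule below is only a weak illustrative classifier, not one of the evolved radius-3 rules:

```python
# Toy version of the density-classification task. The evolved rules in this
# work use a radius-3 neighborhood; this radius-1 majority vote merely shows
# the setup and is known to be a poor classifier on hard initial conditions.
def step(cells):
    """One synchronous update: each cell takes the majority of its neighborhood."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def classify(cells, steps=50):
    """Run the CA, then read the answer off the (possibly blocky) final state."""
    for _ in range(steps):
        cells = step(cells)
    return 1 if sum(cells) * 2 > len(cells) else 0

print(classify([1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1]))  # dense initial state -> 1
```

Evolving rules for this task is hard precisely because a cell sees only its local neighborhood yet must contribute to a global count, which is what makes it a good benchmark for coevolutionary search.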

Embodied Evolution

We define embodied evolution (EE) as evolution taking place in a population of robots. Further, we stipulate that the evolutionary algorithm is to execute in a distributed and asynchronous manner within the population, as in natural evolution. Thus, algorithms that centrally maintain and manipulate the specifications of individual agents are not permitted. We wish to create a population of physical robots that perform their tasks and evolve hands-free – without human intervention of any kind, as Harvey (1995) articulates. Here, we introduce our conception of embodied evolution and report the initial results of experiments that provide the first proof-of-concept for EE.


Automated Modular Design

As the complexity of objects increases, they become more and more difficult to design. One way of making the design process more manageable is to re-use parts. Both naturally evolved systems and man-made objects are largely modular.

In this project we investigate the evolution of a system that builds objects modularly.



Learning Backgammon

Following Tesauro's work on TD-Gammon, we used simple hill-climbing in a 4000-parameter feed-forward network to develop a competitive backgammon evaluation function. No back-propagation, reinforcement, or temporal-difference learning methods were employed. Instead, we start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger, changing the champion's weights toward the challenger's when the challenger wins. The success of so simple a learning method indicates that backgammon deserves further study to understand why learning succeeds so readily in this domain.
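The training loop can be sketched in a few lines. Everything below is a stand-in for illustration: the toy `play()` function replaces real backgammon games, and `TARGET`, the mutation size, and the 5% update are invented parameters, not the project's exact settings.

```python
# Minimal sketch of champion-vs-challenger hill-climbing. A toy objective
# stands in for backgammon: the player whose weights lie nearer a hidden
# "good" weight vector usually wins the stand-in match.
import random

random.seed(0)
TARGET = [0.3, -0.7, 0.5]  # hidden "good" weights, invented for this sketch

def play(champ, chall):
    """Stand-in for a noisy match between two weight vectors."""
    def loss(w):
        return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))
    return "chall" if loss(chall) + random.gauss(0, 0.01) < loss(champ) else "champ"

champion = [0.0, 0.0, 0.0]  # start, as in the project, from all-zero weights
for _ in range(2000):
    challenger = [w + random.gauss(0, 0.05) for w in champion]
    if play(champion, challenger) == "chall":
        # Move the champion a small step toward the winning challenger rather
        # than replacing it outright, which damps the effect of lucky wins.
        champion = [0.95 * c + 0.05 * h for c, h in zip(champion, challenger)]

print([round(w, 2) for w in champion])
```

The striking point of the original result is that this loop, with no gradient information at all, was enough to reach competitive backgammon play.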


The GNARL Project

The GNARL Project combines research in recurrent neural networks and evolutionary methods of machine learning. The Project takes its name from the GNARL (GeNeralized Acquisition of Recurrent Links) engine [Angeline, Saunders, Pollack 1994], which is the central tool used to carry out our experiments.

With regard to neural networks, the Project investigates the dynamics and capabilities of recurrent neural networks, focusing primarily on temporally oriented tasks. With regard to evolutionary methods, the Project continues to expand and enhance the capabilities of the GNARL system.


Fractal Neural Networks

We are working on giving neural networks the ability to store and manipulate complex data structures of arbitrary size. Current research focuses on the possibility of representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).

This interactive program allows you to experiment with various neural network weights to see the fractal patterns that the network produces at different pixel resolutions.
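The weights-as-IFS idea can be sketched with the standard random-iteration ("chaos game") rendering. The three affine maps below define the well-known Sierpinski triangle and are chosen for illustration; they are not weights taken from any of our networks.

```python
# Sketch: treat a set of contractive affine maps as an iterated function
# system and render its attractor by random iteration (the "chaos game").
import random

random.seed(1)
MAPS = [lambda x, y: (0.5 * x,        0.5 * y),
        lambda x, y: (0.5 * x + 0.5,  0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5)]   # Sierpinski triangle

def attractor(n=20000, burn=20):
    """Apply randomly chosen maps; keep the tail, which lies on the attractor."""
    pts, (x, y) = [], (random.random(), random.random())
    for i in range(n):
        x, y = random.choice(MAPS)(x, y)
        if i >= burn:                 # discard transients before the attractor
            pts.append((x, y))
    return pts

pts = attractor()
# Every kept point stays inside the unit square that contains the triangle.
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # -> True
```

In the fractal-network setting, the interesting move is to read such maps off a trained network's weights, so that the attractor becomes an addressable store for many tree structures at once.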


Fractal Neural Networks: The Blind Watchmaker

This interactive program allows you to evolve a fractal pattern to your liking, using a "Blind Watchmaker" paradigm.


Evolution of Machines

 

Continuing the notion of evolution of buildable objects, we focus this research on the integrated design of machines and their controllers – bodies and brains. Can bodies and brains evolve together, stimulating and constraining each other, to yield new machines and control concepts not foreseen by human engineers? Can evolutionary principles be used to automate machine design? In this project we evolve machines and brains – thousands of robots upon thousands of generations – and then “print” them into reality, to give birth to real machines created by virtual evolution. Some results can be seen here.

See also Computer Evolution of Buildable Objects

 


Mind's Eye Project

Following our realization that the dynamics of recurrent neural networks generate fractal, IFS-like images, we have reduced the model to a twelve-weight network. We use hill-climbing to find networks whose dynamics are interesting.

Here is a gallery of images.

RAAM: Recursive Auto-Associative Memory

RAAM (Recursive Auto-Associative Memory) gives neural networks a way to store and manipulate compositional data structures of arbitrary size in fixed-width vectors. A single network is trained auto-associatively: an encoder compresses the representations of a node's children into one parent vector, and a decoder reconstructs the children from it, so trees of any depth can be composed and decomposed recursively.
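The structural trick is easy to show, even without training. In the sketch below the encoder weights are random (so decoding would not be faithful; a real RAAM trains encoder and decoder together until reconstruction succeeds), and the dimension and leaf codes are invented:

```python
# Structural sketch of RAAM composition: two fixed-width child vectors are
# compressed into one fixed-width parent vector, so arbitrary-depth trees
# always occupy the same number of units. Weights are random, for shape only.
import random

random.seed(2)
DIM = 4  # every representation, leaf or tree, lives in this fixed width

W = [[random.uniform(-1, 1) for _ in range(2 * DIM)] for _ in range(DIM)]

def encode(left, right):
    """Compress two DIM-wide child vectors into one DIM-wide parent vector."""
    x = left + right
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

leaf_a = [1.0, 0.0, 0.0, 0.0]
leaf_b = [0.0, 1.0, 0.0, 0.0]

# The tree ((a b) b) becomes a single fixed-width vector:
tree = encode(encode(leaf_a, leaf_b), leaf_b)
print(len(tree))  # -> 4
```

The connection to the fractal work is that repeatedly applying such an encoder is itself an iterated map, which is how fractal attractors enter the picture.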


Simulated Hockey

We have developed a simulated hockey game called Shock as a test bed for studying adaptive behavior and evolution of robot controllers. Using evolutionary and co-evolutionary techniques, we have built up a battery of animat players that engage in one-on-one matches. Human users can challenge the evolved players in this unusual near-frictionless environment, to test their own skill against the animats or to act as trainers in a unique human-machine coevolutionary process.


Coevolutionary Genetic Programming

Using a competitive fitness, we were able to evolve an elegant solution to the intertwined spirals problem. The evolved function uses 52 GP primitives and breaks the plane into two subproblems that combine to form a spiral. You can play with a live MATLAB demo of this work!
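For reference, the benchmark itself is easy to generate; the constants below follow the common 97-points-per-spiral formulation of the two-spirals problem:

```python
# The standard two-intertwined-spirals dataset: two classes of points lying on
# interlocking spirals, a classic hard benchmark for learned classifiers.
import math

def two_spirals(n=97):
    """Return labelled points on two interlocking spirals in the plane."""
    pts = []
    for i in range(n):
        angle = i * math.pi / 16.0
        radius = 6.5 * (104 - i) / 104.0
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        pts.append(((x, y), +1))     # first spiral
        pts.append(((-x, -y), -1))   # second spiral, rotated 180 degrees
    return pts

data = two_spirals()
print(len(data))  # -> 194 points, 97 per class
```

The task is hard because no small number of axis-aligned or linear cuts separates the classes, which is what makes an evolved 52-primitive solution notable.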

Symbiosis and its Role in Evolutionary Adaptation

Our investigations concern the role of symbiosis as an enabling mechanism in evolutionary adaptation. Biological evidence suggests that symbiosis has been a critical factor in several 'major transitions in evolution'. "Symbiogenesis" is the genesis of new species via the genetic integration of symbionts. This combination of pre-adapted parts is a fundamentally different source of evolutionary innovation from the Darwinian, gradual accumulation of 'blind' variations. Though the fact of symbiogenesis in the natural history of evolution is established, the role of symbiosis is not yet integrated into our theories of adaptation. In fact, mutually beneficial relationships are generally treated as a curiosity, an aberration on the otherwise relentless path of mutually exclusive competition. Our research attempts to understand when and how symbiosis will be significant in evolutionary adaptation, and to provide a coherent model of evolutionary adaptation that explains the balance between individualism and higher levels of selection.

Internet Online Genetic Algorithms

Genetic programming is a computer learning method that imitates nature's selection process to lead a population of computer programs towards improving levels of performance.

Tron is a dynamic game that is difficult for computers to learn. Playing against itself, a computer may believe it is doing a good job when it is not, because it lacks a benchmark (i.e., a really good player) against which to compare itself.

In this experiment, we have put a genetic learning algorithm online. A "background" GA generates players by having the computer play itself. A "foreground" GA leads the evolutionary process, evaluating players by their performance against real people.
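The two-tier arrangement can be caricatured in a few lines. Everything here is a stand-in invented for illustration — the genome encoding, the self-play fitness, and the `human_score` function are not the real Tron system:

```python
# Schematic sketch of a background/foreground GA: the background stage breeds
# players cheaply by self-play; the foreground stage re-ranks the survivors by
# their recorded results against people. All functions below are stand-ins.
import random

random.seed(3)

def self_play_fitness(genome):
    """Stand-in for round-robin self-play scoring inside the background GA."""
    return sum(genome) + random.gauss(0, 0.1)

def human_score(genome):
    """Stand-in for the win rate recorded against real online players."""
    return sum(g * g for g in genome)

def background_ga(pop, gens=30):
    """Truncation selection plus mutation, driven only by self-play fitness."""
    for _ in range(gens):
        pop.sort(key=self_play_fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        pop = parents + [[g + random.gauss(0, 0.05) for g in p] for p in parents]
    return pop

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
candidates = background_ga(population)
# Foreground: promote the candidate that actually beats people most often.
champion = max(candidates, key=human_score)
print([round(g, 2) for g in champion])
```

The point of the split is exactly the benchmark problem noted above: self-play supplies cheap, plentiful evaluations, while human games supply the ground truth that self-play alone cannot.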



---


Comments?
demoweb@demo.cs.brandeis.edu