Neural networks have been hailed as the paradigm of choice for problems that require "human-like" perception. Despite their ample success, their main criticism remains their opacity. A network may perform its function perfectly, responding correctly to every input it is given, yet its internal workings remain a black box, leaving the user with no knowledge of what is happening inside.
We are interested in different ways to tease neural networks open, to analyse what they are representing and how they are "thinking". In this context we present a novel algorithm to extract internal representations from feed-forward, perceptron-type neural networks. This representation encapsulates a network's function in terms of polytopic decision regions. Its key features are that it is exact (fully describing a network's function), concise (not an incremental collection of approximations), and direct (mapping a network's input directly to its output).
These decision regions can be viewed directly in lower dimensions, or analyzed using bounding rectangles and slicing in higher dimensions. We can even use the algorithm to watch networks during learning, seeing their decision regions and weights change simultaneously.
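The geometric idea can be illustrated with a toy sketch: in a threshold network, each hidden unit's state corresponds to a linear inequality on the input, so each pattern of hidden states labels one polytopic region on which the output is constant. The network and weights below are illustrative assumptions, not the algorithm from the project, and the regions are recovered here by sampling rather than exact extraction.

```python
from itertools import product

# A tiny feed-forward threshold ("perceptron-type") network with 2 inputs,
# 2 hidden units, and 1 output. Weights are illustrative assumptions.
HIDDEN = [((1.0, -1.0), 0.0),   # (weights, bias) of hidden unit 0
          ((1.0, 1.0), -0.5)]   # hidden unit 1
OUTPUT = ((1.0, -1.0), 0.0)     # output unit over the hidden activations

def step(z):
    return 1 if z >= 0 else 0

def activation_pattern(x):
    """Tuple of hidden-unit states; each pattern labels one polytope,
    since each unit's state is a linear inequality on the input."""
    return tuple(step(w[0]*x[0] + w[1]*x[1] + b) for (w, b) in HIDDEN)

def network_output(x):
    h = activation_pattern(x)
    (w, b) = OUTPUT
    return step(w[0]*h[0] + w[1]*h[1] + b)

# Sample a grid and group points by pattern: every point in a group lies
# in the same polytopic decision region and receives the same output.
regions = {}
for i, j in product(range(-10, 11), repeat=2):
    x = (i / 10.0, j / 10.0)
    regions.setdefault(activation_pattern(x), []).append(x)

for pattern, points in sorted(regions.items()):
    print(pattern, network_output(points[0]), len(points))
```

With two hidden hyperplanes in general position, the input square splits into four polytopic regions, each with a single output value.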
We believe that not just the software, but also the physical body of a robot could be the result of an evolutionary process.
A step in this direction is the evolution of buildable Lego structures, designed by the computer through a combination of genetic algorithms and physical simulation.
Examples of evolved structures include a Lego bridge that supports its weight on one side to reach the other, a crane, and a table.
See also our project on the evolution of machines.
The CEL (Community of Evolving Learners) project is an Internet-based system where students engage in two-player, competitive, educational games. These games are straightforward and provide practice at basic skills (e.g. spelling, typing, arithmetic, geography).
The CEL project has many goals. First and foremost, we hope to demonstrate that the Internet can be used to enable a virtual learning community, where learners in disparate locations can come together and help each other advance. Second, the computer allows us to monitor students' activities and build a comprehensive model of a student's abilities. We can use student models to select individualized problems for users, based on their past interactions with the system. Third, student models can be used computationally to determine appropriate pairings of players, allowing us to control participants' win rates.
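One way to realize the third goal above is an Elo-style rating model: keep a skill rating per student, predict win probability from the rating gap, and pick the available opponent whose predicted outcome is closest to a target win rate. This is a hedged sketch of the idea only; the names, constants, and logistic model are illustrative assumptions, not CEL's actual student model.

```python
# Minimal Elo-style sketch of win-rate-controlled pairing.
# All names and constants are illustrative, not from the CEL system.
def win_probability(rating_a, rating_b):
    """Logistic expectation: probability that A beats B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_rating(rating, expected, actual, k=32.0):
    """Nudge a rating toward the observed result after a game."""
    return rating + k * (actual - expected)

def pick_opponent(player, pool, ratings, target=0.5):
    """Choose the opponent giving `player` a win rate nearest `target`."""
    return min((o for o in pool if o != player),
               key=lambda o: abs(win_probability(ratings[player],
                                                 ratings[o]) - target))

ratings = {"ann": 1200.0, "bob": 1400.0, "cal": 1250.0}
opponent = pick_opponent("ann", ratings, ratings)
print(opponent)  # cal: |P(ann beats cal) - 0.5| is smallest
```

Raising or lowering `target` would deliberately tilt a participant's win rate, which is how controlled success rates could be implemented on top of the student model.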
The use of competition in education is, admittedly, controversial. However, if participants are anonymous and success rates are controlled, many of the more contentious issues may give way to the powerful motivational effects that competition can provide.
We define embodied evolution (EE) as evolution taking place in a population of robots. Further, we stipulate that the evolutionary algorithm is to execute in a distributed and asynchronous manner within the population, as in natural evolution. Thus, algorithms that centrally maintain and manipulate the specifications of individual agents are not permitted. We wish to create a population of physical robots that perform their tasks and evolve hands-free without human intervention of any kind, as Harvey (1995) articulates. Here, we introduce our conception of embodied evolution and report the initial results of experiments that provide the first proof-of-concept for EE.
As the complexity of objects increases, they become more and more difficult to design. One way of making the design process more manageable is to re-use parts. Both naturally grown systems and man-made objects are largely modular. In this project we investigate the evolution of a system that builds objects modularly.
Following Tesauro's work on TD-Gammon, we used simple hill-climbing in a 4000-parameter feed-forward network to develop a competitive backgammon evaluation function. No back-propagation, reinforcement, or temporal difference learning methods were employed. Instead we start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger, changing the weights when the challenger wins. The success of so simple a learning method suggests that backgammon deserves further study to understand why learning is so successful in this domain.
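The champion/challenger loop above can be sketched in a few lines. Since actual backgammon play is out of scope here, a toy quadratic "game" stands in for it: the player whose weights score higher wins. The network size, mutation scale, and scoring function are illustrative assumptions.

```python
import random

random.seed(0)

N_WEIGHTS = 20          # the real network had ~4000 parameters
MUTATION_STD = 0.05

def mutate(weights):
    """Challenger = champion plus small Gaussian noise on every weight."""
    return [w + random.gauss(0.0, MUTATION_STD) for w in weights]

def plays_better(challenger, champion):
    # Stand-in for "challenger beats champion at the game": here, being
    # closer to an (assumed) target weight vector of all ones.
    score = lambda ws: -sum((w - 1.0) ** 2 for w in ws)
    return score(challenger) > score(champion)

champion = [0.0] * N_WEIGHTS    # start from all-zero weights, as above
for generation in range(2000):
    challenger = mutate(champion)
    if plays_better(challenger, champion):
        champion = challenger   # the winner becomes the new champion

print(sum((w - 1.0) ** 2 for w in champion))  # small after hill-climbing
```

The noisy, relative nature of real game outcomes is exactly what makes it interesting that so naive a loop succeeds at backgammon.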
The GNARL Project combines research in recurrent neural networks and evolutionary methods of machine learning. The Project takes its name from the GNARL (GeNeralized Acquisition of Recurrent Links) engine [Angeline, Saunders, Pollack 1994], which is the central tool used to carry out our experiments.
With regard to neural networks, the Project investigates the dynamics and capabilities of recurrent neural networks, focusing primarily on temporally oriented tasks. With regard to evolutionary methods, the Project continues to expand and enhance the capabilities of the GNARL system.
We are working on giving neural networks the ability to store and manipulate complex data structures of arbitrary size. Current research focuses on the possibility of representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).
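The IFS idea can be illustrated with the standard "chaos game": interpret a (hypothetical) set of weights as a few contractive affine maps of the plane and plot the attractor by repeatedly applying a randomly chosen map. The particular coefficients below produce a Sierpinski-style attractor; they are illustrative, not weights from the project.

```python
import random

# Each map: ((a, b, c, d), (e, f)) meaning (x, y) -> (ax+by+e, cx+dy+f).
# Three contractions with scale 0.5 give a Sierpinski-triangle attractor.
MAPS = [
    ((0.5, 0.0, 0.0, 0.5), (0.0, 0.0)),
    ((0.5, 0.0, 0.0, 0.5), (0.5, 0.0)),
    ((0.5, 0.0, 0.0, 0.5), (0.25, 0.5)),
]

def apply_map(m, x, y):
    (a, b, c, d), (e, f) = m
    return a * x + b * y + e, c * x + d * y + f

def chaos_game(steps=20000, size=40):
    """Render the IFS attractor on a character grid via random iteration."""
    random.seed(1)
    grid = [[" "] * size for _ in range(size)]
    x = y = 0.0
    for _ in range(steps):
        x, y = apply_map(random.choice(MAPS), x, y)
        grid[int(y * (size - 1))][int(x * (size - 1))] = "#"
    return grid

for row in reversed(chaos_game()):
    print("".join(row))
```

In the research itself the coefficients come from network weights, so distinct weight settings address distinct fractal attractors, which is what makes the scheme a candidate for packing many tree structures into one network.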
This interactive program allows you to experiment with various neural network weights to see the fractal patterns that the network produces at different pixel resolutions.
This interactive program allows you to evolve a fractal pattern to your liking, using a "Blind Watchmaker" paradigm.
Evolution of Machines Continuing the notion of evolution of buildable objects, we focus this research on the integrated design of machines and their controllers: bodies and brains. Can bodies and brains evolve together, stimulating and constraining each other, to yield new machines and control concepts not foreseen by human engineers? Can evolutionary principles be used to automate machine design? In this project we evolve machines and brains, thousands of robots over thousands of generations, and then "print" them into reality, giving birth to real machines created by virtual evolution. Some results can be seen here. See also Computer Evolution of Buildable Objects.
We have developed a simulated hockey game called Shock as a test bed for studying adaptive behavior and evolution of robot controllers. Using evolutionary and co-evolutionary techniques, we have built up a battery of animat players that engage in one-on-one matches. Human users can challenge the evolved players in this unusual near-frictionless environment, to test their own skill against the animats or to act as trainers in a unique human-machine coevolutionary process.
Genetic programming is a computer learning method that imitates nature's selection process to lead a population of computer programs towards improving levels of performance.
Tron is a dynamic game that is difficult for computers to learn. Playing against itself, a computer may believe it is doing a good job when it is not, because it lacks a benchmark (i.e., a really good player) against which to compare.
In this experiment, we have put a genetic learning algorithm online. A "background" GA generates players by having the computer play against itself. A "foreground" GA leads the evolutionary process, evaluating players by their performance against real people.
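The two-level scheme can be sketched schematically, with Tron replaced by stubs: `self_play_score` stands in for the background GA's self-play evaluation, and `versus_humans_score` for the foreground evaluation against real people. All names, numbers, and the toy fitness are illustrative assumptions, not the deployed system.

```python
import random

random.seed(2)

def random_player():
    return [random.uniform(-1, 1) for _ in range(8)]   # a weight vector

def mutate(p):
    return [w + random.gauss(0, 0.1) for w in p]

def self_play_score(p):
    # Stub: in the real system this is the win rate from games against
    # other evolved players; here, a toy function of the weights.
    return -sum(w * w for w in p)

def versus_humans_score(p):
    # Stub for performance against real people, assumed to correlate
    # with (but differ noisily from) self-play ability.
    return self_play_score(p) + random.gauss(0, 0.05)

# Background GA: evolve a pool by self-play.
pool = [random_player() for _ in range(20)]
for gen in range(100):
    pool.sort(key=self_play_score, reverse=True)
    pool = pool[:10] + [mutate(random.choice(pool[:10])) for _ in range(10)]

# Foreground step: rank the background survivors by human-game results.
champion = max(pool, key=versus_humans_score)
```

The point of the split is that self-play supplies cheap, plentiful evaluations while scarce human games provide the external benchmark that self-play alone lacks.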