The Golem Project

Automatic Design and Manufacture of Robotic Lifeforms
Hod Lipson and Jordan B. Pollack

FAQ and Answers to Common Journalist Questions:

Who are you guys?

We would appreciate you representing it as work by both Dr. Hod Lipson and Dr. Jordan Pollack. Hod Lipson is a Research Scientist at Brandeis with a recent Ph.D. in Mechanical Engineering from the Technion. He did most of the work. Jordan Pollack is an associate professor of computer science and complex systems at Brandeis who directs the Dynamical and Evolutionary Machine Organization (DEMO) laboratory. Jordan has a long history of contributions in scientific fields like AI, cognitive science, neural networks, machine learning, evolutionary computation, and robotics, and is also an entrepreneur and an advisor to several internet companies. You can get pictures of us from our homepages or the official photo here.

The general laboratory site is demo.cs.brandeis.edu; information on this project "Genetically Organized Lifelike Electro-Mechanics" (GOLEM) is at demo.cs.brandeis.edu/golem.

Where did this work come from?

Jordan started planning a project in body-brain coevolution in 1992 while at Ohio State University, and developed an extensive portfolio in coevolutionary learning through the 1990s, notably in game playing, language, and problem solving, funded by ONR. A laboratory infrastructure for robotics (the machine shop and electronics facilities) was funded by NSF when he moved to Brandeis in 1996. The most recent work relevant to the GOLEM project was published by Pablo Funes and Pollack in the Artificial Life Journal in 1999, and involved the evolution of buildable structures using Lego bricks.

Lipson has an extensive background in Computer-Aided Design and is an expert in the field of Design Automation, the use of computers to amplify human design activities. He joined the DEMO Lab in 1998. They received funding from DARPA in July 1999, under a long-range exploratory program called "Information Technology Expeditions", and this Nature article reports on the accomplishments of the first six months of their project.

What is the significance of the work?

Only history will tell. Our view is that much of robotics for 50 years has been done wrong: a "body" is built by engineers, and the fact that a human can control the puppet leads them to assume a control system or computer program can do the same. In nature, there is never a body without a brain, and that constraint is maintained in our work. While the bodies and brains are simple now, this is just the beginning of our long-range project.

We also think "fully automated design" and manufacturing point to a solution to the problem of robot economics, where the cost of designing and constructing robots has made them prohibitive to everyone except Hollywood and the military. If we can get human costs out, we might be able to have robots which don't have to be justified through mass production like ATMs and inkjet printers.

Finally, the field of computer-aided design can be brought to the next level of "semi-automated design", where automatic design programs work hand in hand with human engineers to speed up the overall process. We see prototypes in Lipson's Ph.D. thesis on the sketch system, and in our evo-cad system for interactive design of Lego structures.

Where do you plan to take this work next?

We are aiming for machines with many more moving parts and more complex task structures: robots which are truly embedded in, and react to, their environment. The current work was simply a publication of the first-ever prototype of body-brain coevolution and automatic manufacture. And it was hardly co-evolution at all. And the machines have no sensors and are not reactive to their environment. We are building more complex virtual environments than the infinite plane, giving robots simulated sensors, and setting harder tasks than locomotion. We are looking into generative schemata rather than just mutative structures, using grammatical formalisms like L-systems.

We are looking at environments with other robots in them playing survival games. And we are importing evolutionary, co-evolutionary, and symbiotic techniques from our other research projects. We are also trying to hire an electronics technician to help us integrate more motors and sensors.

What are the long-term applications for systems like yours?

While we feel this is a neat advance in the science of evolutionary robotics and Artificial Life, we think the real impact may be a new industry in 5-10 years. If our robots can be designed and manufactured "one-off", without human engineering and labor costs to amortize through mass production, a whole new realm of low-cost robotics truly becomes enabled. We can immediately envision anything from a collection of 151 wacky toy robots as popular as Pokemon, to automatic cleaning machines specific to certain environments (like after a football game at a particular stadium), to robots cheap enough to find and destroy landmines in different parts of the world, to fixed industrial assembly applications for short-term, low-volume production.

The key idea is that a dumb machine evolved for a single environment is easier to construct than a machine which has to work in many environments, or a humanoid robot which has to be intelligent enough to do multiple tasks.

We can also, in the longer term, envision a robotic factory inside a tractor trailer, or on an ocean-going platform, or a satellite, which is brought to the scene and produces machines that are rated for their performance in the actual task environment, leading to real-time, on-site evolution.

What new technologies would be needed to realize these applications?

There are three technologies progressing separately, each with its own momentum and internal drives: physical/mechanical simulation (CAD/CAM), robotic factories (3D systems, additive manufacturing, replicators, MEMS), and evolutionary design.

Driven by the needs of industrial designers, CAD software is a profitable business, and new and improved features appear regularly in programs like Pro/Engineer and AutoCAD, including improved simulation of mechanisms in various domains. Physical simulation is a field which continues to make steady progress, and is now being driven by the needs of video games and entertainment.

There is movement in the area of replication as well. We used the cheapest available technology, a 3D printer which melts plastic and prints it in layers; our "fab" was thus only $50,000. We could have a $2M or a $10M "fab" which would build bigger devices out of stronger material. Sony reportedly has a robotic "fab" for most small consumer electronics. In fact, in a decade there may be machines which can print mechanical, electrical, and electronic logic componentry together on a small scale. This field is called micro-electro-mechanical systems (MEMS), and we may partner with a lab that has good facilities.

While CAD and rapid prototyping (RP) are hooked up to each other, they are mainly driven through software interfaces by teams of highly trained human engineers, and do not have the proper interfaces in place for control by software rather than humans. And they are only fast enough to keep humans from getting bored on a single design, which isn't fast enough for us. While we anticipate advances in commercial simulation, we use our own minimal simulators for the time being.

So the capacity to manufacture complex systems is going to be in place, but neither teams of humans nor the design software of today is capable of designing to its full capacity. That is where our core work in co-evolutionary design comes in.

Most humans drastically underestimate the complexity of an animal body (the number of unique moving parts) or of a brain (the lines of code it would take to describe), which is many orders of magnitude greater than that of an automobile, a space shuttle, or even Microsoft Windows 2000! There is a difference between hardware advances, like 1M or 10M RAM chips, and software complexity, which has not advanced in 25 years. Another million copies of the same mechanism may be an engineering feat in hardware, but in software it is just one more line of code! In some sense, Windows is just DOS with wallpaper, and is not significantly more complex from a biological point of view.

Most of the work in the DEMO lab is about getting to the heart of the problem of automatic design of biologically complex systems, using co-evolution, and understanding the principles and processes underlying the emergence of complex structures in nature. It is esoteric, involving non-zero-sum games, symbiosis, modularity devices, open languages, and dynamical systems theory. The robots are a practical demonstration of our research.

There are many negative science fiction themes, like Terminator, related to your research. You yourself referred to Bill Joy's article in WIRED. How can we prevent technology like this from getting out of control?

Terminator was based on all the powerful Sun computers and Cisco routers in "Skynet" (wireless internet satellites) borging into a hostile alien intelligence which took control of the industrial infrastructure and started making machines to exterminate humanity, one of which looked like Arnold Schwarzenegger. Forget about time travel!

But the theme of robots running amok as sorcerers and shapeshifters runs through fantastic and paranoid literature. Vernor Vinge's True Names explored the theme of AI emerging on the internet fairly early and is still a good warning. My favorite AI/robot novels are Philip K. Dick's "Vulcan's Hammer" and Marge Piercy's "He, She and It". Bruce Sterling, on Slashdot, claimed our robots were actually invented by him in his story "Taklamakan".

While we have achieved replicating robots, we have not achieved SELF-replicating robots, because the robots produced by our robotic system (composed of an evolutionary algorithm, physical simulation, and robotic manufacture) are not capable of further production. They are, at this point, little more than toys, with the brains of a bacterium. We hope to achieve insect status in a couple of years.

What about that self-replication problem?

While we are contemplating the conditions necessary for electromechanical self-replication, it is not a goal of the research. Self-replication is very easy in software and cellular automata. What is harder is to find the "bootstrap".

Just as a forge, a mill, and a lathe lead to more mills and lathes, and ultimately to the industrial revolution, and just as the understanding of stored-program computers and digital communications led to all of modern computer science and the internet, our set of robotic technologies could someday lead to a self-sustaining "bootstrap" and to more complex robotically designed and robotically constructed robots. It could even be a new industrial revolution.

But the fears of AI/robots replicating out of control, as expressed by Bill Joy in a Wired essay, are not at all justified. We are far from a collection of humanoid robots operating a machine shop making more humanoids. We are also far from a nanoscale chemical goop which absorbs and takes over all matter. By requiring an industrial infrastructure of tools, computers, power, and motors, our system is truly unlikely to lead to an out-of-control electromechanical robot colony that would, like Vonnegut's ice-nine, grow by eating obsolete computers and fax machines.

Joy's warnings are perhaps more relevant for engineered agriculture, chemical viruses, or Visual Basic macro viruses.

With our robots, it may be the case that because no human designed them, no human would understand their inner workings. We noticed this problem of "alien programming style" as early as Pete Angeline's evolved Tic-Tac-Toe player in 1993. But most of us don't even understand the inner workings of a CD player! So what is there to fear? Machines are simply tools leading to further prosperity for humanity, and only those machines which form economic virtuous circles survive. They are not competitors with or replacements for humans.

The replication process seems quite complex, and appears to rely upon a prototyping machine. Is one machine then controlling the actions of another?

Yes, the designing machine is controlling the fabrication machine. But we can think of both of these units as modules of one larger machine that designs and fabricates, or of the whole thing (the design, the fabrication, and the robot) as parts of one big robot that can design and make another robot.

It's difficult to imagine how an inanimate object can self-generate. Wouldn't an outside source have to provide extra bars and actuators? Is it that the prototyping machine is allowing the "starting population" to evolve by itself into the best machine suitable for forward locomotion?

Note that we are not claiming "self-replication", so the question of how a printing machine can generate another printing machine is still irrelevant. Nor are we claiming to have achieved a "bootstrap", where machines start to replicate out of nothing (as life emerged on earth, more or less). We are claiming "merely" automatic design and manufacture, so we have to use the word "self-" with caution.

But generally speaking (and we are thinking ultimately about self-replication and bootstrapping), even in biological life there is always an external source that supplies the building blocks and energy. Life always exists in the context of an environment. The amino acids our biological cells use to reproduce come from external sources that we eat. So the ultimate question is the difference in complexity between what goes into the process (given by the environment) and what comes out (the product of the design and manufacture). In nature, the difference is huge (in go air, sugars and water, out comes a human). In our lab, the difference is much smaller, but we claim it operates along the same principles.

How is this work different from traditional robotics?

For decades, researchers and engineers have been manually programming robots and laboriously designing and building robot bodies in an attempt to make them behave in ways similar to what we observe in nature. But robotics has not gone very far in over forty years of research, because we drastically underestimate the complexity involved. What we are trying to do is avoid manually programming behaviors (and designing bodies) altogether. We hypothesize that if we understand evolutionary principles well enough, we can get those behaviors and bodies to organize on their own, without human involvement, just as they did in nature. So I would say that we, and other researchers in evolutionary robotics and artificial life, are going down a path very different from traditional robotics.

What is the evolutionary process doing exactly?

Roughly, this is what happens: we have a set of building blocks (bars, actuators and neurons), and a set of "operations" that can join building blocks together, take them apart, or modify their dimensions. Then we start from scratch with empty robots, each with zero bars, zero actuators and zero neurons, and let the computer apply operations at random. After each operation is carried out, the computer measures the performance of the robot in a simulator. If the robot performs better than average (at speed, in our case), the computer makes more of it, and if it performs worse than average, the computer removes it. That's it. We let this thing run for a day and see what happens. Obviously, in the beginning most robots are just inert piles of random building blocks and their performance is zero. By chance, after, say, a hundred generations, a particular group of building blocks happens to assemble in such a way that something moves a little. That accidental assembly is then replicated because it has above-average performance. After many more generations of essentially the same kind of progress, we see robots that look like they were designed, but really they are just the outcome of this simulated natural selection.
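For readers who want something more concrete, here is a minimal sketch of that loop in Python. It is an illustration only, not the actual GOLEM code: the encoding, the mutate operator, and the simulate_speed stand-in are invented placeholders, whereas the real system evolves structured bar/actuator/neuron designs and scores their locomotion in a physics simulator.

    import random

    POP_SIZE = 100
    GENERATIONS = 300

    def empty_robot():
        # A robot starts "from scratch": no bars, no actuators, no neurons.
        return {"bars": [], "actuators": [], "neurons": []}

    def mutate(robot):
        # Apply one random operation: add, remove, or resize a building block.
        child = {k: list(v) for k, v in robot.items()}
        part = random.choice(["bars", "actuators", "neurons"])
        op = random.choice(["add", "remove", "modify"])
        if op == "add":
            child[part].append(random.uniform(0.1, 1.0))  # a length, stroke, or weight
        elif op == "remove" and child[part]:
            child[part].pop(random.randrange(len(child[part])))
        elif op == "modify" and child[part]:
            i = random.randrange(len(child[part]))
            child[part][i] *= random.uniform(0.5, 1.5)
        return child

    def simulate_speed(robot):
        # Placeholder for the physics simulator: reward robots that have both
        # bars and actuators.  A real evaluation integrates the dynamics of
        # the structure and measures how far its center of mass travels.
        if not robot["bars"] or not robot["actuators"]:
            return 0.0
        return sum(robot["bars"]) * len(robot["actuators"]) / (1.0 + len(robot["neurons"]))

    population = [empty_robot() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        scored = [(simulate_speed(r), r) for r in population]
        average = sum(score for score, _ in scored) / len(scored)
        # Keep above-average robots; rebuild the population by mutating survivors.
        survivors = [r for score, r in scored if score >= average] or [empty_robot()]
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

    best = max(population, key=simulate_speed)
    print("best simulated speed:", simulate_speed(best))

Even in this toy version the same dynamic appears: early populations score zero, a lucky combination of parts eventually beats the average, and its mutated descendants take over the population.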

What is the role of the simulator?

I mentioned a simulator in which the performance is evaluated. This simulator creates a virtual world that simulates the physics of the real world (like a gaming simulator). The fabrication machine is used only at the end, after mature robots have developed. We use the simulator because it is faster than reality: we can test hundreds of robots in a second. It is foreseeable, however, that in the future the evaluation will be done in reality, not in simulation, and some members of our lab, Richard Watson and Sevan Ficici, have demonstrated this for the brain (not the body) of a robot.

What is a linear actuator?

A linear actuator creates linear motion, like a piston, and is best thought of as a mechanical muscle. The main difference between a biological muscle and a linear actuator is that a muscle can only contract, while the linear actuator can both contract and expand (but biological muscles often come in pairs, so this is not really an important difference).

What is Golem@Home?

Check out our download page and our special Golem@Home frequently asked questions page.



Copyright (c) 2000
Lipson & Pollack

 Comments?
lipson@cs.brandeis.edu