“The fool hath said in his heart, There is no God.” Psalm 14:1
Many computer scientists think the age of the self-replicating, evolving machine may be upon us.
It is an idea that has been around for a while – in fiction. Stanislaw Lem in his 1964 novel The Invincible told the story of a spaceship landing on a distant planet to find a mechanical life form, the product of millions of years of mechanical evolution. It was an idea that would resurface many decades later in the Matrix trilogy of movies, as well as in software labs.
In fact, self-replicating machines have a much longer, and more nuanced, past. The idea was indirectly raised in 1802, when William Paley formulated his teleological argument and considered the possibility of machines producing other machines.
In his book Natural Theology, Paley proposed the famous “watchmaker analogy”. He argued that something as complex as a watch could only exist if there was a watchmaker. Since the universe and all living beings were far more complex than a watch, there had to be a God – a divine watchmaker. Interestingly, Paley conceded that his argument would be moot if the watch could make itself. This detail has been forgotten during the culture wars that followed Darwin’s publication of On the Origin of Species.
Self-replicating machines have been around, at least in theory, for decades. In 1949, the mathematician John von Neumann showed how a machine could replicate itself. He called it the “universal constructor” because the machine was both an active component of the construction and the target of the copying process.
This means that the medium of replication is, at the same time, the medium of storage of the instructions for the replication. Von Neumann’s big idea allowed open-ended complexity, and therefore errors in the replication – in other words, it opened up self-replicating non-biological systems to the laws of evolution. His brilliant insight predated the discovery of the DNA double helix by Crick and Watson. He went on to develop mathematical entities that reproduced themselves and evolved over time, which he called “cellular automata”.
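Von Neumann’s actual universal constructor used a 29-state cellular automaton and is far too elaborate to reproduce here, but the core idea – simple local rules applied in parallel across a grid of cells, generating open-ended complexity – can be sketched with a one-dimensional “elementary” cellular automaton. The sketch below uses Rule 110, a standard illustrative rule; it is an assumption-laden toy, not von Neumann’s construction.

```python
# A minimal cellular automaton: each cell updates from its own state and
# its two neighbours' states according to a fixed local rule. Rule 110
# is a well-known elementary rule that produces complex patterns.

def step(cells, rule=110):
    """Advance a row of 0/1 cells one generation (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood as 0..7
        out.append((rule >> index) & 1)              # look up that bit of the rule number
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule number itself encodes the update table – each of its eight bits says what a cell becomes for one of the eight possible neighbourhoods – which echoes von Neumann’s point that the instructions and the medium they act on can be one and the same.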
Although von Neumann’s model initially worked only in mathematical space, it was a clear demonstration that evolutionary principles could apply to machines. Since then, engineers have taken the principle on board and produced physical applications such as RepRap machines – 3D printers that can print most of their own components.
The next logical step would be to apply these principles in robot reproduction. For instance, we could have a robotic factory with three classes of robots: one for mining and transporting raw material, one for assembling raw materials into finished robots and one for designing processes and products. The last class, the “brains” of the autonomous robotic factory, would be artificial intelligence systems. But could these robots also “evolve”?
The Victorian novelist Samuel Butler thought so. A contemporary of Charles Darwin, Butler spent 20 years of his life attacking the foundations of Darwinism. He was not so much against the idea of evolution per se; his tiff with Darwin revolved around the role of intelligence. For Butler, the intelligence of evolution and the evolution of intelligence showed common principles, of which life was at the same time both the cause and the result. On this basis, he concluded that “it was the race of the intelligent machines and not the race of men which would be the next step in evolution”.
In his novel Erewhon (an anagram of “nowhere”) he describes a utopian society that opted to banish machines. They were deemed to be dangerous, a notion that has influenced fiction, and non-fiction, to our day. The economist Tyler Cowen, in his recent book Average Is Over, warns that thinking machines will take our jobs.
Safety legislation impedes, although it does not preclude, the development of a fully autonomous robotic factory that reproduces itself. But planting such a factory on a distant planet is a different story. Mars colonisation could benefit from self-reproducing robots preparing the planet for human habitation. The physicist and visionary Freeman Dyson has proposed using self-replicating robots to cut and ferry water-ice from Enceladus (a frozen moon of Saturn) to Mars and use it to terraform the Red Planet.
Some biologists believe that life on Earth started on Mars, the seeds of our biosphere carried here by meteorites blasted off the Martian surface billions of years ago. If that is true, it would then be an irony of apocalyptic proportions if future intelligent machines from Mars rebelled against their originators, attacked Earth in order to colonise it and got rid of the current inhabitants. Unlike HG Wells’s fictional invaders, these robotic Martians of the future would be impervious to biological germs. (But perhaps not to computer viruses.)
Ultimately, the question of whether self-reproducing robots will evolve boils down to the capability of artificial intelligence systems to self-improve. Only then could the “brains” of the robotic factory build evolved robots without the need for human designers.
It’s already happening. Machine learning has been around for years. New algorithms for data analysis, combined with increasing computing power and interconnectedness, mean that intelligent machines will be able to comprehend massive amounts of contextual information. They will be able to understand not only what a piece of information is about, but also how it relates to other information. The capability to understand correlations and get “the big picture” could potentially enable them to set their own goals. Already there are autonomous robotic systems that do that, military drones being an example. Self-improvement could be next.
Perhaps by exploring and learning about human evolution, intelligent machines will come to the conclusion that sex is the best way for them to evolve. Rather than self-replicating, like amoebas, they may opt to simulate sexual reproduction with two, or indeed innumerable, sexes.
Sex would defend them from computer viruses (just as biological sex may have evolved to defend organisms from parasitical attack), make them more robust and accelerate their evolution. Software engineers already use so-called “genetic algorithms” that mimic evolution.
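A genetic algorithm of the kind software engineers use can be sketched in a few lines. The toy below evolves a bitstring toward all ones (the classic “OneMax” exercise): selection, crossover and mutation mirror the biological operations described above, while the specific parameters and function names are illustrative choices of mine, not any standard library.

```python
import random

# A toy genetic algorithm: evolve a 32-bit string toward all ones.
random.seed(0)  # fixed seed so the run is repeatable

LENGTH, POP, GENERATIONS, MUT_RATE = 32, 40, 60, 0.02

def fitness(bits):
    return sum(bits)  # number of ones

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # single-point "sexual" recombination
    return a[:cut] + b[cut:]

def mutate(bits):
    # flip each bit with a small probability
    return [b ^ (random.random() < MUT_RATE) for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # keep the fitter half (truncation selection)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "/", LENGTH)
```

Because the fitter half survives each generation unchanged, the best score never decreases – a crude form of the robustness that recombination and selection are claimed to provide.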
Nanotechnologists, like Eric Drexler, see the future of intelligent machines at the level of molecules: tiny robots that evolve and – as in Lem’s novel – come together to form intelligent superorganisms. Perhaps the future of artificial intelligence will be both silicon- and carbon-based: digital brains directing complex molecular structures to copulate at the nanometre level and reproduce. The cyborgs of the future may even involve human participation in robot sexual reproduction, and the creation of new, hybrid species.
If that is the future, then we may have to reread Paley’s Natural Theology and take notice. Not in the way that creationists do, but as members of an open society that must face up to the possible ramifications of our technology. Unlike natural evolution, where high-level consciousness and intelligence evolved late as by-products of cerebral development in mammals, in robotic evolution intelligence will be the guiding force. Butler will be vindicated. Brains will come before bodies. Robotic evolution will be Intelligent Design par excellence. The question is not whether it will happen, but whether we would want it to happen. source – Telegraph UK