The Singularity Doesn't Require Understanding
Lincoln Cannon
15 July 2011 (updated 10 November 2024)
Neuroscientist David Linden challenges the timeline (but not the feasibility) of Ray Kurzweil’s predictions, and argues that “The Singularity is Far.” I’m skeptical.
David explains that our understanding of brain processes is progressing at a linear rate. I agree with that; in any case, he would know better than I do. However, we may not need to understand brain processes before we can emulate them in a non-biological substrate.
Perhaps we only need to scan brain processes at a sufficient level of detail and reproduce those details in a computer (along with an enabling body, either virtual or robotic). Our ability to do both of these things appears to be advancing exponentially. In a sense, it would be like riding a bike versus understanding the physics of riding a bike. We can do the former without the latter.
David appeals to genetic sequencing as an example of understanding that progresses at a linear rate even while our technical abilities in the area advance exponentially. He could have appealed to many other examples of emerging technology, too. However, none would counter the argument that reverse engineering is sufficient even without understanding why it’s sufficient. If we can reverse engineer a genetic sequence, it will work like natural genetic sequences, and we simply don’t need to understand why.
Finally, David suggests that nanobots measured in microns would be too large to work in the intricate and delicate structure of the brain. I’m sure that’s true, at least until nanobots actually become nanobots: machines measured in nanometers rather than micrometers.
I don’t know when that will happen. But there are miniaturization trends that align well with Kurzweil’s predictions. For example, with 22 nanometer technology, we can fit hundreds of transistors within a single square micrometer. And we may soon be able to fit thousands of transistors into a single cubic micrometer.
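As a rough, illustrative sketch of that claim (the pitch values below are my own assumptions for a 22 nanometer-class process, not figures from Linden or Kurzweil), here is the back-of-envelope arithmetic:

```python
# Back-of-envelope transistor-density estimate for a 22 nm-class process.
# The pitch values are illustrative assumptions, not vendor specifications.

GATE_PITCH_NM = 90    # assumed gate pitch
METAL_PITCH_NM = 80   # assumed minimum metal pitch

# Ideal packing: one transistor per gate-pitch x metal-pitch cell.
area_per_transistor_nm2 = GATE_PITCH_NM * METAL_PITCH_NM      # ~7,200 nm^2
transistors_per_um2 = 1_000_000 / area_per_transistor_nm2     # ~140 per square micrometer

# Speculative 3D extension: stacking device layers at an assumed 100 nm layer
# pitch gives 10 layers per micrometer of height.
LAYER_PITCH_NM = 100
layers_per_um = 1_000 / LAYER_PITCH_NM
transistors_per_um3 = transistors_per_um2 * layers_per_um     # ~1,400 per cubic micrometer

print(f"~{transistors_per_um2:.0f} transistors per square micrometer (ideal packing)")
print(f"~{transistors_per_um3:.0f} transistors per cubic micrometer (with 3D stacking)")
```

Real logic density at 22 nanometers is lower once routing and cell overhead are counted, but the order of magnitude is the point: hundreds in two dimensions, and plausibly thousands once devices stack in three.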
Personally, although I identify as a Transhumanist, I don’t necessarily identify as a Singularitarian. That’s not because I reject Kurzweil’s timelines. Rather, it’s because I think a technological singularity, something like an event horizon beyond which we are unable to predict or control our progress, is not good, either practically or morally.
Although we’re probably going to continue to experience an unprecedented rate of technological progress, we should never willingly relinquish responsibility. This will be a challenge, particularly as we continue to gather data that we do not understand (to David’s point). And it will become even more challenging as that data takes dynamic forms that can act in our world without our full understanding.
However, it is not a wholly unprecedented challenge. After all, we’ve been raising children that we don’t fully understand since the dawn of time.