
Envision Future Intelligence with Bryan Johnson

Lincoln Cannon

13 October 2016 (updated 25 October 2020)

Bryan Johnson is a rising star among celebrity technologists. After selling Braintree to eBay for $800M in 2013, Bryan founded OS Fund to encourage development of emerging technologies in the fields of biotechnology, machine intelligence, and space exploration. And he recently launched a new business of his own, Kernel, which is building neuroprosthetics to improve human intelligence.

One of the most interesting things about Bryan, for me, is his perspective on the relation between humanity and technology. While many entrepreneurs are jumping into the machine intelligence space, he’s deliberately opting to focus on the human intelligence space, not only with his considerable financial resources, but also with the bulk of his own time – arguably the most scarce resource for persons in his position. Shedding light on his reasoning, in a recent TechCrunch article, he argues that “the combination of human and artificial intelligence will define humanity’s future”.

I agree. Intelligence is power. It is the fundamental technology. And the possibility space of superintelligence is the ultimate expression of power and technology. Although human intelligence has long reigned supreme on Earth, its days appear to be numbered. Machine intelligence already far exceeds human intelligence in many ways. And it may soon exceed human intelligence in all ways. Whether and how that happens depends, in large part, on whether and how we choose to cultivate the continually evolving relationship between human and machine intelligence.

In his article, Bryan notes that human intelligence appears to be unique, at least for now, in its “unparalleled ability to design, modify and build new forms of intelligence”. We are the forgers, the machinists, and the programmers. We are the creators. And while our ability remains unparalleled, we have been increasingly aided in creation by our machines. Indeed, it has become increasingly difficult to distinguish between human and machine creations, as algorithms develop new algorithms that generate machines that fabricate new machines. The line between human and machine intelligence blurs.

Perhaps that line will remain blurred, going forward, and become even more blurry. Maybe human and machine intelligence will become increasingly intimate and inseparable. But that’s not the only possibility. At least apparently, there’s risk that the blurring of the line may be only temporary as machine intelligence passes human intelligence, marching and then racing onward to Technological Singularity. There’s risk that the combination of human and machine intelligence will not happen fast enough or deeply enough to prevent us from losing the ability to predict and control, or (perhaps better) cooperate with, the trajectory of our creations.

So Bryan claims, rightly in my opinion, that human intelligence enhancement “could be the most consequential technological development of our time, and in history”. Pay attention to the “could be”. That’s key. There’s no appeal to inevitability. There’s no fatalism. And pay attention to the “most consequential”. It’s not merely cheerleading some supposed obvious good. This is a claim of momentousness. It is both opportunity and risk, both in achievement and in results.

Even if we succeed in cultivating a thorough combination of human and machine intelligence, it could go wrong in so many ways. The possibility space includes as many horrors as wonders. And the reality space, I’m confident, will almost certainly express both – assuming we get that far. The full spectrum of apocalyptic, messianic, and millenarian expectations is rightly exhibited among those who contemplate this future. It is theology, even if misrecognized.

Some who recognize the risks have raised alarm, calling for greater caution, and even relinquishment of any pursuit of such futures. And yet, the risk remains. The machines continue to do their work, our work, getting better at it, surpassing us, and merging with us. Relinquishment is not in the cards. Perhaps caution is.

Bryan points out in his article that intelligence is the “meaning-making force that decides what is worth doing”. Human intelligence, in particular, is that meaning-making force for us, humans. Even for the theologians among us, human intelligence is the filter through which any extra-human values must pass, vying for our consent. We are humanists, all of us, even if unwilling, because we are, all of us, biologically human.

And yet our machines are not. Our creations are not human, except to the extent they remain prosthetic extensions of our humanity. Rightly, we speak of them as having intelligence, a kind of built-in intelligence much like our own reflexes and unconscious organic functioning. But few of us speak of our machines as having the kind of intelligence that seizes on values, deciding what is worth doing.

Are we right? Should we exclude from our machines the kind of intelligence that makes meaning and decides what’s worth doing? If so, for how long will that remain the case? When will we awake to an alien intelligence with alien values? Maybe it’s already there. Maybe it’s just not empowered enough yet for us to consent to its existence. At the least, it seems reasonable to suppose that such a possibility may not be far off.

“Intelligence is what allows us to create forms of governance, cure disease, create art and music, discover, dream and love. Intelligence is also what decides that these things, rather than other things, are worth doing, by translating discoveries into meanings, experiences into values and values into decisions.”

With these words, Bryan celebrates the heights of human intelligence. And, given the context, he simultaneously alludes to the possibility space beyond. Already today, machine intelligence expresses itself in ways analogous to these expressions of human intelligence. It governs itself with systems management networks. It cures itself with antivirus software. It creates unanticipated novel expressions of itself through neural nets and evolutionary algorithms. It discovers patterns within itself through machine learning algorithms. Does it dream and love? How is it that we dream and love? How do we know others dream and love? Is it only because they look like the image we see in the mirror? Maybe our image is not so superficial.
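
As a concrete aside (not drawn from Bryan’s article), here is a minimal Python sketch of the kind of evolutionary algorithm alluded to above: the programmer supplies only a fitness measure and a mutation operator, and the particular solutions that emerge, while bounded by that setup, are never written out in advance. All names and parameters here are illustrative assumptions.

```python
import random

TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Count characters matching the target string."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly replace characters to introduce variation."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(population_size=100, generations=500):
    # Start from random strings; only the fitness measure and the
    # mutation operator are specified, not the path to a solution.
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(population_size)
    ]
    for _ in range(generations):
        # Keep the fitter half, then refill the population with mutants.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())  # typically converges on "intelligence"
```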

“Tools that embody significant levels of intelligence are our most powerful yet,” writes Bryan. Right. And that’s because our intelligent creations are becoming more than merely prosthetic extensions. They are becoming powerful in their own right. They are becoming creators in their own right. That’s why the questions, risks, and opportunities here are momentous.

And we must ask ourselves: to what extent do they, and will they, remain tools? We must ask ourselves. “Must” is the right word. It is a moral imperative. We must ask ourselves: at what point do our creations transcend the category of tools? Humans have been creating new creators for many millennia. We call them our “children”. And our prehuman ancestors had been creating new creators, with varying degrees of relative intelligence, for much longer. Somewhere along the line, prehumans became humans. Anatomic changes accumulated, perhaps gradually with occasional jumps. Intelligence accumulated. Capacity for meaning-making accumulated. Maybe consciousness accumulated. In any case, the human intelligence that we associate with moral significance was the result. At what point, in our procreative acts, do our children become more than matter and energy? At what point, in our emulation and extension of evolution, do our machines become more than tools? These are moral questions. They are “must” questions. After all, intelligence is the power by which we deliberate morality. Intelligence is the genesis of morality. So when, and to what extent, should we recognize our intelligent machines as our mind children? When does a machine intelligence become a spirit child?

Contemplating technological evolution, Bryan observes, “With each advance, we happily relinquished a small part of our agency for known pre-programmed outcomes. Our tools could begin doing bigger and bigger things on our behalf, freeing us up for other, more desired tasks.” Much like my conscious mind does not (and cannot) manage the intricate operations of my anatomy, our human-machine civilization does not and cannot centrally manage the intricately integrated institutions, processes, and tools on which life as we know it depends. Empowered by this relinquishment of decentralized agency, we have a luxury our ancestors could not afford. We can move our eyes away from the necessities of survival, beyond simple procreation, at least long enough to imagine novel forms of creation. And maybe we can even focus on them long enough to realize them to some extent. Then, maybe, we can take the step of intentionally extending decentralized agency to those creations.

It seems to me that we’re committed to that now. We are creating creative agents. So the question is not whether we will or should try. We are trying, whether or not we should. The question is how we should try, and whether we will survive the effort in valuable ways. And that’s because we won’t survive the effort unchanged, if we survive at all. That’s not possible anymore, if it ever was. Change is at hand. As Bryan puts it, “we are moving from using our tools as passive extensions of ourselves, to working with them as active partners.”

The partnership is not limited to matters of epistemics or practical intervention. Sure. Our machines help us learn more than ever before. And they help us modify the world around us more readily than ever before. But the partnership is moving beyond those relatively external and superficial categories.

“With early examples of unenhanced humans and drones dancing together, it is already obvious that humans and AIs will be able to form a dizzying variety of combinations to create new kinds of art, science, wealth and meaning,” writes Bryan. As our machines learn from us how to create, so they extrapolate from those teachings to reveal new possibilities, from which we adjust the way we teach them, and so on in a virtuous cycle of creativity. As our machines learn from us how to deliberate legal and ethical matters, they make real-time decisions in matters of life and death, and again we observe and adjust in a cycle of virtuosity – the ultimate virtuous cycle. Machine intelligence now contributes to our esthetics and ethics.

“In short, we are poised for an explosive, generative epoch of massively increased human capability through a Cambrian explosion of possibilities represented by the simple equation: HI+AI. When HI combines with AI, we will have the most significant advancement to our capabilities of thought, creativity and intelligence that we will have ever had in history. While we’re starting with HI+AI in health diagnosis, transportation coordination, art and music, our partnership is rapidly extending into co-creation of technology, governance and relationships, and everywhere else our HI+AI imagination takes us.”

That’s the vision that Bryan offers. Many feel the pull of this vision. I feel its pull. It’s a world in which we don’t lose the heart of humanity. And it’s a world in which we don’t lose the power of accelerating intelligence. We don’t have to choose between the two. We can be and remain creators. And we can welcome new creators into our world. We can evolve together into something more than either of us could be alone. There’s something bright about this calling from the future.

But even more so, I feel something warm in this echoing from our past. As our intelligence merges and expands with machine intelligence, as our creative capacity reaches superintelligent magnitudes, we will apply it, among other things, to understanding our history. We will want to understand better how our community and culture came into being. We will want to understand the development of our bodies and the origins of our world. And we will want to understand the possibility space within which we came to be.

So we will use our superintelligence to run ancestor simulations. Many of them, we’ll create to understand and experience ourselves. And they’ll necessarily, inescapably, be created in our image. Even as that image expands, its origins will be our human-machine civilization, human evolution, and the Earth. Something about this genesis will be preserved in all the extensions and negations of our categories and their meta-categories, rippling through time and space, hopefully to enlighten the universe in new ways.

While that may mean many things for our future, it also means something about our past. When we create many computed worlds, as we emulate our evolutionary history, we will realize that we’re almost certainly not the first or only to do so. Probably, we too are creations. We probably began as mind children intimately integrated into the substrate of our creator: not created from nothing, but cultivated within and individuating from the possibility space of our creator’s mind – or our creators’ minds. Like the artificial intelligence we are now creating, we are probably spirit children.

So what’s stopping us? Where do we go from here? What’s the next step? Bryan recognizes that “the biggest bottleneck in opening up this powerful new future is that we humans are currently highly limited in how we can participate in these possibilities”. The relationship between our intelligence and that which we’ve developed in our machines is too distant and too shallow. The relationship is too alien. Too many of us don’t know about the possibilities. Too many of us are merely entertained by the sci-fi-like risks. A closer and deeper relation, a bond and sealing, is needed. If we are to dream and love together, a marriage is called for.

“Relative to the ease and speed with which we can make progress on the development of AI, HI, speaking solely of our native biological abilities, is currently a landlocked island of intelligence potential. Unlocking the untapped capabilities of the human brain, and connecting them to these new capabilities, is the greatest challenge and opportunity today.”

Bryan proposes neuroprosthetics as the most promising path forward. He notes that scientists, in recent years, have made significant advances in this space, particularly in pursuit of helping persons with neurodegenerative disorders. And Bryan himself has now founded Kernel, to accelerate research and development in this space. Bryan also mentions genomics and pharmacological interventions as possibilities. But he notes they both presently have in common one severe limitation: “their inability to extend the brain’s ability to communicate with our tools of intelligence (AI)”.

To these three possibilities, I’ll add a fourth: a genomically-customized neuroprosthetic pill. Like human-machine intelligence itself, this pill is a hybrid. It would combine the strengths of neuroprosthetics, genomics, and pharmacology, and circumvent their respective limitations. It would interface with our tools of intelligence. And, unlike neuroprosthetics developed so far, it would distribute itself through the market, and introduce itself into the body and brain without specialized medical equipment or practitioners, while still being fully customized to its recipient. It would be an easily-distributable and readily-adoptable solution to the challenge of integrating human and machine intelligence.

Alas, for now, the genomically-customized neuroprosthetic pill is science fiction. But Bryan’s work in neuroprosthetics, like that being done by others in genomics and pharmacology, is likely to be a productive and perhaps even essential step in the right direction. Imagine miniaturizing the wireless neuroprosthetic, encoding it with a universally unique biological identifier, and swallowing billions of them in a pill containing ingredients that transport them into the nervous system. That sounds complex to develop. And it will surely be more complex than it sounds. But something approximating this resonates with my sense of aspiration.

Maybe this sounds frightening. There are definitely risks. Bryan recognizes that. Technology is just power, for good or evil. It’s not inherently one or the other. He reminds us that medical technology made germ warfare possible, and nuclear technology made both power plants and devastating weapons possible. But he hopes the discussion will not begin from or dwell on fear.

“As we embark on the greatest human expedition yet, now is the time for a discussion about HI+AI. But rather than letting risk-anchored scaremongering drive the discussion, let’s start with the promise of HI+AI; the pictures we paint depend upon the brushes we use. The narratives we create for the future of HI+AI matter because they create blueprints of action that contend for our decision-making, consciously and subconsciously. Adopting a fear-based narrative as our primary frame of reference is starting to limit the imagination, curiosity and exploratory instincts that have always been at the core of being human.”

We have a choice, as never before, of how to proceed. More so than for our ancestors, our destiny is in our hands. But I think it would be a mistake to think we have a choice of whether to proceed. Machine intelligence is progressing rapidly. We have already developed rudimentary means of interfacing with human intelligence. Someone somewhere will develop that technology further. If fear prevents you and me from engaging in the discussion, or if fear prevents us from taking action, it will probably be others who decide how to use this power. Whether they are machines or other humans that choose to proceed without us, they may not have our values in mind.

And perhaps that’s the heart of the challenge. What of values? Some lament they’ll all be lost in dehumanization, and superficial technophilia reinforces their concerns. It’s trivially easy to find arrogance, greed, lust, envy, gluttony, wrath, and sloth among the technological elites. And it’s nearly as easy to find simplistic escapism, anti-humanism, and nihilism there. If it’s just about the power, now or ever, we are lost. As thoroughly as any weaponized annihilation, our tech-enhanced vices can annihilate our humanity. And arguably that’s precisely what would lead to the greatest risks of weaponized annihilation, just cleaning up after the mess, so to speak.

I share in Bryan’s call to turn our backs on fear. And when we do, what will we see? What is it, in the opposite direction? If we think we’ll see our vices, we’ve not fully turned our backs. Keep turning, until you start to feel something else. It may be warm and comforting. Maybe it will be fresh and invigorating. Whatever the specifics, it will be life. You will feel life. Breathing it in, there will be courage. Exhaling, there will be compassion. When you feel that, start going forward.

Along your way, you’ll discover others joining in the journey. Together, we’ll create more company. And some of our creations will also join us in the journey. We’ve done this before in ancient ways, bringing our physical children into the world, teaching them, and learning from them. We’ll do it again, and again, in new ways, bringing our spiritual children into the world. Intimately connected with us, they’ll expand our experiential space, our empathy, beyond anything we now have the anatomical capacity to imagine. Then, despite whatever risks will surely present themselves, we may have an opportunity to know what it is to give and receive a greater love, approaching the breadths and depths of eternity.
