
My Biases on Bryan Johnson's Plan to Save Humanity

Lincoln Cannon

17 May 2018 (updated 2 December 2024)


"Vision" by Lincoln Cannon

Tech entrepreneur Bryan Johnson has written a brief plan for the future of humanity. He’s concerned about global catastrophic risks associated with technology. And he intends his plan to start a conversation.

Bryan prefaces his plan with the disclaimer that he, like everyone else, has biases and shortcomings. So he’s interested in hearing and understanding other perspectives on his plan. To that end, here are my own biased thoughts.

Bryan says that less than 1% of humanity is future literate. He defines future literacy as the ability to forecast approximate milestones and create the capacity to reach them, regardless of contextual change.

This sounds to me a lot like a definition of intelligence-in-general, which I consider to be the capacity to set and achieve goals across diverse contexts. With my definition of intelligence in mind, and relating it to Bryan’s definition of future literacy, I’d say nearly 100% of humanity is future literate. The problem is not an absolute lack of future literacy or intelligence. The problem is that many of us, I think rightly, aspire to greater general future literacy and intelligence.

So I agree with Bryan that we can and should do much better. But I think he’s positioned his assessment in a manner that’s too black-and-white and insufficiently generalized. It’s too black-and-white because future literacy is not an all-or-nothing capacity. And it’s insufficiently generalized because future literacy, as defined, seems indistinguishable from intelligence-in-general, and misrecognizing that may inhibit effective efforts at remediation.

Following his definition of future literacy, Bryan remarks that, “if enough of us become future literate, we stand a chance of surviving ourselves.” I agree with this observation, particularly when understood to be about intelligence-in-general.

As we become better at setting and achieving goals across diverse contexts, we demonstrate that we are more intelligent. Again, this is not a hypothetical. It is simply true by definition.

Step One: Self Reflect

Bryan then proceeds to describe the first of seven steps in his plan, which is to self reflect. He notes prevalent bias among humans, including the meta-bias that most of us think ourselves less biased than most others. And he characterizes prevalent human bias as a global catastrophic risk, “as real as an asteroid half the size of earth barreling towards us.”

He’s right, undoubtedly, about the prevalence and risk of bias. But he frames bias and risk in a manner that seems rather absolutist and lopsided to me. What I mean is that he seems to be suggesting, intentionally or not, that non-bias is possible, if only we can become intelligent enough. And he likewise seems not to be considering the opportunities of bias, which accompany its risks.

I’d propose, as I seem quite unable to imagine otherwise, that bias is inherent in subjectivity. And to eliminate bias, we would need to annihilate ourselves.

In fact, that which we value most about our humanity is probably bias. For example, what is love if not bias? Even the most altruistic love seems to me to be a bias toward life and against nihilism, which in itself cannot be somehow proven on strictly logical grounds as the “right” perspective.

So perhaps instead of concerning ourselves quite so much with overcoming bias in general, we might concern ourselves more with reconciling between and among our various biases for mutual benefit. And that benefit should, I suspect, always remain firmly founded in that which we severally and diversely actually desire, according to our biases.

That doesn’t require moral relativism. And it doesn’t require indiscriminate tolerance for biases that oppress others’ harmless biases. There’s much shared value to be attained, given our common environment and commonalities in anatomy.

Bryan proposes that the best way to deal with prevalent human bias is admission. I agree that’s an excellent first step.

He seems to imply that the next best step is work toward overcoming bias, with some non-biased end-state in mind. I disagree with that. I think that’s ultimately nihilistic.

Instead, I’d propose the next best step, after admission, is the hard and ongoing work of reconciliation. It’s perhaps a kind of overcoming bias. But it’s not done by aspiring to a non-biased end-state. It’s done by aspiring to compatible and complementary biased end-states.

Step Two: Improve Ourselves

The second step in Bryan’s plan is that we should improve ourselves – radically. He says, “after recognizing our flawed condition, the next step is to improve it.” This sounds like the step of “repentance” in the transformation process described by Christianity (one of my biases).

And he emphasizes that the most important way to improve ourselves would be to improve our cognition because, by doing so, we can improve everything else. I think that’s mostly right, at least insofar as practical matters are concerned.

Strictly speaking, of course, we cannot improve that which is beyond our individual and collective ability to change. But insofar as we can change anything, it begins with cognition: with trust in a potential, with observation and reasoning about how best to interact with the world to achieve that potential, and with action driven by internal motivation.

It’s all cognition of one form or another. So improving our cognitive capacities should do wonders for improving our world, at least in all the ways we’re empowered to do so – which may eventually be far greater than that which we can do now.

Bryan mentions a few ways that he thinks we should improve our cognition. First, we should become more rational and logical. I agree with that, as long as we don’t understand that to entail becoming less emotional and intuitive. It’s not an either-or.

Second, he says we should overcome our biases. I commented on that above. As mentioned, I think acknowledging and reconciling biases would be a better way to frame and pursue this.

Third, he says we should become less vulnerable to manipulation. I agree. But there’s a fine line between some forms of manipulation and some forms of persuasion. And persuasion can certainly be good.

Fourth, he says we should improve probabilistic thinking. I agree that’s something that would benefit us enormously in all kinds of practical ways (see the worked example below).

Fifth, he says we should break free from constrained imaginations. I generally agree with this. But I think it’s important that we recognize it can be taken too far in many ways. There have been many creative serial killers and genocidal maniacs.

And sixth, he says we should liberate ourselves from belief systems that are no longer useful. I fully agree with this, to the extent they are genuinely no longer useful, and with the observation that I cannot ethically assess utility with only my own direct utility in mind.
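On the fourth point, a small worked example may help show what better probabilistic thinking buys in practice. This is my own toy illustration, not Bryan’s, and the numbers are arbitrary assumptions, chosen to demonstrate base-rate neglect, one of the most common failures of probabilistic thinking:

```python
# Toy illustration of probabilistic thinking: base-rate neglect.
# All numbers here are arbitrary assumptions, chosen for demonstration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), by Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive with a 5% false-positive rate,
# screening for a condition that affects 1 in 1,000 people:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(f"P(condition | positive test) = {p:.1%}")  # about 1.9%, not 99%
```

Most of us intuit that a positive result from a 99% sensitive test means we almost certainly have the condition. Working the numbers shows the intuition is off by a factor of fifty, which is the kind of practical benefit I have in mind.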

Bryan mentions several ways to improve cognition. He thinks meditation, supplements, exercise, education, self-help, and therapy are all useful. He’s a supporter of work in psychedelics and entheogens. But he notes that we need to improve our cognition by orders of magnitude, and none of the available means, even in combination, is sufficient.

I agree with this. Of course there’s a risk, here, of undermining practicality and hope. And we should take that risk seriously. While hoping for and working toward greater means, we should continue to celebrate and advocate the means we have available right now.

At this point, Bryan exhibits the problem about which I expressed concern in his assessment of biases. He notes, rightly, that we are limited by our imaginations in our aspirations to improve cognition. But I suspect he may be exemplifying that limitation of imagination when he frames his aspirations in terms of “becoming perfectly logical, rational, and eliminating all blind spots.” He has assumed that’s possible.

More significantly, he has assumed that’s meaningful. He’s expressing a great deal of epistemic humility and open-minded wonder about how we might improve our cognition. But he’s doing it within the scope of an unquestioned assumption that there’s such a thing as perfect logic and totally eliminated bias.

Gödel’s incompleteness theorems seem like sufficient grounds on which to question the possibility of at least any final or static perfect logic. And, as discussed before, I don’t think total elimination of bias is desirable or even possible without annihilating cognition.

Bryan shares some thoughts on the importance and relevance of AI in relation to human intelligence enhancement. I strongly agree with him on most of this. I trust we can and should and need to enhance human intelligence to complement AI. And I’m a huge fan of Bryan’s work in this area, including Kernel, the business that he has launched.

But I do have one item of clarification or perhaps disagreement to mention here. Bryan says that intelligence is the most powerful and precious resource in existence. I agree that intelligence-in-general, including that of humans and AI and their descendants, may prove to be the most powerful resource in existence. I hope so!

But I think it’s worth noting that intelligence-in-general is just power-in-general. And power-in-general is neither inherently greatest nor inherently precious. Returning to my definition of intelligence, as optimization across contexts for goals, even a flush toilet exhibits a limited amount of intelligence. But we rarely if ever consider our toilets the greatest or the most precious resources.
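To make the toilet example concrete, here’s a minimal sketch, entirely my own illustration: a float-valve feedback loop is a goal-directed system in the thinnest sense, “optimizing” toward a target water level across varying disturbances. All names and numbers below are assumptions for demonstration.

```python
# Toy sketch of a toilet tank's float-valve feedback loop: a minimal
# goal-directed system, "optimizing" toward a target water level.
# All names and numbers are illustrative assumptions.

TARGET_LEVEL = 1.0  # a full tank

def step(level, inflow=0.2, leak=0.05):
    """One control cycle: the float opens the valve while below target."""
    if level < TARGET_LEVEL:  # the float is the entire "sensor"
        level += inflow       # valve open: fill toward the goal
    level -= leak             # disturbance: usage, evaporation, leaks
    return min(level, TARGET_LEVEL)

level = 0.0  # tank just flushed
for _ in range(10):
    level = step(level)
print(f"water level after 10 cycles: {level:.2f}")  # hovers near the goal
```

The system pursues its goal across a range of contexts, which is intelligence by my definition, but only trivially so. That triviality is exactly why intelligence-in-general and preciousness come apart.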

Certain forms of intelligence, perhaps presently existing beyond our ability to observe and comprehend fully, or perhaps in our future, may become greater than anything we presently have the anatomical capacity to imagine. And yet other forms of intelligence already are and will remain most precious to us. They are that which we love most: perhaps friends and family, and perhaps ourselves. So, in lived estimation, there’s not necessarily a high degree of overlap between the greatest and most precious resources in existence.

Step Three: Change Economic Incentives

Bryan’s third step in his plan for the future of humanity is to change economic incentives, particularly to make humans economically viable. He warns that our current economic incentives are structured to make humans irrelevant as fast as possible.

Bryan points out that the ROI on automation and artificial intelligence may already be higher than, and is surely growing faster than, the ROI on human intelligence. So we should buy more robots and fire more humans. Then, if the humans are lucky, we can live on a universal basic income while the robots rule. Overall, I agree with Bryan that this is a real and growing, even momentous, risk.

In the context of economic incentive, Bryan presents a criticism of Facebook, which he sees as a preeminent example of the problematic economic incentives. Overall, I agree that Facebook is operating on the problematic incentive model. But Bryan seems to me to be overstating the social consequences when suggesting that Facebook cultivates the “worst versions of ourselves.”

In my experience, I’ve observed some persons behaving worse sometimes because of Facebook and social media. But I’ve also observed some persons behaving better sometimes, also because of social media. It seems too narrow a criticism to blame social media only for its negative consequences without giving it credit for anything positive that has come from it.

For example, social media probably has heightened overall global empathy. And I think that’s generally a good thing. So unless Bryan can make a stronger case for the evils of social media, I think his positioning would be stronger if his criticism were more nuanced.

In any case, returning to the broken economic incentive model, Bryan suggests some ways that we might fix it. First, we might incentivize businesses to improve human cognition and not just manipulate us to make money.

This is a challenging distinction. How do we discern legitimate exchange of value (money for cognitive improvement) from manipulation? And to what extent is that not actually happening already? I’m not sure.

I’m hesitant to conclude that we’re not already engaged in at least a significant amount of legitimate exchange. But I’m also hesitant to conclude that it’s possible to know, in advance, that any given proposed exchange of this nature will not prove to be merely manipulative.

There’s a chicken-and-egg problem here. If I need cognitive enhancement, I may be inherently unable to discern whether the offered cognitive enhancement is what I need. And so I’m in a strange position, where I must trust. And trust arises from some amount of persuasion or manipulation, distinguished by whether and to what extent it’s engaged ethically. Of course that’s a rabbit hole of complexity.

Second, to fix the broken economic incentives, Bryan says we might re-design political systems to make data-driven decisions rather than pander to wealth or well-organized special interests. This too is a challenging distinction. On what data do we base the decisions? Who decides?

Assuming an altruistic response, we must all decide. We must all contribute to the data on which the system makes decisions. Historically, that would have been impossible at large scale. But it seems there may be hope for new systems of governance empowered by technologies such as blockchain, which Bryan mentions.

Third, to fix broken economic incentives, Bryan says we might start to improve humans at a faster rate. This one seems problematically circular.

It seems to be a suggestion that we can increase incentive for improving human intelligence by improving human intelligence. It may indeed be a virtuous cycle. I trust it is. But recognizing that doesn’t, in itself, seem to offer a practical way to change incentives now, when we’re concerned the virtuous cycle hasn’t already begun.

Fourth, and most important from Bryan’s perspective, to fix broken economic incentives, we might each individually take control of our own digital data. This seems to be one of the main drivers for Bryan’s negative attitude toward Facebook.

He observes that Facebook and similar companies are gathering and using data about us as a resource that they own. Instead, he argues, we should change this so that we each own the data about ourselves. Then it would be less likely that the data would be used to exploit us and more likely that it would be used to improve us. And this could introduce the virtuous loop mentioned previously.

I agree with Bryan on this. And I think he’s right to emphasize this point, as I cannot presently imagine another way to improve the economic incentives for human cognitive enhancement that would be more effective and more immediately actionable.

Step Four: Accept Change

The fourth step in Bryan’s plan for the future of humanity is to accept change. And he has a particular change in mind: artificial intelligence.

He suggests that it’s pointless to waste energy on disliking it or working against it. It’s here. And it’s here to stay. So the effective choice, apparently, is only between waiting for it to evolve beyond us, or working to co-evolve with it.

I strongly agree. This is a major driver of my identity as a Transhumanist. Purposeful futures for humanity depend on us using the means available to change not only our world, but also ourselves.

That which doesn’t change, dies and disappears. That which changes, adapting, achieving its goals, setting new ones, and repeating: that is, in a dynamic sense, the ultimate form of intelligence. The difference between great and small intelligence is only a matter of time when the small intelligence demonstrates the capacity to self-improve and the great intelligence does not.
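As a back-of-the-envelope sketch of that claim, with arbitrary numbers of my own choosing, the crossover time is easy to compute: any compounding capacity eventually overtakes any static one.

```python
import math

# Back-of-the-envelope: when does a self-improving "small" intelligence
# overtake a static "great" one? All numbers are arbitrary assumptions.

small, great = 1.0, 1000.0  # initial capacities, in arbitrary units
r = 0.10                    # compound self-improvement rate per period

# small * (1 + r)**t = great  =>  t = ln(great / small) / ln(1 + r)
t = math.log(great / small) / math.log(1 + r)
print(f"crossover after about {t:.0f} periods")  # about 72 periods
```

A thousand-fold head start evaporates in a few dozen compounding periods. Only the growth rate matters in the long run.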

Artificial intelligence is, so far, an expression of human intelligence. I hope we keep it that way, continuing to integrate ourselves with it, co-evolving with it. Otherwise, we’ve reached the end of human history. And artificial intelligence will replace us.

In either case, we’re at a kind of end, or disruption, in the evolution of humanity. Biologically, it’s time to change. We need better brains and bodies. So, as I see it, the effective choice is between the extinction of humanity and the co-evolution of human and machine intelligence into posthumanity.

Step Five: Build a Global Biological Immune System

Bryan’s fifth step in his plan for the future of humanity is to build a global biological immune system. He thinks this should be modeled after or is at least analogous to the digital immune system humanity has built, represented in the Internet, security software and hardware, and innumerable software engineers constantly working to ensure the stability of the system. And he hopes we can do the same for biology, genetics, chemistry, and related materials science. For his part, he has invested $100M of his own money, via his OS Fund venture fund, into companies working in these areas.

Here, too, I agree with Bryan. Software facilitates and expedites integration of our intelligence with and extension of it into our world, generating feedback loops of unprecedented magnitude (at least locally, as I trust far greater intelligences than humans already exist). It’s instrumenting time and space with our will.

Some have said that “software is eating the world.” I think there’s something to that. But I’m not sure “eating” is quite the right metaphor. In any case, as software permeates the physical world, we are presented with the opportunity to build a far more robust physical environment for human survival and thriving, leveraging what we’ve learned from the digital world.

Step Six: Focus on the Right Things

The sixth step in Bryan’s plan for the future of humanity is to focus on the right things: more on the problems that we must solve to survive, and less on the problems that we will probably solve through the “normal course of human innovation, science, and grit.” I agree, in spirit, with Bryan’s distinction between these two kinds of problems.

But I would express the difference and effective attitudes toward them a bit differently. Whereas he mentions early on that we should not work on the second kind of problem, he later seems to contradict himself by acknowledging that we will solve the second kind of problem through “innovation, science, and grit.” Those are all work. So he’s implying that we do need to work on the second kind of problem.

But I think his intended point, and the one I agree with in spirit, is that there are some kinds of problems that we’re already naturally inclined to work on. And there are other kinds of problems that we’re actually naturally inclined to ignore. And some of the problems that we’re naturally inclined to ignore are actually momentous, including global catastrophic risks. So the practical point of Bryan’s warning is that we should more carefully prioritize the challenges we face and more carefully allocate our time and resources in accordance with that more careful prioritization.

That doesn’t mean no one should work on less important problems at all. To the contrary, someone must do such work at least some of the time. But it’s entirely possible to approach any set of problems as a portfolio, allocating weighted time and resources according to assessments of relative concern. So long as a given problem doesn’t reach zero on the relative concern scale, at least some small amount of attention should still be given to it.
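Here’s a minimal sketch of that portfolio idea. The concern scores and the floor value are my own illustrative assumptions; the point is only that weighted allocation with a small floor keeps every nonzero concern on the radar.

```python
# Minimal sketch of treating problems as a portfolio: attention shares
# are proportional to assessed concern, with a small floor so that no
# nonzero-concern problem drops to zero attention. The concern scores
# and the floor are illustrative assumptions.

def allocate(concerns, floor=0.02):
    """Map {problem: concern score} to attention shares summing to 1."""
    total = sum(concerns.values())
    shares = {p: c / total for p, c in concerns.items()}
    shares = {p: (max(s, floor) if s > 0 else 0.0) for p, s in shares.items()}
    norm = sum(shares.values())
    return {p: s / norm for p, s in shares.items()}

concerns = {
    "global catastrophic risks": 8.0,
    "chronic everyday problems": 5.0,
    "minor annoyances": 0.2,
}
for problem, share in allocate(concerns).items():
    print(f"{problem}: {share:.0%}")  # roughly 60%, 38%, and 2%
```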

Bryan suggests that we need to develop new systems to identify and allocate resources toward global catastrophic risks that are not receiving enough attention. Echoing concerns he expressed earlier about a lack of incentive to improve human cognition, he observes that our current political and economic systems excessively reward problem-solving aimed at short term concerns in general, at the expense of long term problem solving. I think he’s right.

But I’ll note, again, that we wouldn’t want to focus exclusively on long term thinking. That’s a recipe for analysis-paralysis, followed by disaster. It’s a lesson we’ve learned in the software world, which for many years relied on large-scale, long-term “waterfall” style planning of projects that ended up wasting time and resources without ever coming to completion.

As we’ve become better at software development, we’ve learned to use more “agile” forms of project management, which aim for more incremental progress and dynamic planning. This shouldn’t be an excuse to neglect identifying overarching long term values and the risks to attaining them, or to keep that analysis from flowing into agile decision-making. But it’s important to keep in mind that there are risks of excess in both directions.

Step Seven: Update Our OS

As step seven in his plan for the future of humanity, Bryan says we should update our OS, which he uses as a metaphor for our belief systems. He says that the anatomy of our brains is ancient, and our belief systems aren’t much better.

And this is important, he observes, because belief systems are the “most powerful technology to facilitate mass human cooperation and also one of our biggest liabilities.” Their power is exhibited in our willingness to fight for them, and even to die for them.

Bryan notes the difficulty he had in leaving Mormonism, the religion within which he was raised. He says it was the hardest thing he’s ever done. This, I think, is an important observation. And I think it points to something beyond what Bryan has fully articulated.

Religion is more than a belief system. And, in function, it’s not even primarily about beliefs. In function, religion is primarily about esthetics: an organized and formalized communal art form. And it has all the power that Bryan ascribes to belief systems, and more.

Indeed, the fact that religion is about more than epistemics, and not even primarily about epistemics, is probably why it’s so difficult to change via epistemics. You can use logic and reason and science to change epistemic problems quite effectively. But you can’t use them effectively to change esthetic problems. So religion often endures, robustly, in the face of epistemic challenges.

To the extent that religion has influenced epistemic matters, science has been effective at modifying religion over time. But this influence has not been unidirectional.

Christianity, notably, has had profound influence on the emergence and trajectory of science. Many of the persons that we regard as celebrity scientists of the Enlightenment were motivated in their research by explicitly religious aims. It was esthetics that provoked and sustained their work. And so long as we don’t fully recognize that, we’ll be confusing ourselves about how best to solve problems that we observe in the world of “beliefs.”

Bryan claims that we need more inclusive belief systems because we’re all in this together on a single planet whose fate affects us all. I agree. But more importantly, to press the importance of the categorical shift I’m advocating, we need more inclusive esthetics! We need powerful symbol systems that move us on emotional levels to love each other and our shared world with greater strenuosity.

Beliefs won’t be enough. They’re just scratching the surface. We have to feel the difference in the depths of our souls.

It’s not just semantics. When we focus on “beliefs” as the source of the problem rather than “values,” and when we focus on epistemics rather than esthetics, we will be inclined to approach problem solving in very different ways. And the epistemic approach isn’t going to take us where we need to go on its own. We must also approach the problem solving in the esthetic category.

Bryan characterizes Judeo-Christian religion as a single-player sport, in which one gains salvation on an individual basis, independent of responsibility or consequences for anyone else. I recognize that many persons interpret and act on Judeo-Christian religion in ways that approximate Bryan’s description. But I think the description is flawed, poorly accounts for the authoritative tradition on which Judeo-Christianity is based, and is ultimately a straw man.

For example, in the Jewish tradition, the prophet Isaiah claims that God will not save anyone, won’t even listen to the prayers of anyone, unless we work to save each other (Isaiah 1). In the Christian tradition, the apostle Paul claims that all members are needed, and none can rightly claim not to have need of another, in the Body of Christ, which is symbolic of the church (1 Corinthians 12). And in the Mormon tradition, the prophet Joseph emphasizes that our own salvation depends on each other’s salvation (D&C 128).

I could of course go on for a long time, elaborating on this subject. But suffice it for now to say that I think human inter-dependency is actually core to the Judeo-Christian religion. And interpretations that ignore or minimize this core are weak interpretations.

Where I think Bryan is right is when he points out that religions, including Judeo-Christianity, tend to foster strong sectarian behavior. The cooperation and inter-dependency they promote are too often limited to persons that share particular interpretations and affiliations.

I wouldn’t agree that such sectarianism is univocally advocated in the Judeo-Christian authoritative tradition. For example, Jesus says and does things that would undermine such sectarianism if taken seriously. But it remains too easy to overlook those aspects of the tradition and focus on aspects that would pit us against each other, demonizing differences, and sometimes provoking violent hostility. Historically, and even presently, that has been and remains incredibly dangerous.

While I disagree with Bryan’s characterization of Judeo-Christian religion as a single-player sport, I strongly agree with his criticism that it has not, at least so far, proven to be a sufficiently universal sport. As Bryan puts it, “for humans to be successful in the future, we must play a multi-billion person sport.” Except for those who embrace some kind of escapism or nihilism, Bryan’s observation is just plain true. And our religions have not yet done well enough at helping us adapt to that reality.

Bryan says we need new belief systems to help us achieve that greater cooperation. I think he’s partly right and partly wrong. I think we will need both the old and the new, integrated and reinforcing each other, to go where we want to go.

There is strength to be gained from the esthetic of the ancient and abiding, as there is strength to be gained from the esthetic of the new and creative. Neither alone will prove as strong as the two combined. And we can combine them! We must combine them.

To choose one or the other is to choose one bias over another, one desire over another, one person over another, and not for any necessary reason. We can redeem and reconcile while creating. So we must. Or we’re not actually making that better world to which we claim to aspire.

How do I know that? Because the judges of the better world will be the evaluators. And the values of the evaluators, in all their biases, will forever be the criteria for judgment, no matter how intelligent they might become.
