I recently came across a document by Margaret A. Somerville, which presents "The Case Against 'Same-Sex Marriage'". This is, to date, the best argument I've read from opponents of gay marriage. It accounts for the importance of religious perspectives, but does not argue from them, and instead appeals to secular ethics. Somerville's summary of her argument follows:
"Society needs marriage to establish cultural meaning, symbolism and moral values around the inherently procreative relationship between a man and a woman, and thereby protect that relationship and the children who result from it. That is more necessary than in the past, when alternatives to sexual reproduction were not available. Redefining marriage to include same-sex couples would affect its cultural meaning and function and, in doing so, damage its ability and, thereby, society's capacity, to protect the inherently procreative relationship and the children who result from it, whether those children's future sexual orientation proves to be homosexual or heterosexual."
Essential to Somerville's argument is her position on the use of reproductive technologies, which she states as follows:
"I believe that a child has a right not to be created from the genetic patrimony of two men or two women, or by cloning, or from multiple genetic parents."
I agree with many of Somerville's concerns and arguments, with the important exception of her perspective on reproductive technologies. Indeed, my perspective on gay marriage arises in no small part from my high esteem for reproductive technologies as endowments from God, enabling a broader set of humanity to participate in the most intimate aspects of the creative process. I have many heterosexual friends who have benefited from these technologies, and expect this will become increasingly the case, both for heterosexuals and homosexuals. I agree with Somerville that procreation is worthy of deep respect and concern, both individually and communally, and that it does indeed merit special protections. However, I extend this perspective not only to natural procreation, but also to technological procreation. This is not to say that I think there should be no limits to technological procreation; to the contrary, there should certainly be limits, which we need to consider and debate vigorously. However, I see no justification for rejecting the ethical use of procreative technologies based simply on the creators' gender combinations.
In closing, I'll mention a related reflection on Mormon theology. Our scriptures imply that there are at least a couple different procreative methods, one of which is spiritual procreation. If, as I do, you associate things spiritual with information technology, it seems reasonable to consider our advances in reproductive technology to be advances toward spiritual procreation, in emulation of God.
First, I attended a session with PJ Manney on empathy and technology. The session began by focusing primarily on how to promote empathy through video games, by encouraging persons to take on roles that require action other than violence. Then we discussed differences between how film and books promote empathy; each medium offers different details (internal dialogue for books, visual stimuli for film), resulting in differing pathways to empathy. Unfortunately, I wasn't able to make any comments, but had hoped to discuss the importance of creating pathways for empathy. Many of us find it easy to have empathy for experiences that can be viewed, but find ideological empathy much more difficult. Because of that, it is important that those of us who understand competing ideologies work to formulate syncretizations which enable persons of the two sides to empathize with each other.
Second, I attended a session entitled "winning the meme wars", anticipating an attack on religion. As it turned out, although some participants implied attacks on religion, we instead turned the focus to winning by not fighting. This meant different things to different persons. Some took the perspective that we should avoid telling the general population about advances in tech. While that may be appropriate in some cases, I think the more practical path is to seek to create mutual understanding and emphasis on shared values. Divisions lead to disputes, which can become wars, or worse: in a world with increasingly empowered individuals, each of us could become a significant variable in the effort to avoid global catastrophe. We must educate and promote growth and communal respect. The alternatives appear deadly.
After the second session, I ran to the taxi, and I'm now sitting in the San Jose airport finishing up this post. Before closing, I'll add that I had an opportunity to share a hotel room with John Grigg, an MTA member, during my stay in San Jose. I learned a lot about his background, and found yet again a kindred spirit with deep love for life and an optimistic attitude toward our shared future. It's a pleasure to work with persons like John to advocate for a positive and mutually beneficial relation between religion, science, spirituality and technology.
One of the panel members observed that genetic engineering is almost as accessible as computer programming was in the early 80s, when teenagers were able to become involved inexpensively. Another panel member responded skeptically that there are serious risks associated with synthetic life, particularly when introduced to natural environments, and that more evidence of benefits should be gathered before proceeding further. On the subject of benefits, other panelists agreed there are risks, but that risk management techniques will come with time. The most immediate benefit of synthetic life will probably be biofuels. Benefits for cardiovascular health, Alzheimer's disease and diabetes may arise from products entering human trials soon. The panelists debated the degree of risk associated with use of artificially selected insects, and emphasized the importance of rigorous research and precaution.
The panelists were asked whether persons from non-biology backgrounds could make a difference in the biotech industry. They agreed that there are an increasing number of opportunities for engineers and infotech experts to become involved. However, the field is not yet ready for most persons that require education on the basics of biochemistry. As things become more automated, the wet lab may become less necessary, and access will expand to persons of more diverse backgrounds.
This is an area where I have a lot to learn. Thanks to good teachers, some of my favorite subjects in school were chemistry and biology. I enjoyed the lab experiments, unit conversion exercises and the artificial selection of fruit flies. However, the fields are broad and complex, presenting mind-boggling opportunity and risk.
He began by distinguishing between futurists and forecasters, defining the former as active advocates and the latter as passive observers. He observed that persons looking to the future have a tendency to compress all the exciting things together, but history shows long stretches of dullness between them. He encouraged questioning of all assumptions. For example, is tech actually converging, or rather is it diverging and producing greater complexity and diversity? Things may turn out other than we think.
He brought up the idea that change trends occur in S curves. There are persons who are surprised by the initial upward inflection point, and there are persons who are surprised by the downward inflection point on the other end of rapid change. He urged cherishing failure. Repeated failures may be the flat part of the S curve leading up to the inflection. He also recommended looking back twice as far as we want to predict forward. Rear view mirrors are great forecasting tools if we use them right. Don't look at specifics. Look for patterns. As examples, he pointed to S curves of processing in the 80s (personal computers), connectivity through lasers in the 90s (the World Wide Web and DVDs), and a presently emerging revolution in sensors (cameras and others coming together to enable robotic automation).
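The S-curve dynamic he described can be sketched with a logistic function (my own illustration, not from the talk): growth is nearly flat for a long stretch, inflects sharply upward in the middle, then saturates and flattens again. The point about cherishing failure falls out of the math: on the early flat part, equal-sized steps forward produce almost no visible change, even though the curve is building toward its inflection.

```python
import math

def logistic(t, midpoint=0.0, rate=1.0, ceiling=1.0):
    """Logistic S curve: slow start, rapid middle, saturating end."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, progress looks like failure: the curve barely moves.
early = logistic(-6) - logistic(-8)

# Around the inflection point (t = midpoint), growth is fastest.
middle = logistic(1) - logistic(-1)

# Late, the curve saturates and growth slows again.
late = logistic(8) - logistic(6)

# The same-sized time step yields dramatically different growth
# depending on where you stand on the curve.
assert early < middle and late < middle
```

The asymmetry is why both inflection points surprise people: an observer extrapolating from the flat early segment underestimates what's coming, and one extrapolating from the steep middle overestimates how long it lasts.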
To conclude, he commented that they who think the longest win. He asked the audience whether they took pride in thinking ahead much further than most persons. Many persons raised their hands. In response, he claimed this audience would be wrong to think itself the best long-term thinkers. Then, in what turned out to be a controversial matter, he claimed that religious fundamentalists are the best long-term thinkers. The race today is among those who would think farthest. Religious fundamentalists are winning the race. Persons like Jesus and Buddha set in motion long-term sustainable changes. Of course, the non-religious in the audience didn't like this idea.
First, I attended a session on balancing spirituality with technology. It had a lot of potential and several interesting persons attended, but the discussion turned too often to the discussion leader's marketing of a device intended to stimulate meditative states. One interesting matter I'll note was one person's suggestion that we need not attempt to persuade each other to various spiritual perspectives. I disagreed with him, and explained that our individual spiritual perspectives have far-reaching effects in our community and environment. Many of the challenges faced in the world today have arisen from lack of attention to the practical consequences of spiritual and religious world views.
Second, I attended a session on an open source artificial intelligence project: opencog (a google search will probably bring up the project web site). The discussion leader, Ben G, stated that the project is not making an attempt to reproduce human intelligence. Yet, of course, we're not able to make much sense of AI without constant reference to human intelligence. It is revealing that Ben off-handedly described AI as human-equivalent or greater intelligence. I see no justification for such a linear perspective on intelligence, and suspect Ben doesn't necessarily subscribe to it when speaking more carefully. Toward the end of the session, Ben demonstrated the code in action, hooked up to a virtual dog learning to fetch and dance in Second Life.
Third, I attended a session with James Hughes and Mike Latorra on the subject of ideas related to their Cyborg Buddha project. They discussed a hypothetical future in which labor is not required as widely as today, and individuals would have the opportunity to dedicate more time and resources to spiritual pursuits. Attendees made interesting comments about esthetic choices and hedonism. One of the thought-provoking questions came from George Dvorsky, who asked why we should not pursue perfect hedonism through neurotech. The problem, in my estimation, is defining hedonism and assessing the extent to which any person will ever be capable of pursuing desire fulfillment without allotting significant time and resources to risk mitigation.
While waiting for the session to begin, I had a conversation with Peter Milford of Parallel Rules. He told me that his interests are in practical near-term applications of the ideas on which the conference is focusing. When he learned that my interests are in the intersection of technology and spirituality, he kindly expressed his disinterest -- and probably assumed I'm nutty. In time, perhaps he'll begin to recognize the practical near-term consequences of the intersection between tech and spirituality. To the extent that he and others do not recognize the practical importance of these matters, we're in for far more division and turmoil than necessary. Cool gadgets will not suffice to fill the spiritual heart of humanity.
The AI panelists began by introducing themselves. A prevailing theme in their self-intros was that we live in a special time. That should ring bells for Mormons. It always amuses me when non-religious persons are unaware of their own religiously-analogous (i.e., religious) thinking patterns.
The panelists discussed the question of whether AI tech will result in increased disparity of wealth. For the most part, they expressed a great deal of optimism regarding the opportunities that will present themselves, but were more mixed regarding anticipated actual outcomes.
Next, the panelists expressed their AI tech advice to President-elect Obama. Predictably, they thought the government has been misdirecting funds and efforts, and should work to re-establish leadership in research and development, and education. One panelist mentioned that he'd rather see the bailout trillions go into AI, nano and life extension tech research than to corrupt bankers and incompetent auto manufacturers. I understand the feeling, but I didn't see many economists in the room.
The audience asked panelists whether AIs will have to be empowered politically to bring about the change we suppose them capable of. They mentioned interest in relying on human values, but also talked further about non-human AI coming to power. These sorts of discussions seem always to tend toward professions of trust in the externalized AI messiah. As I wrote about in a recent blog post, I'd like to see more emphasis on an internalized AI messiah: human enhancement. At least we're not advocating a passive posture.
The audience next asked how best to deal with negative portrayals of AI in popular media. The panel mentioned that we should look to provide positive presentations in the media. We'll also continue to see positive applications of the tech, which more and more persons will use from day to day. As this trend proceeds, fears will decrease. I'll add that an extraordinarily important issue to deal with in this area is that of syncretizing principles of technological change into dominant world cultures and value systems. While some atheists take comfort in the increasing secularization of the world, they should note that religion is going to be quite alive and influential for yet a long time to come, and it should not be ignored or snubbed.
The final question led panelists to discuss misconceptions related to AI. They focused on some persons' estimation that AI is not possible, and mentioned that this mental block is decreasing, though still encouraged by fear of what AI might mean for human nature.
Observing trends in information technology, some researchers conclude that artificial intelligence (AI) will eventually surpass the brightest human minds and take control of its own evolution. Assuming these researchers are correct, it is in our interest to ensure that we design AI to be friendly from the beginning.
Whether AI will surpass the brightest human minds depends, in my estimation, on whether we take into consideration the accelerating enhancement of human minds. As our information technology is becoming more powerful, it is also becoming more intimate. Arguably, a substantial portion of human intelligence is already based in our technology, carried in our pockets and decentralized into the Internet, yet essential to our identities. I expect AI will not surpass human minds, but rather gradually become indistinguishable from them, as human minds become increasingly distinguishable from their ancestors. However, I do think it's worth our time and effort to consider the hypothesis that AI will make a hard break from human minds and surpass them altogether, particularly because I would like to avoid that kind of future. Our posture toward AI should account for human enhancement, which is what computers are, at least for now.
By way of analogy, if AI expectations were prophecies (which they are for those of us who see religion everywhere), I would say that they too often advocate a relatively weak Messianic posture, emphasizing too much the externality of the Messiah. In the Judeo-Christian tradition, the Messiah brings salvation, whether from enemy tribes, from death and hell, or from whatever vexes us. Many wait for the Messiah, passively. Some work toward establishing a context that they suppose will induce the arrival of the Messiah. Others identify with the Messiah, thereby situating themselves psychologically to create the salvation they desire. This last posture, internalizing the Messiah, is the most empowering. Of course, power has its risks. On the one hand, some that identify with the Messiah are rightly considered lunatics. On the other hand, some do not internalize the Messiah at the expense of others, but rather consider it a common good to take full responsibility for our communal salvation. These persons are of the sort that make better worlds. Analogously, while AI expectations usually include a call to avoid a passive posture toward the beneficial power, they too often assume an externalizing posture. They too often lack the sacrament of human enhancement, through which we consume the beneficial power and thereby take on its identity, rather than only working to create a context for it.