Convergence 08 Saturday Unconference
15 November 2008 (updated 12 March 2019)
This afternoon, I’ve been bouncing around among unconference sessions. First, I attended a session on balancing spirituality with technology. It had a lot of potential, and several interesting people attended, but the discussion turned too often to the discussion leader’s marketing of a device intended to stimulate meditative states. One matter I’ll note was one person’s suggestion that we need not attempt to persuade each other toward various spiritual perspectives. I disagreed with him, and explained that our individual spiritual perspectives have far-reaching effects in our community and environment. Many of the challenges faced in the world today have arisen from lack of attention to the practical consequences of spiritual and religious world views.
Second, I attended a session on an open source artificial intelligence project: OpenCog (a Google search will probably bring up the project web site). The discussion leader, Ben G, stated that the project is not attempting to reproduce human intelligence. Yet, of course, we’re not able to make much sense of AI without constant reference to human intelligence. It is revealing that Ben off-handedly described AI as human-equivalent or greater intelligence. I see no justification for such a linear perspective on intelligence, and suspect Ben doesn’t necessarily subscribe to it when speaking more carefully. Toward the end of the session, Ben demonstrated the code in action, hooked up to a virtual dog learning to fetch and dance in Second Life.
Third, I attended a session with James Hughes and Mike Latorra on ideas related to their Cyborg Buddha project. They discussed a hypothetical future in which labor is not required as widely as today, and individuals would have the opportunity to dedicate more time and resources to spiritual pursuits. Attendees made interesting comments about aesthetic choices and hedonism. One of the thought-provoking questions came from George Dvorsky, who asked why we should not pursue perfect hedonism through neurotech. The problem, in my estimation, is defining hedonism and assessing the extent to which any person will ever be capable of pursuing desire fulfillment without allotting significant time and resources to risk mitigation.