Recently appointed Singularity Institute for Artificial Intelligence (SIAI) President Michael Vassar, a hardcore proponent of science and reason, emphasizes the importance of "human rationality" when discussing the future, making clear that SIAI is an "analytical think tank and research organization, not an advocacy group". Vassar says he's apprehensive about a "possible decrease in the quality of debate as the [Singularity] goes mainstream" and that he would find a public backlash against intelligent debate of a Singularity "odd".
Enjoy the candid and insightful interview.
FB: What are your main near-term goals at SIAI?
MV: Put on a 2009 summit and establish a regular schedule of summits, alternating coasts, with a consistent format.
Develop a body of technical and popular position papers and analysis that reflect our current views.
Develop software to help interested people explore the forecast consequences of a range of assumptions.
Organize, probably with the Future of Humanity Institute, an essay contest to identify novel global catastrophic risks deserving of more serious analysis and to draw attention to the idea of rational treatment of catastrophic possibilities.
Reinvent Enlightenment values by building a better forum than currently exists for rational deliberation and cooperative analysis and decision making.
Most critically, as always, identify and train potential friendly AI researchers.
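The software goal above — letting people explore the consequences of a range of assumptions — could, at its simplest, look something like the following sketch. Everything here (function names, growth rates, scenario labels) is invented for illustration; it is not SIAI's actual design.

```python
# Hypothetical sketch of an assumption-exploration tool: the user supplies
# named assumptions (annual growth rates) and the tool projects each
# scenario forward so the consequences can be compared side by side.

def project(start: float, annual_growth: float, years: int) -> float:
    """Compound a starting quantity forward under one growth assumption."""
    return start * (1 + annual_growth) ** years

def explore(start, scenarios, years):
    """Project every named scenario and return the resulting values."""
    return {name: project(start, rate, years) for name, rate in scenarios.items()}

if __name__ == "__main__":
    # E.g., a capability index under three assumed growth regimes.
    results = explore(start=1.0,
                      scenarios={"slow": 0.2, "moderate": 0.4, "fast": 0.6},
                      years=10)
    for name, value in sorted(results.items()):
        print(f"{name}: {value:.1f}x")
```

A real tool would of course let users compose many interacting assumptions, but even this toy version makes the point: small differences in assumed rates compound into very different futures.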
FB: Has the organization undergone any significant strategic or tactical shifts since you assumed the Executive Director position?
MV: Our efforts to develop a rigorous theory of Friendly Artificial Intelligence will continue, but our public outreach efforts will focus less narrowly on AI and more on the Singularity more generally and on promoting human rationality.
Futurist Thomas Frey of the DaVinci Institute has posted a thought-provoking avatar roadmap detailing an increasingly critical and symbiotic relationship between man and this progeny of ours. Frey argues that this increasing reliance on avatar extensions will change our fundamental values, eventually leading to a great blur of humans and avatars.
Frey: With each generation of avatar, they will become more life-like, growing in realism, pressing the limits of autonomy as we become more and more reliant on them for experiencing the world. The avatar will become an extension of ourselves. The pain that we feel is the same pain that they feel, and vice versa. Like symbiotic twins separated only by a dimension or two, we are destined to become one with our avatars.
Is that a fair frame and likely prediction, or are we already indistinguishable from our technology and environment? Are we destined to merge with our avatars? Are we already avatars generated by Gaia or the Great Simulator(s)?
With a pair of feature films due for release in 2009, Ray Kurzweil is poised to shotgun the Singularity mega-meme to the mainstream.
But how will the message and messenger be received? And what effect will Kurzweil's rising star have on associated memes such as accelerating change, transhumanism, extropianism, futurism, AGI and other less extreme Singularity definitions?
If recent Newsweek ("is this the next great leap in human evolution, or just one man's midlife crisis writ large?") and slanted io9 ("the famous futurist's meat brain has made some ludicrously inaccurate predictions") coverage is any indicator, the seeds of a Kurzweil backlash are beginning to sprout -- a social dynamic that probably also extends to technology in general.
Though I'm no proponent of Kurzweil's Strong Singularity school of thought, relegating it to a low-probability event, I do think the man has contributed a great deal to the study of accelerating change and the human condition. I find the aforementioned criticism, and especially the voluminous associated comment threads, superficial and incendiary rather than productive. And though I'm not all that surprised by the reaction, I'm a bit worried now that I'm actually witnessing the number of Singularity haters rise, especially because that mentality is likely to extend to the clearly palpable and verifiable accelerating change occurring in many human-related domains.
Now, if you're going to criticize Kurzweil -- and I think more people should do just that -- it makes more sense to carefully take a go at the definition of the Singularity itself rather than his, frankly, rather safe hardware and computing predictions. But that takes time, commitment to simulating multiple futures, and careful consideration, which means there will be many millions of emotionally anti-tech people eager to pan Kurzweil's brand of techno-utopianism and accelerating change rather than engage in rigorous debate.
Like I said, it's not surprising, just scary.
Hopefully the story will end more positively than, say, the tale of Giordano Bruno, advocate of heliocentrism, one of my all-time faves. But alas, if things do turn nasty and all apocalyptic, neo-luddite versus transhuman, then perhaps we'll need Skynet to save us from ourselves after all, thus making Kurzweil's Singularity a twisted self-fulfilling prophecy.
Say it won't be so, Ray. Some of us will believe you!
Edging out the forthcoming Singularity Movie (not to be confused with this web version), here comes Transcendent Man, a new documentary film that portrays Ray Kurzweil and his vision of a, well, transcendent future for mankind. The film appears to be packed with star power and will debut at the Tribeca Film Festival in NYC this April. The trailer below indicates 1) a focus on universal human transcension (a positive expansion of human perspective, imho), and 2) an attempt at objective framing by allowing in some critical voices, though none of the heavy hitters; the critics included appear to be mostly straw men. While I am optimistic the film will represent a socially necessary forward push of philosophical futurism, many futurists and I will ultimately judge this work on its analytical and objective qualities. That said, I'm hoping it delivers and eagerly await its broader release (probably via a cable network).
Some choice excerpts from the producer's press release:
It's part of human nature to label, classify, and quantify the world around us. We feel empowered when we're able to create structure and meaning out of our surroundings. Maps have been used for thousands of years to that end, enabling us to plot a course, make informed decisions about paths to take, and decide which trajectory will give us the desired results. In today's modern culture, digital media has taken mapping to a whole new level, giving us the ability to visualize our world in 3D, and on a global scale.
So what do maps have to do with social change?
Potentially, everything. A map is a tool, and historically those that have the best maps win. Several digital and social media tools are in the process of converging to create unprecedented platforms for sharing information in real-time. Whereas software like Google Earth allowed us to visualize on a macro scale, these new tools map information on local levels. GPS software, location based tracking, souveillance, and geotagging are coming together to produce information-rich maps that can be visualized in both space and time. Powered by social media, a space is being created where real-time maps can be used to empower communities to connect and collaborate instantaneously.
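The core mechanism described above — geotagged, timestamped observations queried by a space-time window — can be illustrated with a minimal sketch. The data model, field names, and bounding box here are invented for the example, not taken from any of the tools mentioned.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One geotagged, timestamped report (e.g., a photo or status update)."""
    lat: float
    lon: float
    timestamp: float  # seconds since epoch
    note: str

def in_window(obs, lat_range, lon_range, t_range):
    """True if an observation falls inside a space-time window."""
    return (lat_range[0] <= obs.lat <= lat_range[1]
            and lon_range[0] <= obs.lon <= lon_range[1]
            and t_range[0] <= obs.timestamp <= t_range[1])

def query(observations, lat_range, lon_range, t_range):
    """Return the observations visible in one map view at one time slice."""
    return [o for o in observations if in_window(o, lat_range, lon_range, t_range)]
```

A real-time community map is, at bottom, this query run continuously as new observations stream in, with the time window sliding along "now."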
Google's announcement that they are now openly beta-testing "interest-based advertising" confirms that the near-term future of web advertising will involve tapping into your behavior and interest graph.
To date, we have shown ads based mainly on what your interests are at a specific moment. So if you search for [digital camera] on Google, you'll get ads related to digital cameras. ... We think we can make online advertising even more relevant and useful by using additional information about the websites people visit. Today we are launching "interest-based" advertising as a beta test on our partner sites and on YouTube. These ads will associate categories of interest — say sports, gardening, cars, pets — with your browser, based on the types of sites you visit and the pages you view. We may then use those interest categories to show you more relevant text and display ads.
There is no doubt that this will make for a more interesting and valuable advertising experience, while also boosting Google's bottom line by cutting out advertising inefficiencies. It is also clear that allowing Google to pair your behavioral data with your ad click data will open up a new frontier of behavioral data mining that will further fuel the Google system and lead to additional advances in search, understanding online behavioral modes, and advertising strategies.
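The mechanism Google describes — associating interest categories with a browser based on the pages viewed, then serving ads against the strongest categories — can be sketched in a few lines. The category names and the "top N" selection are assumptions for illustration; Google's actual system is far more involved.

```python
from collections import Counter

def update_profile(profile: Counter, page_categories) -> None:
    """Each page view adds its categories to the browser's interest counts."""
    profile.update(page_categories)

def top_interests(profile: Counter, n: int = 3):
    """The most frequently seen categories drive ad selection."""
    return [cat for cat, _ in profile.most_common(n)]

if __name__ == "__main__":
    profile = Counter()
    update_profile(profile, ["sports", "cars"])   # visit to a sports/auto site
    update_profile(profile, ["sports", "pets"])   # visit to a sports/pets page
    print(top_interests(profile, 2))
```

Note that even this toy version makes the privacy tradeoff concrete: the profile is a running behavioral summary attached to the browser, not to any single search.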
Of course, the inexorable move toward integrating personal data (Facebook and Twitter are hard at work on similar initiatives and will be the next to jump into the data+search game; credit card, shopping club, and survey companies have been doing this for years) into one big-ass socio-behavioral graph pushes to the forefront a host of privacy, transparency, data control, and general social issues/questions that have been mustering force.
Juan Enriquez's recent presentation at TED juxtaposes the accelerating world financial crisis against the backdrop of the longer-term, more profound changes in robotics, biology and genetics. For the former, he suggests that we work longer before receiving Social Security and not get so tied up in the current morass that we lose track of the incredible advances in the latter. With regard to these he says we are beginning to evolve into a new species - "Homo Evolutis - Hominids that take direct and deliberate control over the evolution of their species...and others." This is not a new meme, though it seems to be gaining traction and is popping up more frequently these days. Our ability to manipulate and integrate technology into our very beings will no doubt be one of the hot-button issues of the next decade.
Citing a lack of social focus as a fundamental problem, I recently joined the ranks of those critiquing the tentative Singularity University (SU) curriculum. I found (and still find) the proposed courses too hard-tech centric, a critique I often level at singularitarians and transhumanists, who tend to project their current selves into a post-Singularity future, thus impairing the visioning of how we get there.
At the same time, I want to clarify that though I do agree with the crux of Jamais Cascio's argument that, "A useful Singularity University ... would be one that dove deeply into the nature of disruption, how society and technology co-evolve, and how we deal with unintended and unanticipated results of our choices," I believe his suggested curriculum goes too far in that it does amount to a "social studies/liberal arts crash course with a future twist" as Brian Wang pointed out in the discussion thread. It's not the sort of thing that will appeal to economic movers and shakers.
Still, I strongly disagree with Wang's assertion that, "The politics, ethics and social matters do have their place but as part of a TED conference or a conference specifically on the risks and issues. Trying to force feed it in this kind of program will not work." My issue is that I believe politics, ethics and social matters to be part of convergent acceleration.
The emphasis needs to be on the manner in which all of these technologies, trends, and issues fit together. (Please follow below the fold for Proposed Curriculum.)
Since the recent appearance by Jeff, Garry and Alvis on The Speculist Blog Talk Radio program (click on Speculist meets MemeBox), I thought they and other Future Bloggers might find the discussion below, stemming from a subsequent program, of interest.
Phil Bowermaster has a post up at The Speculist examining the progression of human society from its present structure to one more closely tracking the various expectations stemming from the concept of The Singularity:
[The following is an expanded version of an e-mail I sent to Stephen in response to some reflections he had on our most recent FastForward Radio -- that show with guest Joseph Jackson discussing the possibility of a post-scarcity world. I think Stephen was going to post some additional thoughts, too -- to which I would have added comments -- but time's up!] ...
My issue is more practical. By what means could we possibly get to the kind of society he's describing? The assumption seems to be that it would be the federal government (or the Earth government or -- my fav -- the Committee of Robot Overlords) doing the distributing. But we don't have a working model of how a government can guarantee the material welfare of its population without ripping its economy to shreds and putting individual rights on the back burner. That doesn't mean it can't happen, but Joseph doesn't have a model of how we would get there, or at least he didn't articulate one Wednesday night.
I sympathise with Phil's dilemma; unfortunately, Mr. Jackson's lack of specific insight isn't unique to him. Nobody really knows how we get to "there" because we still haven't articulated, with any degree of engineering specificity, the starting point from which the needed change(s) would progress. It's all well and good to simply proclaim the need for a systemic advancement, but what specific mechanism achieves that to actual advantage - and to whom? It seems a bit solipsistic perhaps, but market pressures actually are the least disruptive mechanism for stimulating that process. This doesn't make for speedy adoption, of course, but it does assure widespread acceptance of the process within the production industries generally, once the never-ending search for competitive advantage resorts to such comparatively radical technological innovation. Until business profitability (with its concomitant influence on tax collections) forces executives and governmental legislators to commit to some technology, there will remain resistance to doing so. Despite the potential for individual developments to alter the current production structure and economy, the likelihood of such a development actually forcing early change is slight for a variety of reasons - only some of them technological in nature.
The now-publicized curriculum of Ray Kurzweil's newly launched Singularity University (SU), a very necessary institution that aims to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges", yet again reveals what I have come to call the Transhumanist Ego Bias (TEB), which results in the Hard-Tech Attribution Error (HTAE) that Jamais Cascio so eloquently describes in his Flunking Out SU critique.
Transhumanist Ego Bias: The TEB is a tendency among transhumanists to force their objective vision of the future to fit with their subjective expectation of the future. Many of the futurists and outright transhumanists that I have come to know and respect over the years suffer from this. (I too came down with it for a spell when I first encountered the awesome power of Moore's Law and other hard-tech diffusion curves.) It's as if they 1) expect the future to create a magical utopia into which they project their unchanged present-day personalities, 2) can't or don't want to credit the dumb masses (their detractors) with the ability to perform amazing operations (social computation) critical to acceleration, and/or 3) are so focused on the post-human age / life-extending digitization that they fail to adequately consider what it will take to get there.
Hard-Tech Attribution Error: It's no accident that brainiac, hardware-focused, early-adopter types who formulated their core outlook prior to the explosion of social media structures like Facebook, Wikipedia and Digg tend to focus on the "hard" sciences in lieu of recently blooming areas such as group intelligence, emotional intelligence, coordination, and communication. The social side of the equation is not as obvious to those that haven't studied it closely, lived it or worked in fields that rely on social networks to make a living. The result is that the social component of acceleration (despite a few courtesy nods to Intelligence Amplification [IA] over the years) is seriously undervalued as a driver.
The Singularity University, which our own Alvis Brigis got an early scoop on, was made official today. The venture has the support of Google, NASA and an All-Star team of the singularity cognoscenti. The announcement received widespread coverage in the media from the likes of Businessweek, AP and Forbes, which demonstrates just how far this meme has come over the years.
I'll never forget a great night owl session at the first Accelerating Change Conference held by John Smart's Accelerating Studies Foundation in 2003 with Ray Kurzweil holding court and about 20 of his most ardent fans (many of whose works I had read) in attendance. Eliezer Yudkowsky, Ben Goertzel, John Smart et al were listening in earnest to what Ray had to say and it was pretty cool. I heard sometime later that it was also a treat for Ray to have been in such an intimate setting with such a knowledgeable and passionate crew.