Recently appointed Singularity Institute for Artificial Intelligence (SIAI) President Michael Vassar, a hardcore proponent of science and reason, emphasizes the importance of "human rationality" when discussing the future, making clear that SIAI is an "analytical think tank and research organization, not an advocacy group". Vassar says he's apprehensive about a "possible decrease in the quality of debate as the [Singularity] goes mainstream" and that he would find a public backlash against intelligent debate of a Singularity "odd".
Enjoy the candid and insightful interview.
FB: What are your main near-term goals at SIAI?
MV: Put on a 2009 summit and establish a regular schedule of summits, alternating coasts, with a consistent format.
Develop a body of technical and popular position papers and analysis that reflect our current views.
Develop software to help interested people explore the forecasting consequences of a range of assumptions about the future.
Organize, probably with the Future of Humanity Institute, an essay contest to identify novel global catastrophic risks deserving more serious analysis and to draw attention to the idea of rational treatment of catastrophic possibilities.
Reinvent Enlightenment values by building a better forum than currently exists for rational deliberation and cooperative analysis and decision making.
Most critically, as always, identify and train potential friendly AI researchers.
FB: Has the organization undergone any significant strategic or tactical shifts since you assumed the Executive Director position?
MV: Our efforts to develop a rigorous theory of Friendly Artificial Intelligence will continue, but our public outreach efforts will focus less narrowly on AI and more on the Singularity more generally and on promoting human rationality.
Futurist Thomas Frey of the DaVinci Institute has posted a thought-provoking avatar roadmap detailing an increasingly critical and symbiotic relationship between humans and these digital progeny of ours. Frey argues that this increasing reliance on avatar extensions will change our fundamental values, eventually leading to a great blurring of humans and avatars.
Frey: With each generation of avatar, they will become more life-like, growing in realism, pressing the limits of autonomy as we become more and more reliant on them for experiencing the world. The avatar will become an extension of ourselves. The pain that we feel is the same pain that they feel, and vice versa. Like symbiotic twins separated only by a dimension or two, we are destined to become one with our avatars.
Is that a fair frame and likely prediction, or are we already indistinguishable from our technology and environment? Are we destined to merge with our avatars? Are we already avatars generated by Gaia or the Great Simulator(s)?
With a pair of feature films due for release in 2009, Ray Kurzweil is poised to shotgun the Singularity mega-meme to the mainstream.
But how will the message and messenger be received? And what effect will Kurzweil's rising star have on associated memes such as accelerating change, transhumanism, extropianism, futurism, AGI and other less extreme Singularity definitions?
If recent Newsweek ("is this the next great leap in human evolution, or just one man's midlife crisis writ large?") and slanted io9 ("the famous futurist's meat brain has made some ludicrously inaccurate predictions") coverage is any indicator, the seeds of a Kurzweil backlash are beginning to sprout -- a social dynamic that probably also extends to technology in general.
Though I'm no proponent of Kurzweil's Strong Singularity school of thought, relegating it to a low-probability event, I do think the man has contributed a great deal to the study of accelerating change and the human condition. I find the aforementioned criticism, and especially the voluminous associated comment threads, superficial and incendiary rather than productive. And though I'm not all that surprised by the reaction, I'm a bit worried that I'm actually witnessing the number of Singularity haters rise, especially because that mentality is likely to extend to the clearly palpable and verifiable accelerating change occurring in many human-related domains.
Now, if you're going to criticize Kurzweil -- and I think more people should do just that -- it makes more sense to carefully take aim at the definition of the Singularity itself rather than at his, frankly, rather safe hardware and computing predictions. But that takes time, a commitment to simulating multiple futures, and careful consideration, which means there will be many millions of emotionally anti-tech critics eager to pan Kurzweil's brand of techno-utopianism and accelerating change rather than engage in rigorous debate.
Like I said, it's not surprising, just scary.
Hopefully the story will end more positively than, say, the tale of Giordano Bruno, advocate of heliocentrism, one of my all-time faves. But alas, if things do turn nasty and all apocalyptic, neo-luddite versus transhuman, then perhaps we'll need Skynet to save us from ourselves after all, thus making Kurzweil's Singularity a twisted self-fulfilling prophecy.
Say it won't be so, Ray. Some of us will believe you!
Wondering what all of the Alpha hype is about? Here's a dense 10-minute video snippet of the official Wolfram Alpha "computational knowledge engine" unveiling, presented by the mathematician himself, at Harvard's Berkman Center.
I found notable:
the label "computational knowledge engine" - it reinforces that we're moving from the information age to the knowledge age (and fairly quickly)
Alpha's ability to factor in the location of the user submitting the request into computation results
results that begin with a list of assumptions that essentially present your query back to you in more technical terms (an advanced "did you mean this?" feature), which makes a great deal of sense for machine data/knowledge - it's like having a conversation about science and establishing basic consensus before venturing into complex and potentially unrelated ideas
the program's seemingly robust ability to mix data from different sources to return logically related results
Conclusions: Upon launch, Wolfram Alpha will be a science researcher's dream if it can perform as effectively - for a wide range of queries - as it did in this demo. It'll also serve as a nice accelerative kick in the ass for Google. I can't wait to try this new quantification assistant.
Is IBM gearing up to compete with Wolfram Alpha in the computational search game? Maybe. Is IBM gearing up to take on the top minds on the popular TV game show Jeopardy? Definitely. Check out this video from Big Blue:
Developments such as this have got me thinking not just about the computational search over the horizon, but also about the rise of qualitative search that futurist Paul Saffo mysteriously alluded to in this MemeBox interview.
We've already seen thought-controlled avatars, so it comes as no surprise that robotics represents a new frontier for brain computer interfaces (BCIs). Still, the following video of a human controlling Honda's Asimo via BCI marks a profound socio-technological development, offering a glimpse into the future of work, entertainment and security:
Isn't it interesting that this didn't make its way through national media channels? Just a few years ago human-BCI-controlled robotics would have been perceived as revolutionary.
Astrophysicist Alan Boss believes NASA's Kepler Mission will turn up "hundreds of Earth-like planets", many of which will probably be "inhabited with something."
Considered a leader in the search for planets outside our solar system, Alan Boss says we are at a turning point in our search for extraterrestrial life. He expects we are on the verge of finding many different Earth-like planets across the universe, and he expects it will be common to find life on those planets. He shares his ideas for how the United States can be on the forefront of the next great discovery: life on another planet.
It's rare that a broadly disruptive, industry-shattering/accelerating technology sneaks up on you, much less on everyone else at the same time. But according to Dean Takahashi at VentureBeat, a Gaming as a Service (GaaS) company called OnLive appears poised to launch services that will enable much more robust applications (the current focus is on video games) to be streamed from the cloud in real-time.
The secret? A new form of robust digital compression that requires just one megabyte of additional software on the web client end.
For years, even decades, data compression has been a frustrating bottleneck for the development and diffusion of not only rich video games, but also more broadly important communication technologies such as virtual worlds (Second Life, Multiverse, VastPark), mirror worlds (Google Earth, OpenStreetMap) and high-definition streaming Web TV (YouTube HD, Hulu), just to name a few. A compression breakthrough of this magnitude (which Takahashi attributes to the discovery of smarter algorithms) is tantamount to throwing more broadband piping at the web and could result in 1) massive acceleration of VW, MW and Web TV adoption, and 2) increases in the resolution of these cloud-based systems.
In other words, it's a big freaking deal.
DISRUPTIVE POTENTIAL: The stated super-compression could/will quickly put a damper on industries such as thin-client web browser development, used video game sales, and non-rich virtual worlds. It could/will quickly embolden virtual video editing, online collaborative Photoshop, robust distance meetings/conferences/lectures, online video game sales (the main thrust of OnLive's efforts), graphically richer websites, and cloud computing efforts in general.
President Barack Obama's video/web overture to the Iranian people marks not only a strategic shift in U.S. policy toward the country, but also a fundamental change in tactics better-suited for an increasingly connected world.
Now let's see how Iranian leaders Mahmoud Ahmadinejad and the Ayatollah respond.
New cognitive research may help explain why human social systems prefer to push the envelope, creating critical "perfect storm" situations, instead of settling into equilibrium.
If the global social brain is really just a scaled-up version of the individual brain, which in turn can be viewed as an accelerator of existing bio-computational processes, then we should expect to uncover more and more parallels between individual and social cognition. One such candidate is the phenomenon called Self-Organized Criticality, a form of inherent "brinkmanship" routinely found in advancing systems, particularly as they approach phase transitions.
The Wikipedia entry on self-organized criticality offers a more robust definition and links.
A new U.K. study confirms that human brains do in fact rely on self-organized criticality for behaviors that may range from perception to action, reports World Science:
The researchers used brain imaging techniques to measure dynamic changes in the synchronization of activity between different regions of the functional network in the human brain. They also investigated the synchronization of activity in computational models, and found that the “dynamic profile” they had identified in the brain was exactly reflected in the models.
Computational networks showing these characteristics have also been shown to have the best memory and information-processing capacity, researchers say: critical systems can respond quickly and extensively to small changes in their inputs.
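For readers who want to see self-organized criticality in action, the classic Bak-Tang-Wiesenfeld sandpile is the textbook toy model (my own illustrative sketch, not code from the study): grains drop onto a grid, any cell holding four or more grains "topples" onto its neighbors, and topples can cascade into avalanches of wildly varying size. The system drives itself to the critical state without any tuning, which is the "brinkmanship" described above. All parameter values here are arbitrary choices for illustration.

```python
import random

def sandpile(size=20, drops=5000, threshold=4, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: a toy model of self-organized criticality.

    Returns the avalanche size (number of topples) triggered by each grain drop.
    """
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(drops):
        # Drop one grain on a random cell.
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        # Relax: topple every unstable cell until the grid is stable again.
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= threshold else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < threshold:
                continue  # already relaxed by an earlier topple
            grid[i][j] -= threshold
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:  # grains falling off the edge are lost
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= threshold:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = sandpile()
print("largest avalanche:", max(sizes))
print("fraction of drops causing no topple:", sizes.count(0) / len(sizes))
```

Run long enough, the distribution of avalanche sizes follows a rough power law: most drops do nothing, while a rare few reshuffle large swaths of the grid. That mix of quick local responses and occasional system-wide cascades is exactly the memory and information-processing advantage the researchers attribute to critical networks.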