Avatars and AI

May 29 2008 / by randalc / In association with Future Blogger.net
Category: Culture   Year: 2010   Rating: 10 Hot

Recently, Rensselaer Polytechnic Institute created an artificial intelligence program that runs inside the Second Life platform. The researchers are studying how the program interacts with real people through their avatars. The RPI students built the program to maneuver its avatar and to understand some fairly straightforward questions asked in English.
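
To make the idea concrete, here is a toy sketch (my own, not RPI's actual system) of the kind of logic such an agent might run: match a simple English question against canned patterns, answer in chat, and nudge the avatar around. The say_in_world and move_avatar calls are hypothetical placeholders for whatever in-world scripting hooks the real program uses.

    # Hypothetical sketch only; function names stand in for Second Life
    # scripting hooks and are assumptions, not the RPI implementation.
    import re

    ANSWERS = {
        r"\bwhat is your name\b": "My name is Eddie.",
        r"\bwho made you\b": "I was built by students at RPI.",
        r"\bcan you (walk|move)\b": "Sure, watch this.",
    }

    def say_in_world(text):             # placeholder for an in-world chat call
        print(f"[avatar says] {text}")

    def move_avatar(direction, steps):  # placeholder for an avatar movement call
        print(f"[avatar moves] {direction} x{steps}")

    def handle_question(question):
        q = question.lower()
        for pattern, reply in ANSWERS.items():
            if re.search(pattern, q):
                say_in_world(reply)
                if "walk" in q or "move" in q:
                    move_avatar("forward", 3)
                return
        say_in_world("I'm not sure I understand that yet.")

    handle_question("What is your name?")
    handle_question("Can you walk over there?")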

Operators of Second Life don’t seem concerned about synthetic agents lurking in their world. John Lester, Boston operations manager for Linden Lab, said the San Francisco-based company sees a fascinating opportunity for AI to evolve. “I think the real future for this is when people take these AI-controlled avatars and let them free in ‘Second Life,’” Lester said, “... let them randomly walk the grid.”

With AI characters loose in a grid of tens of thousands of active users, the possibilities for social experimentation are nearly limitless. Social scientists can examine certain behaviors, and even provoke them, through the AI interface. Most intriguing of all, if the AI can recognize and then smoothly translate languages, the program could create cultural bridges and even probe culture-specific behavioral tendencies.

AI Pioneer Peter Voss: Human Level AI in 5-10 Years

March 03 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: 2013   Rating: 8

For the better part of two decades Peter Voss has been hard at work developing what he hopes will be the world’s first functional Artificial General Intelligence (AGI). His company, Adaptive AI, believes that with the right amount of manpower this goal is well within reach, and far sooner than you might think possible.

“Personally, I would be surprised if it’s more than ten years before we have human level, or effective AGI, and I think it could be quite a bit less than that, as little as five years,” predicts Voss in his recent audio interview with Future Blogger.

Any such breakthrough would indeed be a game changer, transforming almost every existing industry. Voss is particularly excited about just this sort of cascade.

“AGI will allow us to accelerate nanotech development, medical research, that will allow us to deal much better with all sorts of problems, of course disease and aging, but also just reduce the cost of production of all sorts of goods and foods very dramatically and also help with environmental issues so there will be a snowballing effect started by AGI development,” he argues, then qualifies as any careful futurist should, “In terms of what will happen and in what year and what chain of events, I have no way of really putting any more numbers on that.

“Once we have machines that are as smart as humans and we can employ them to help us develop other technologies I think things will happen quite quickly. ... You can do a lot of simulations but ultimately they have to be tested in real humans and that takes time. So it’s very difficult to predict the interaction between those various dynamics.”

When asked if he sees this as an industry that can produce a trillion dollar company inside of 10 years, Voss’ unhesitating response is a simple, “Yes.”

Continue Reading

Norvig, Omohundro, Goertzel and Pell Say How They'd Advise Obama if Appointed U.S. CTO

November 15 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: General   Rating: 7 Hot

Live-blogging from Convergence 2008.

Moderator Jonas Lamis just asked the distinguished AI panel what they would advise the new Obama administration to do if, by chance, each were appointed national CTO.

Google’s Peter Norvig: First advice, “Don’t choose me.” (Audience laughs.) Most important advice is to do what the President-Elect is already doing. #1: Believe in reality. The next thing is to invest in R&D. It’s important to re-establish the United States as a leader there. We’ve slipped over the last 8 years or so in terms of funding research.

Steve Omohundro: Important to use tech to make better decisions in our society. This is a huge opportunity for aggregating the beliefs and desires of voters. Through semantic consensus we could better express nuances. The bailout is the perfect example – 99 to 1 against the bailout, yet it ended up passing. Morphing as we speak… Potential pathways as we move into the future – now a smattering of different orgs – better to have a country-wide analysis of this future pathway.

Ben Goertzel of SIAI:

Continue Reading

Exponential Human IQ Increase: Are We Living It?

March 06 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: General   Rating: 6

The Flynn Effect is a fascinating observation that average human IQ has been rising steadily since the invention of tests that measure it. It’s possible that it has been caused directly or indirectly by increased access to information, technology and human networks. If that’s the case, and the trend in human IQ is pegged to trends in these areas, then it’s also possible that we’re about to get a heck of a lot smarter in a very short span of time. Perhaps even exponentially smarter.

Ray Kurzweil has shown that technology is advancing at an exponential, or even double-exponential, rate. A Berkeley study and a report by IDC have both confirmed that the amount of information on Earth is growing exponentially. It is clear that advances in communication technology are facilitating an explosion in the rate of communication between people, thus increasing the value of the whole according to Metcalfe’s Network Law.
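
For a back-of-the-envelope feel for why these curves compound so quickly, here is a tiny illustration with made-up numbers: an information stock that doubles each period, and a network whose value grows roughly with the square of its user count per Metcalfe's Law (n(n-1)/2 possible pairwise connections).

    # Made-up numbers, purely to illustrate the shape of the two curves:
    # exponential growth of the information stock, and superlinear growth
    # of network value under Metcalfe's Law.

    def info_stock(initial_exabytes, doublings):
        """Exponential growth: the stock doubles once per period."""
        return initial_exabytes * 2 ** doublings

    def metcalfe_value(users):
        """Metcalfe's Law: value scales with the number of possible pairwise
        connections, n * (n - 1) / 2, i.e. roughly n squared."""
        return users * (users - 1) // 2

    for period in range(4):
        users = 1_000_000 * 10 ** period  # hypothetical user counts
        print(f"period {period}: info = {info_stock(5, period)} EB, "
              f"relative network value = {metcalfe_value(users):,}")

Notice that each tenfold jump in users yields roughly a hundredfold jump in network value, which is the intuition behind the "explosion" described above.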

It’s undeniable that these accelerating trends have had a profound impact on social behavior, in particular our ability to solve ever more complex problems. If you don’t believe me, simply take a look at how quickly a person or a group can locate information, bounce it off of others and output that as a rich white paper, business strategy or more advanced technology—then imagine how difficult that same task would have been minus the internet, huge bodies of amassed knowledge and an environment chock full of complex and inspirational solutions to diverse problem sets.

Human brains are not closed systems. They are constantly learning better ways to input, sort and output information (ultimately this manifests as culture). In order to increase their intelligence, they must encounter information, technology and interact with other humans. It has been shown that children raised sans society are beyond dysfunctional, and that humans who miss critical periods for learning things as simple as counting from 1 to 10 or certain ways of looking at time cannot regain those abilities once the developmental windows close. This indicates that there is a strong relationship between access to information and technology and human intelligence.

But just how strong is the link? Will humans get smarter faster, or is there a cut-off point after which technology and information systems speed off into a phase space where we cannot follow? Obviously, these are questions with far-reaching consequences. The answers will determine how we evolve, the likelihood of our survival and/or expansion, whether AI or IA is the future, and whether a singularity is possible, impossible or desirable.

The more critical the human-tech-info symbiosis, the more likely it is that the Flynn Effect will continue and translate into exponential growth of our own intelligence parallel to these other trends (auto-catalytically), rather than subsequently and as a by-product of them.

Continue Reading

Gadgets that Nag

September 05 2008 / by John Heylin / In association with Future Blogger.net
Category: Technology   Year: General   Rating: 6 Hot

To what extent are we going to let technology run our lives? I can understand wanting the Internet, a cellphone, even a bazillion-inch flat-screen TV. But this latest gadget to come onto the market, the iPosture, which screams at you whenever you sit in a hunched position, well, it’s just plain silly.
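
For the curious, here is a rough sketch of the sort of nagging logic a gadget like this might run, assuming it reads a tilt angle from an accelerometer; the thresholds and sensor details are my guesses, not the actual iPosture firmware.

    # Hypothetical posture-nag logic: calibrate an upright baseline, then beep
    # if the wearer stays slouched past a threshold for several seconds.
    # Thresholds and sensor interface are assumptions for illustration only.

    SLOUCH_THRESHOLD_DEG = 3.0   # degrees of forward lean beyond the baseline
    GRACE_PERIOD_SECONDS = 5     # how long slouching is tolerated before a nag

    def check_posture(readings, baseline_deg):
        """readings: list of (seconds_elapsed, tilt_deg) samples, one per second."""
        slouched_for = 0
        for _, tilt in readings:
            if tilt - baseline_deg > SLOUCH_THRESHOLD_DEG:
                slouched_for += 1
                if slouched_for >= GRACE_PERIOD_SECONDS:
                    print("BEEP! Sit up straight.")
                    slouched_for = 0
            else:
                slouched_for = 0

    # Simulated samples: a few seconds upright, then a sustained slouch.
    samples = [(t, 1.0) for t in range(5)] + [(t, 6.5) for t in range(5, 12)]
    check_posture(samples, baseline_deg=0.0)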

If you thought your parents nagged you pretty hard at the dinner table, imagine a device that watches your every move (“beep I saw you hide your spinach in the napkin, eat it or no dessert beep”) without the ability to judge when it’s overstepping its bounds. Scores of children would grow up hating both the device and their parents, wishing they had received more attention from them, and swearing not to raise their own kids the same way.

Sure, most people won’t buy these products (at least in the near future) since they seem so insane and unnatural, but what about those few who will? For example, parents who think their own parenting techniques are faulty may well wish for a family butler that can help teach their children proper manners. Just imagine if Calvin of Calvin and Hobbes had his own personal assistant, or had been forced to do his homework by an ever-watching guardian…

Continue Reading

Nova Spivack on the Future of the Semantic Web and Machine Intelligence

April 16 2008 / by Marisa Vitols / In association with Future Blogger.net
Category: Technology   Year: 2008   Rating: 5 Hot

A friend forwarded me this awesome short interview with Nova Spivack, founder of EarthWeb (1994) and Radar Networks (2003), which just launched the much-hyped app Twine. In it he discusses predictions for the coming year and for the longer term.

Spivack’s prognostications largely focus on widespread adoption of the semantic web. He believes the semantic web will enable the broader web’s evolution into one big database via linked metadata, and that Facebook is slowly becoming a search engine to compete with Google, while Google is becoming a social network to compete with Facebook.
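
If “linked metadata” sounds abstract, a toy example helps: the semantic web boils down to facts stored as subject-predicate-object triples that any site can publish and any program can query like one big database. The mini triple store below, with invented data, shows the idea; it is only an illustration, not how Twine itself works.

    # Toy triple store: facts as (subject, predicate, object), queried by
    # filling in any combination of the three fields. Data is invented.

    triples = [
        ("nova_spivack",   "founded",  "EarthWeb"),
        ("nova_spivack",   "founded",  "Radar Networks"),
        ("Radar Networks", "launched", "Twine"),
        ("Twine",          "type",     "semantic web application"),
    ]

    def query(subject=None, predicate=None, obj=None):
        """Return every triple matching the fields that were supplied."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    # "What did Nova Spivack found?"
    print(query(subject="nova_spivack", predicate="founded"))
    # "What is Twine?"
    print(query(subject="Twine", predicate="type"))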

In the longer term, by 2020, “[W]e will move toward an intelligent web where the web moves from just a knowledge base to a kind of global mind – an intelligent entity comprised of billions of pieces of software and billions of people working together to make some new form of intelligence that transcends human or machine intelligence on its own.”

Spivack also points out that he disagrees with Ray Kurzweil on the fundamental roles humans and machines will play in the coming decades.


Learning from the Future with Nova Spivack from Maarten on Vimeo.

(via RapidStage by Maarten Lens-FitzGerald and shout-out to David for forwarding me the awesome video!)

How Exactly Will Our System Get Smarter?

July 25 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: Beyond   Rating: 5

A favorite debate topic for many futurists, humanists, advanced defense theorists, sci-fi authors and Future Bloggers is the nature of future terrestrial intelligence increase. As change accelerates, how exactly will we and/or the system around us get smarter?

The most popular scenario by far is Artificial General Intelligence (AGI), i.e., AI that equals or surpasses human intelligence, probably because it is the most immediately relatable and because so much money is being poured into AGI research. In fact, some researchers are predicting a breakthrough in the field in just 5-10 years.

But there are a variety of other scenarios that could either outcompete this paradigm or conspire with it to accelerate intelligence in our system. These include human-based, alien-based, deeply systemic, or even exo-systemic possibilities.

Applying your particular brand of intelligence, which of the following do you think is the optimal path to intelligence increase in the acceleration era? (Survey at end of post)

AGI: Human-generated machine intelligence, such as in the films 2001: A Space Odyssey and A.I.

Individual Intelligence Amplification: Individual humans that grow vastly smarter due to hard, biological and/or soft cognitive upgrades, such as Bean in Ender’s Game.

Social Intelligence Amplification: A group or humanity as a whole collectively grows smarter, thus taking on the stewardship role for our Earth and species.

Biological Intelligence Amplification: One, several or all of the other species on Earth evolve into, or emerge as (whether aided or on their own), the foremost intelligence on the planet. This could be viewed as a Gaian awakening.

Alien Contact: Through efforts like SETI or those of the aliens themselves, we come into contact with some extra-terrestrial intelligence based in our universe that either stewards us or gives us a nice boost, a la the Vulcans in Star Trek, although this would likely be considerably more extreme.

Continue Reading

Kevin Kelly's Singularity Critique is Sound and Rooted in Systems Understanding

October 01 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Environment   Year: General   Rating: 1

The Singularity Frankenstein has been rearing its amorphous head of late, evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument rests on the following points:

1) A Strong-AI singularity is unlikely to emerge from a lab; Google and the web will get there first.

“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

I agree that powerful intelligence is far more likely to emerge as a property of the global brain and body, co-evolving with accelerating information growth, than in a lab.

More fundamentally, I think our system is consistently advancing its intelligence, making human intelligence non-static. Therefore the notion of Strong AI is an illusion because our basis for comparison 1) is constantly changing, and 2) is erroneously based on a simple assessment of the computational power of a single brain outside of environmental context, a finding backed by cognitive historian James Flynn.

So yes, Google may well mimic the human brain and out-compete other top-down or neural-net projects, but it won’t really matter because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)

2) The Singularity recedes as we develop new abilities.

Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”

This statement is spot-on. As we amplify our collective intelligence (IA) at an accelerating rate and develop new capabilities we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.

While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.

(For more on this, check out this Future Blogger post by Will.)

3) Imagining a sequence of scenarios doesn’t take into account system dynamics. Thinking machines must co-evolve with the environment in order for intelligence to be meaningful.

“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”

Continue Reading