Think you’re immune to Google Search? A new effort by the company promises to unearth your embarrassing elementary-school photos, achievements and other data, then incorporate them into the Google brain.
The Retro-Active Quantification Industry, which I believe will grow to a multi-billion-dollar valuation by 2015, made a big leap forward this week with the release of Google’s News Archive Search.
Many years in the works, the new service/feature allows users to do exactly what it says – search a huge body of archived small-town newspapers. These have been scanned into Google’s system, converted from visual to text data using the company’s character-recognition technology (note: they’re also working on a similar but more robust system that will mine text data – t-shirts, street signs, house numbers, etc. – from photographs), and then indexed using Google’s world-famous search.
Best of all, Google allows you to view the original scanned images and “browse through them exactly as they were printed—photographs, headlines, articles, advertisements and all”, much like a microfiche in a library basement (remember those?).
Erick Schonfeld, asking Is Keyword Search About to Hit its Breaking Point?, discusses Spivack’s view of the future of the web. According to Spivack, it lies in ever-more-refined search technologies such as semantic search, natural language search, and artificial intelligence. A quote:
Keyword search engines return haystacks, but what we really are looking for are the needles. The problem with a keyword approach such as Google’s is that only highly cited pages make it into the top results. You get a huge pile of results, but the page you want—the “needle” you are looking for—may not be highly cited by other pages and so it does not appear on the first page. This is because keyword search engines don’t understand your question, they just find pages that match the words in your question.
Spivack wants to “do for data what the Web did for documents”
and develop a standard, uniform system for semantic metadata. It’s
the classic “dumb software, smart data” idea. Tagging works to a
degree, but it’s neither uniform nor standard — the same tag can
mean two different things for two different people, and two
different tags can mean the same thing.
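The tagging problem is easy to see in miniature. Here is a toy sketch (file names and labels invented for illustration) of how free-form tags collide where typed statements do not:

```python
# Free-form tags: the same string can mean two different things.
photos = [
    {"file": "img1.jpg", "tags": ["jaguar"]},  # the animal
    {"file": "img2.jpg", "tags": ["jaguar"]},  # the car
]
tagged = [p["file"] for p in photos if "jaguar" in p["tags"]]
# Both photos match the tag, even though only one is what we wanted.

# Semantic metadata replaces bare strings with typed statements
# (subject, predicate, object), so the two senses stay distinct.
triples = [
    ("img1.jpg", "depicts", "Jaguar (animal)"),
    ("img2.jpg", "depicts", "Jaguar (car)"),
]

def find(triples, predicate, obj):
    """Return subjects whose typed statements match exactly."""
    return [s for s, p, o in triples if p == predicate and o == obj]

print(tagged)                                       # both files
print(find(triples, "depicts", "Jaguar (animal)"))  # only the animal photo
```

The Semantic Web’s RDF triples follow exactly this subject-predicate-object shape, with URIs standing in for the plain strings above.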
That said, the premise underpinning Spivack’s whole argument is that search is the correct interface when faced with a world of exponentially increasing information. His version of the future says, “Keyword search will become increasingly inefficient, and the solution is to develop semantically aware systems that search based on meaning, rather than content.”
An 8.1-magnitude quake rocks the San Francisco Bay Area. The San Francisco side of the Bay Bridge partially collapses, taking some cars returning to the East Bay after a night at the bars into the waters below. The new Oakland span, finished less than a year before, weathers the quake with only minor structural damage. The buildings in San Francisco don’t fare as well.
The only thing Harrison Thomas remembered was the walls shaking just before the floor of his apartment suddenly disappeared.
Responders make a quick survey of the wreckage and deploy snake-like robots to search for survivors. After twelve minutes, Harrison Thomas is found wedged between the second and third floors. A piece of wood has speared his leg, pinning him in place.
The crew at the scene uses the robot’s diamond-edged belt saw to carefully cut through the wood in order to aid in his removal. A doctor from St. Louis, on call since the disaster, views the proceedings from his local hospital. Seeing Harrison’s body, he determines that surgery must be performed before the rescue crews reach him in order to save his leg.
Morphine is injected into his leg in preparation for the surgery. UV sterilized tools, located inside the snake, are manipulated over a secure wireless connection to repair the neural and vascular damage done to his leg. Hours later, crews finally unearth Harrison and take him to a mobile hospital set up in a warehouse at Pier 5.
Without this technology, he would have lost his leg, possibly even died.
A new consumer-ready system called ViPR, by Evolution Robotics, lets machines – from your cell phone to robots – recognize objects. Already, this software permits consumers to snap photographs, then search for image matches, returning relevant image and text results. In other words, you can take photos of product packaging, movie art, billboards, etc., and instantly get more information related to the image you have just captured.
Take a look at this demo to see for yourself:
Eventually, such systems will be essential for robots and other machines that need either to navigate a complex environment or to capture and index information.
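ViPR’s actual recognition algorithms are proprietary, but the flavor of image matching can be sketched with a much cruder technique: a perceptual “average hash,” shown here on made-up grayscale grids. Similar images hash to nearby bit patterns, so a snapped photo can be matched against an index by counting differing bits.

```python
def average_hash(pixels):
    """Hash a grayscale image (2D list of 0-255 values, already
    downscaled to a small grid): 1 where a pixel is brighter than
    the image's mean brightness, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two shots of the same (invented) product label, one unrelated image.
label_a = [[200, 200, 10, 10],
           [200, 200, 10, 10],
           [10, 10, 200, 200],
           [10, 10, 200, 200]]
label_b = [[190, 205, 20, 5],      # same label, slight lighting change
           [198, 201, 15, 12],
           [12, 8, 195, 210],
           [5, 14, 205, 199]]
other   = [[10, 200, 10, 200],
           [200, 10, 200, 10],
           [10, 200, 10, 200],
           [200, 10, 200, 10]]

print(hamming(average_hash(label_a), average_hash(label_b)))  # small distance
print(hamming(average_hash(label_a), average_hash(other)))    # large distance
```

Real systems like ViPR use far more robust local-feature methods that survive rotation, scale and occlusion, but the pipeline shape – fingerprint the query image, find the nearest fingerprint in an index – is the same.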
In the near term it will be interesting to see how software like
ViPR will function in concert with the new image barcodes being
used on billboards in Japan and
in newspapers by Google. Once optimized, such systems will
undoubtedly lead to consumer efficiencies and new behaviors.
Google shed nearly 5% of its market value yesterday when Microsoft announced that it will begin offering rebates to consumers who use its Live Search to discover and purchase products. The action marks the beginning of a new phase in the online battle for our attention, one which will gradually return more and more value to the user.
It is significant and a bit surprising that Microsoft, a company known for squeezing every last bit of value out of its dominant position in operating systems, is leading the charge to return capital to its users – and not Google, which is using a very similar tactic vs. Wikipedia by creating a competitor, Knol, that returns ad revenue to contributors. Though I’m sure Google has similar options readily available (having so much familiarity with revenue splitting via its AdSense program and development of Knol), this goes to show the company is confident in its ongoing development of search and content to react to Microsoft’s moves and let the market do the talking.
The Main Takeaway: As the value of human attention allocation continues to rise and more competition essentially commoditizes current web applications, we can expect that companies will be forced to either 1) return value directly through revenue share, 2) return value through a superior product and/or network, or 3) a combination of 1 and 2. We should expect these trends to transform our web experience over the coming years as search companies (Google, Microsoft, Yahoo, fledglings), Semantic Web companies (Twine, AdaptiveBlue), social media (Digg, Reddit, StumbleUpon), social networks (Facebook, MySpace, LinkedIn), prediction markets, social web browsers (Medium, Flock), etc. all try to garner human attention.
Microsoft being forced into the value-added game is a strong indication that the rise in value of attention allocation is quite real.
Update: Some thorough and spot-on analysis of the situation by Michael Arrington.
This past June, Google-owned YouTube launched a new way to search for political videos on its YouChoose page:
Using speech recognition technology, the new function allows users to search for videos based on keywords that are spoken in the video. The resulting videos include yellow markers on the play bar to indicate where the keyword is uttered, inviting the user to jump to that spot in the video. And if the user mouses over the highlighted area, a small overlay pops up with the phrase that includes the keyword, to provide some context.
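Once speech recognition has produced a transcript with per-word timestamps, placing those markers is straightforward. A minimal sketch (invented transcript data; not YouTube’s actual pipeline):

```python
def keyword_markers(transcript, keyword, duration):
    """Given a transcript of (word, start_time_seconds) pairs and the
    video's total duration, return each spot where the keyword is
    spoken as (fractional position along the play bar, context snippet
    of surrounding words for the mouse-over overlay)."""
    hits = []
    for i, (word, t) in enumerate(transcript):
        if word.lower() == keyword.lower():
            context = " ".join(w for w, _ in transcript[max(0, i - 2):i + 3])
            hits.append((round(t / duration, 3), context))
    return hits

# A made-up 20-second clip in which "taxes" is spoken twice.
transcript = [("we", 1.0), ("will", 1.4), ("cut", 1.8), ("taxes", 2.2),
              ("and", 12.0), ("raise", 12.4), ("taxes", 12.8), ("never", 13.2)]
print(keyword_markers(transcript, "taxes", duration=20.0))
```

Each fractional position maps directly to a pixel offset on the play bar, and the context snippet is what the overlay shows on mouse-over.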
Human computation, the basis of which is discovering what tasks humans can do to make computers smarter, may someday be responsible for making computers not only smarter, but significantly smarter than humans.
Human computation has many applications. For example, computers aren’t very good at identifying what appears in an image, but humans are. To make online image searches more accurate, von Ahn developed the ESP game, which led to the creation of Google’s Image Labeler and, finally, a compilation of five different games: Games With A Purpose (GWAP.com).
The model of game play works well. The games are fun, foster bonds and competition, and are free – qualities that have attracted large numbers of players, not only making image searches more accurate but also bringing computers closer to thinking like humans.
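The ESP game’s core mechanic is simple: two players who can’t communicate tag the same image, and a label is trusted only when both produce it (a “taboo” list blocks labels the system already knows, forcing fresh ones). A minimal sketch, with made-up labels:

```python
def esp_round(labels_a, labels_b, taboo=()):
    """One round of an ESP-style labeling game. Two players independently
    tag the same image; the first label player A types that player B also
    typed (and that isn't already a known, taboo label) becomes a new
    trusted label for the image. Returns None if the players never agree."""
    seen_b = {label.lower() for label in labels_b}
    for label in labels_a:
        if label.lower() in seen_b and label.lower() not in taboo:
            return label.lower()
    return None

# Both players tag a photo of a dog on a beach; "dog" is already known.
print(esp_round(["dog", "beach", "sand"], ["Beach", "dog", "ball"],
                taboo=("dog",)))
```

Because agreement between strangers is hard to fake, each matched label is strong evidence that the word really describes the image – which is exactly the training data image search needs.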
The question is, when will all our game playing lead to a smarter computer that no longer needs our help?
In June 2006, von Ahn was invited to the Google campus to give a TechTalk lecture on human computation and brought up some interesting points about the bond and tension between humans and machines:
At one point von Ahn jokes that the interactions he’s created through GWAP could lead to a world similar to the one depicted in The Matrix; that is, one in which machines rule the universe and generate power from human brains.
Although his speculation appeared light-hearted, when I ask von Ahn what he thinks now, he asserts: “I completely believe computers will become every bit as intelligent as humans, possibly even more intelligent. I don’t see why not: the brain is a machine, we just don’t understand how it works yet.”
This baby allows you to navigate all of your content, both locally stored and cloud based, with a 3D immersive application. It also includes a recommendation app that suggests related content (natch).
Visual search and browsing applications are starting to come fast and furious as the days of static, 2D, text-based interfaces as the sole option fade in the rearview mirror. The interface revolution is afoot. Should be a fun ride.
The future is now for see-through technology. Airports around the world are slowly switching to a new type of x-ray machine which allows them to see underneath your clothes. Yes, underneath.
EU and American civil liberty groups are already fighting the implementation at airports in the US and EU for fear that use of the x-ray scanner will become commonplace rather than reserved for special situations. We wouldn’t strip-search everyone who went through the airport, so why do it through technology?
While we all thought the x-ray in Total Recall was pretty awesome, none of us want to experience that level of radiation on a daily basis (plus, I’d rather let people see my skeleton instead of me naked).
Wondering what all of the Alpha hype is about? Here's a dense 10-minute video snippet of the official Wolfram Alpha "computational knowledge engine" unveiling, presented by the mathematician himself, at Harvard's Berkman Center.
I found notable:
the label "computational knowledge engine" - reinforces that we're moving from the information age to the knowledge age (and fairly quickly)
Alpha's ability to factor in the location of the user submitting the request into computation results
results that begin with a list of assumptions that essentially present your query back to you in more technical terms (an advanced "did you mean this?" feature) - this seems to make a great deal of sense when dealing with machine data/knowledge; it's like having a conversation about science and establishing basic consensus before venturing into complex and potentially unrelated ideas
the program's seemingly robust ability to mix data from different sources to return logically related results
Conclusions: Upon launch, Wolfram Alpha will be a science researcher's dream if it can perform as effectively - for a wide range of queries - as it did in this demo. It'll also serve as a nice accelerative kick in the ass for Google. I can't wait to try this new quantification assistant.