Most of us who grew up in the US (and quite probably many outside it) know the "Magic 8 Ball." Ask the ball a question, shake it up for a moment, then flip it over: through the window you'll see the mystical answer to your question.
Amusing, but not terribly satisfying. It gets far more interesting when the ball asks you the questions. 20Q plays the classic "twenty questions" game: a person thinks of a specific thing, and the sphere has 20 questions in which to narrow down the possibilities; the player answers with "yes," "no," "sometimes," "rarely" and a few other relatively ambivalent responses. Nearly every time -- about 8 times out of 10 -- the ball arrives at the right answer before its 20 questions are up.
Impressive, to be sure, and a fun example of the increasing sophistication of artificial intelligence software. 20Q uses a neural network with a million synaptic connections. It's based on 20Q.net, an online twenty questions game; unlike the toy, the online version continues to learn with each new player. After more than 16 million queries, the online version is startlingly sharp. While nobody would assert that 20Q is at all sentient, its ability to ferret out the right answer is uncanny. According to an entry on Kevin Kelly's "Cool Tools" list, the online version has 10 million synaptic associations and knows about well over 10,000 objects. At this point, the major factor limiting its continued improvement is the number of players who don't speak much English.
The 20Q project, along with ontological knowledge bases like Cyc, demonstrates the importance of broad knowledge about the world for artificial intelligence. The massive database behind the Cyc project -- over 300,000 assertions about nearly 50,000 concepts -- makes it possible for Cyc to display a "common sense" understanding of potentially ambiguous situations and natural language phrases. 20Q uses a non-hierarchical neural network while Cyc uses a structured, hierarchical knowledge tree, but both rest on the same underlying philosophy: more information, with more connections, gives better results. I'd be interested to see how well a Cyc routine would do against 20Q.
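To make that philosophy concrete, here's a minimal sketch of how a 20Q-style guesser can work, assuming nothing about the real implementation: each object carries a weight for each question, a player's answers score every object against those weights, and each finished game nudges the true object's weights toward the answers given. Everything here (class name, answer values, learning rate) is invented for illustration.

```python
# A hypothetical sketch of the 20Q-style approach: a matrix of weighted
# associations between objects and questions. All names and values are
# invented; the real 20Q network (a million connections, adaptive
# question selection) is far more sophisticated.

ANSWER_VALUES = {"yes": 1.0, "sometimes": 0.5, "rarely": -0.5, "no": -1.0}

class TwentyQuestions:
    def __init__(self, objects, questions):
        self.objects = list(objects)        # e.g. ["parrot", "brick"]
        self.questions = list(questions)    # e.g. ["Is it alive?"]
        # weights[obj][q] in [-1, 1]: learned object/question association
        self.weights = {o: {q: 0.0 for q in self.questions}
                        for o in self.objects}

    def play(self, oracle, max_questions=20):
        """Ask up to max_questions; oracle(q) returns an ANSWER_VALUES key.
        Returns (best_guess, history) so the result can be fed to learn()."""
        scores = {o: 0.0 for o in self.objects}
        history = []
        for q in self.questions[:max_questions]:
            answer = ANSWER_VALUES[oracle(q)]
            history.append((q, answer))
            # An object scores well when its stored weights agree with
            # the answers the player is giving.
            for o in self.objects:
                scores[o] += answer * self.weights[o][q]
        return max(scores, key=scores.get), history

    def learn(self, true_object, history, rate=0.2):
        """Win or lose, nudge the true object's weights toward the
        answers actually given -- this is what makes play sharpen it."""
        if true_object not in self.weights:
            self.objects.append(true_object)
            self.weights[true_object] = {q: 0.0 for q in self.questions}
        for q, answer in history:
            w = self.weights[true_object][q]
            self.weights[true_object][q] = w + rate * (answer - w)
```

Even this toy version shows why the online game keeps improving: the learning step runs whether the guess was right or wrong, so every one of those 16 million queries refines some object's profile.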
Until that day, 20Q will have to make do with guessing what humans are thinking. Fair warning: it's easy to find that a couple of hours have passed while trying to stump the computer. Trust me.
Comments (8)
Cool stuff.
I beat it on the first attempt, though.
Posted by Joseph Willemssen | May 9, 2005 9:35 PM
What were you thinking of when you beat it, Joseph? I suppose that it's possible to fool it by thinking about really esoteric stuff, but then, you fooled it AND made it learn...
Posted by Mikhail Capone | May 9, 2005 11:41 PM
Yeah, I toasted it on the first try, too, but I chose something really evil: an eigenvector. :) (There's no way a human would've gotten that either.)
I like how when it loses it just gives you this totally random list of other things that stumped it, hoping your thing is similar.
Posted by Jeremy Faludi | May 10, 2005 3:17 AM
I beat it with parrot. It did come up with lovebird on question 18, but that's not what I was going for. Then on 20 it asked if I was thinking of a smurf =). On 24 it came back to just "bird" and I accepted that. Turns out "african parrot" (which is literally what I wanted) was listed as a similar object in the database (along with skink, chinchilla, and a few other oddities).
The problem was, the system didn't agree with me that parrots like to play, that they sometimes hop, sometimes talk, and several other things. Still, the lovebird guess was pretty darn close and within 20 questions.
Posted by Stephen A. Fuqua | May 10, 2005 9:53 AM
The object I thought of was an eye patch. Arrrgh!
The test tells you upfront: "The object you think of should be something that most people would know about, but, never a specific person, place or thing." So keep that in mind.
Posted by Joseph Willemssen | May 10, 2005 11:33 AM
I thought about bok choi. 20Q actually guessed it, but only after 19 questions. One more question, and I would have won!
But anyway, it's a remarkable piece of software.
Posted by Sergiy Grynko | May 10, 2005 1:10 PM
I beat it with "mushroom," "photocopier," and "parsnip." But it will keep learning. If someone could hybridize this with "Wikipedia," it would be scary.
Posted by David Foley | May 10, 2005 2:40 PM
I beat it with scrapheap, smile, canal, garage, my friend Rebecca, hinge, badger, and articulated lorry. It won with brick and 'blade of grass'.
The conventional wisdom aspect is fascinating... the fact that other people's responses modify the definition of a given thing. I'd like to track how a definition changes over time, have access to the number of inputs, and get a sense of how that affects the guesses made by 20Q.
I ran 'hinge' a second time to see if it was any faster the second time round. I expected it to come up with the answer in 10-15 questions. It took 19.
I am going to start feeding it geographical information like placenames - places I know about - so that it becomes a potential source of information. I can imagine doing Statue of Liberty, or Oakland Bay Bridge, or lesser known places like 'Crouch End' or 'Pioneer Courthouse Square'.
Posted by dglp | May 11, 2005 12:46 PM