Saturday, July 12, 2008

The Grammar Student's Guide to Radiohead

Below is an article that I wrote and originally published on the Evri blog. I've included it here in its entirety.

--

Here at Evri, we talk a lot about searching less. When we say searching less, we are talking about you, our users with precious time -- we want you to search less. We aren't talking about our machines, because they do an awful lot of searching so you don't have to. So how are they, our racks and racks of computers, searching so that you can understand more?

Well, it comes down to teaching our machines to read documents more the way humans do -- to understand more of the meaning of the documents they index. This is very different from what traditional keyword-based search technology does. When a typical search engine encounters a document, it treats the document like a bag of words -- the associations between the words, how they interconnect and form actual meaning, are lost. Consider the following text snippet from a Starpulse article:

Howard insists they won't be copying Radiohead's idea and making their disc only available on the internet. [...] He tells BBC Radio 1, "We won't be doing the same thing as Radiohead, no." [...] Last year, Radiohead released In Rainbows as an Internet download and allowed fans to name their own price for the album.

Now, from this snippet of text, your favorite search engine will store something like:

Radiohead - 3
Howard - 1
Rainbows - 1
released - 1
Internet - 1

and so on. I'm simplifying things a lot for the sake of discussion, but basically, your favorite search engine maintains a list of words and keeps track of how many times each word appears in a given document. This approach works quite well for finding websites, but not very well for discovering facts or relationships that describe how people, places, and things interconnect.
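To make that concrete, here is a minimal sketch of this kind of bag-of-words counting, assuming nothing more than a crude lowercase tokenizer; it is my own illustration, not the code of any actual search engine:

from collections import Counter
import re

snippet = (
    "Howard insists they won't be copying Radiohead's idea and making their "
    "disc only available on the internet. He tells BBC Radio 1, \"We won't be "
    "doing the same thing as Radiohead, no.\" Last year, Radiohead released "
    "In Rainbows as an Internet download and allowed fans to name their own "
    "price for the album."
)

# A bag-of-words index keeps only term frequencies; word order and the
# grammatical structure that carries the meaning are thrown away.
tokens = re.findall(r"[a-z]+", snippet.lower())
term_frequencies = Counter(tokens)

print(term_frequencies["radiohead"])  # 3
print(term_frequencies["rainbows"])   # 1
print(term_frequencies["released"])   # 1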

Now consider how Evri's approach is different. For this same snippet of text, our machines break the snippet into individual sentences. For each sentence, our machines will, in essence, diagram the sentence much as you did back in 7th grade grammar class. So, for every grammatical clause in a sentence, our system creates a data structure like the one shown below.
From the last sentence of the snippet above, our system will store a relationship like:

Radiohead > released > In Rainbows
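One way to picture such a clause-level relationship in code is as a small subject > verb > object record; the sketch below is my own simplified illustration, not Evri's actual internal representation:

from dataclasses import dataclass

@dataclass
class Relationship:
    # One grammatical clause reduced to a subject > verb > object triple.
    subject: str
    verb: str
    obj: str

# The last sentence of the snippet yields:
triple = Relationship(subject="Radiohead", verb="released", obj="In Rainbows")
print(triple.subject, ">", triple.verb, ">", triple.obj)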

In addition, our system knows that Radiohead is a band, released is a verb, and In Rainbows is an album. If a sentence read: Radiohead of Oxfordshire may release an album called In Rainbows, our system would store Oxfordshire as the suffix modifier of Radiohead, and would mark the verb release as conditional; knowing that a verb is conditional or negated is important, because this information can be used to determine where in a list of results the relationship should appear. And if a subsequent sentence said something like: The band's experiment proved successful., our system would know that The band refers to Radiohead, because our system attempts to resolve anaphora much the way humans do.

Finally, this triplet-style data structure is searchable at web scale and web speed through a query language; the query language is quite flexible, but basically it allows our recommendation and information navigation applications to formulate effective queries in a precise manner. For example, a query like:

[musical_artist] OR [band] > praise > Radiohead

is being used to render the right column in the entity detail page shown in the screen shot below.
When you actually click on a person or organization, like Billy Corgan, the system will execute a more refined query like:

Billy Corgan > praise > Radiohead
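To give a feel for how the two queries above might be evaluated, here is a toy sketch of matching them against a small store of extracted relationships. The field names, the [band] and [musical_artist] type labels, the sample relationships, and the matching rules are my own simplification for illustration; Evri's actual index and query language are far more capable.

from dataclasses import dataclass

@dataclass
class Relation:
    subject: str
    subject_type: str   # e.g. "band", "musical_artist", "person"
    verb: str
    obj: str
    conditional: bool = False  # verb softened by "may", "might", ...
    negated: bool = False      # verb negated by "not", "won't", ...

# A toy store of extracted relationships (illustrative values only).
store = [
    Relation("Radiohead", "band", "released", "In Rainbows"),
    Relation("Billy Corgan", "musical_artist", "praise", "Radiohead"),
    Relation("Thom Yorke", "musical_artist", "praise", "Radiohead"),
    Relation("Howard", "person", "copy", "Radiohead", negated=True),
]

def match(store, subject=None, subject_types=None, verb=None, obj=None):
    # Return relations matching a query; a None constraint means "any".
    hits = []
    for r in store:
        if subject is not None and r.subject != subject:
            continue
        if subject_types is not None and r.subject_type not in subject_types:
            continue
        if verb is not None and r.verb != verb:
            continue
        if obj is not None and r.obj != obj:
            continue
        hits.append(r)
    return hits

# [musical_artist] OR [band] > praise > Radiohead
for r in match(store, subject_types={"musical_artist", "band"},
               verb="praise", obj="Radiohead"):
    print(r.subject, ">", r.verb, ">", r.obj)

# Billy Corgan > praise > Radiohead
for r in match(store, subject="Billy Corgan", verb="praise", obj="Radiohead"):
    print(r.subject, ">", r.verb, ">", r.obj)

In the real system, as noted above, the conditional and negated flags would also influence where a relationship appears in a list of results, rather than simply being stored.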

One of the challenges our scientists and engineers face is how to formulate these types of queries in clever ways so you, the user, do not have to; I'll save this discussion for another day, however.

Finally, we published a book chapter last year that does a more thorough job of explaining our approach and the additional grammatical treatments our system performs. So if you're interested, see the chapter titled A Case Study in Natural Language Based Web Search in the book Natural Language Processing and Text Mining.
