Wednesday, July 30, 2008
SIGIR 2008 Trip Report (Posted by Paul Heymann)
Kai-Fu Lee Keynote
Keynotes seem to be pretty hard to get right. One hour is a long time, and they either end up too vague or too focused on a specific area of the speaker's interest. However, an even bigger issue for me with industry speakers (at least when the talk is not about a research paper) is that they often seem too worried about giving away their company's secrets. As a result, these sorts of talks can end up as reheated summaries of what researchers already know, rather than giving any new insight into the challenges industry is facing. Kai-Fu Lee's keynote mostly avoided these pitfalls, and overall I thought it gave a really good overview of the challenges Google faces in China.
His talk was more or less a series of bullet points about Google's strategy in China, so I will take that form here. (Greg Linden also has a blog post about the keynote here.)
- The number of Internet users in China is growing at 35% each year, and accelerating. This means that whoever can capture new users (who presumably have no idea what they are doing) will ultimately be the market leader in China (assuming no outside intervention).
- New users are kind of classless. For new users, result quality in search does not matter and neither does a clean user interface. New users either do not care or cannot yet appreciate these things. Instead, Google is focusing on Internet cafes, entertainment, and music, which users tend to like (also, blended search with video, news, URLs, and other verticals, called "universal search" at Google). These new users look at a clean empty page like google.com and wonder whether the company forgot to finish the rest.
- Google could not get people to spell its name. They tried a number of things before finally giving up and registering "G.cn".
- Local engineers building new local products (rather than just cursory localization of worldwide products) seems to be key to Google's plans in China. Examples include an expanded Chinese Zeitgeist, a tool for finding holiday greetings to send via SMS, and various emergency relief efforts.
- An interesting fact is that Chinese is substantially more dense than other languages. This substantially changes user interface design. For example, in English Google News, each entry needs a title, a date, and a snippet. However, in Chinese, the whole summary of the story can fit in the title area, obviating the need for snippets and leading to more stories per page.
- Kai-Fu showed a graph with no axes, which he said related to mobile usage. He said that iPhone web usage was 50 times higher than the nearest mobile competitor's, even in China, where iPhones come from the black market. There has been some reporting of the 50 times higher stat elsewhere. (On a personal note, when using my friends' iPhones, I find typing so inconvenient that I usually end up running to Google because I know that a search for "cnn" will give me the right result and then I do not have to type the full URL. Maybe my iPhone touch typing skills will improve with time.)
- Users love clicking and hate typing due to annoying text input tools. This leads to a Google product to help with text input as well as a focus on (ugly) directories which these users love.
- Piracy in China is high, and Kai-Fu (I could not tell how seriously) said that it had gone down from 99% to 96%. Whether or not this is true, it allowed Kai-Fu to poke a little fun at his former employer: a drop in piracy from 99% to 96% means the share of paid copies quadruples from 1% to 4%, i.e. four times the revenues in China.
- Google thinks that freeware authors who add ad software (but not malware) to their products may be a reasonable distribution method in China for software. Of course, this is not without hazards; I think Kai-Fu stated that the average Chinese computer gets reinstalled every four months.
- China has huge broadband penetration, mostly through Internet cafes.
- There are more Internet users in China than in the US, and the growth in Chinese users is accelerating.
- The average age of a Chinese Internet user is 25, and this number is dropping. Huge numbers of fifteen year olds are getting online.
Technical Papers
There were a number of technical papers and sessions that caught my eye.
This being the Stanford InfoBlog, it seems like I should start with the Stanford papers at SIGIR this year. There were three, one by Martin, one by Mengqiu, and one by myself. Martin (of the InfoLab) presented SpotSigs (DOI), a duplicate detection technique tuned for news stories and other scenarios where we care about the main text of a page rather than navigational elements.
Mengqiu (of the NLP group) presented models for improving passage based retrieval (DOI), work he did while at CMU with Luo Si (now at Purdue).
My social tag prediction paper (DOI) (with Dan Ramage of the NLP group) focused on how well we can predict tags in social bookmarking systems, and techniques for doing so (including SVM and association rules).
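To give a flavor of the association-rule side of tag prediction, here is a minimal sketch of the general idea (this is my illustration, not the actual method or data from the paper): mine rules of the form "items tagged a tend to also be tagged b" from tag co-occurrences, then suggest new tags for an item based on the tags it already has. The toy posts, thresholds, and helper names are all made up for the example.

```python
from collections import Counter
from itertools import combinations

# Toy bookmark data: each post is the set of tags one user gave a URL.
# (Illustrative data only -- not from the paper.)
posts = [
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"python", "tutorial"},
    {"javascript", "programming", "web"},
    {"python", "programming"},
]

def mine_rules(posts, min_support=2, min_confidence=0.6):
    """Mine single-antecedent rules a -> b from tag co-occurrences."""
    tag_counts = Counter(t for p in posts for t in p)
    pair_counts = Counter()
    for p in posts:
        for a, b in combinations(sorted(p), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    rules = {}
    for (a, b), n in pair_counts.items():
        confidence = n / tag_counts[a]  # P(b | a), estimated from counts
        if n >= min_support and confidence >= min_confidence:
            rules.setdefault(a, []).append((b, confidence))
    return rules

def predict_tags(known_tags, rules):
    """Suggest new tags implied by the tags already on an item,
    scored by summing the confidences of the rules that fire."""
    suggestions = Counter()
    for t in known_tags:
        for b, confidence in rules.get(t, []):
            if b not in known_tags:
                suggestions[b] += confidence
    return [t for t, _ in suggestions.most_common()]

rules = mine_rules(posts)
print(predict_tags({"tutorial"}, rules))
```

Real systems would of course mine rules with multi-tag antecedents and from far more data, but even this two-function version shows why the approach is attractive: the rules are cheap to compute and easy to inspect.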
The tag prediction and tag recommendation problems seem to be getting really popular lately. In addition to the ECML/PKDD Discovery Challenge (which I wrote about here), a number of people are working on related problems. The two (DOI) other (DOI) papers in my session looked at tag recommendation (and related problems) as well.
After my talk, I met a number of people who have also been looking at association rules for tags or tag prediction in different contexts. I wish there were a better way to find out who is doing similar work before the work itself is completed!
Later that day, I chatted a little bit with Brian Davison. Brian has been looking at link structure of related pages on the web for many years, for example, in the context of authority propagation, trust propagation, and hypertext classification. One thing I had not realized was that he actually got interested in the link structure of related pages through his dissertation work on prefetching and caching on the web. Previously, I had known his work just in the context of link structure, for example, his work on topical locality in the web (which I think I in turn came upon via Chakrabarti's excellent Mining the Web).
Brian gave an excellent talk (DOI) in the morning (standing in because one of his students had visa issues, which seemed fairly common at this conference), though I unfortunately missed his other student's talk (DOI) later that day on their recent work in web page classification.
Unfortunately, my talk session was at the same time as the question answering session, which is a topic I have grown increasingly interested in of late. The three papers there were: 1 (DOI) 2 (DOI) 3 (DOI).
Two papers that had some buzz about them were Pickens et al.'s paper on Collaborative Exploratory Search (DOI) and Liu et al.'s paper on BrowseRank (DOI). The former paper looked at how to work in teams on web search tasks, a topic which is apparently growing increasingly popular in the HCI community these days. The latter paper looked at results ranking based on user browsing behavior from toolbars. I think the Pickens paper ended up winning the Best Paper award.
Lastly, two talks that I found personally interesting were Jonathan Elsas' talk on models for blog feed search (DOI) (slides) and another talk (maybe by Peter Bailey?) on relevance assessment (DOI).
I have been following Jonathan's work for a while now, as he always seems to be doing something interesting with structured, social data. (I finally got to meet him at the conference as well, incidentally.) His talk was mostly about ways to combine both high level and low level structure in blogs, but I was most excited by a somewhat unrelated fact: they were using Wikipedia for pseudo relevance feedback (PRF). Previous work (DOI) at SIGIR 2007 had looked at this as a possibility, but it was interesting to see both more confirmation that Wikipedia is good for PRF and further mechanisms for using it in that way. Mysteriously, his talk was the third half-hour talk in a slot where all of the parallel sessions had only two talks, so he seemed to have the spotlight. In any case, the room was packed.
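For readers unfamiliar with PRF, here is a bare-bones sketch of the basic idea (my own illustration, not the method from either paper): assume the top-ranked documents from a first pass are relevant, and expand the query with their most frequent non-query terms. The stopword list, toy documents, and function name are all assumptions made up for this example; using Wikipedia for PRF just means drawing those top documents from Wikipedia instead of the target collection.

```python
from collections import Counter

# A tiny stopword list, just enough for the example.
STOPWORDS = {"the", "a", "an", "of", "is", "and", "in", "to", "for"}

def expand_query(query, top_docs, num_terms=3):
    """Pseudo relevance feedback: treat the top-ranked documents as
    relevant and append their most frequent non-query terms."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in top_docs:
        for term in doc.lower().split():
            if term not in query_terms and term not in STOPWORDS:
                counts[term] += 1
    expansion = [t for t, _ in counts.most_common(num_terms)]
    return query + " " + " ".join(expansion)

# Toy stand-ins for the top-ranked documents from a first-pass
# retrieval (e.g. Wikipedia article text); illustrative only.
docs = [
    "jaguar is a large cat native to the americas",
    "the jaguar cat hunts in the rainforest of south america",
]
print(expand_query("jaguar", docs))
```

The expanded query ("jaguar" plus terms like "cat") is then run as the second-pass query, which is what makes the feedback "pseudo": no human ever judges the first-pass results.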
The talk on relevance assessment was interesting to me because it seemed to be pushing back on a trend which has been happening for a while now. Specifically, in the past few years, there has been a gradual trend towards using extremely cheap sources of labor for creating test collections for evaluating various tasks. For example, some recent work by a former member of the InfoLab looked at Mechanical Turk for evaluating entity resolution (DOI). The relevance assessment paper looked at three types of judges:
- Gold Judges: Created a topic for a particular collection, and are experts in that topic. For instance, a history professor comes up with a topic "items worn by Abraham Lincoln" and judges results as relevant and non-relevant.
- Silver Judges: Did not create the topic, but are an expert in it. For instance, a history professor.
- Bronze Judges: Did not create the topic and are not an expert in it. Just a random user.
That concludes my trip report from Singapore. Apologies for any inaccuracies in the above; I have not had a chance to read many of the papers above in depth yet, so most of my observations come from my vague recollections of the talks. (Do leave a comment if you have any complaints!) Also, if you were at the conference (or were reading the conference papers) and feel like there was important work not discussed here, do leave a comment as well!
Addendum: Yannis pointed me to this other (very recent) paper with more Mechanical Turk evaluation results. Several other people are blogging about SIGIR'08, like Daniel Tunkelang, Pranam Kolari, and Paraic Sheridan. All photos are from the SIGIR'08 website.
Labels: heymann, paul, report, sigir, sigir08, trip, tripreport