

SIGIR 2008 Trip Report (Posted by Paul Heymann)

I just got back from SIGIR'08 in Singapore. I was there to give a talk on my Social Tag Prediction paper, which hopefully I will get around to writing up a full post about sometime soon. In the meantime, I am posting some of my personal reactions to the conference. This is a long post, so feel free to skip down as none of the parts rely on the others.

Kai-Fu Lee Keynote

Keynotes seem to be pretty hard to get right. One hour is a long time, and they either end up being too vague or focus too much on a specific area of the speaker's interest. However, an even bigger issue for me with industry speakers (at least when the talk is not about a research paper) is that they often seem too worried about giving away secrets about their company. As a result, these sorts of talks can end up as reheated summaries of what all of the researchers already know, rather than giving any new insight into the challenges industry is facing. Kai-Fu Lee's keynote mostly avoided these pitfalls, and overall I thought it gave a really good overview of the challenges faced by Google in China.

Kai-Fu Lee Keynote

His talk was more or less in the form of bullet points about Google's strategy in China, so I will take that form here. (Greg Linden also has a blog post about the keynote here.)
  • The number of Internet users in China is growing at 35% each year, and accelerating. This means that whoever can capture new users (who presumably have no idea what they are doing) will ultimately be the market leader in China (assuming no outside intervention).
  • New users are kind of classless. For new users, result quality in search does not matter and neither does a clean user interface. New users either do not care or cannot yet appreciate these things. Instead, Google is focusing on Internet cafes, entertainment, and music, which users tend to like (also, blended search with video, news, URLs, and other verticals, called "universal search" at Google). These new users look at a clean empty page like google.com and wonder whether the company forgot to finish the rest.
  • Google could not get people to spell its name. They tried a number of things before finally giving up and registering "G.cn".
  • Local engineers building new local products (rather than just cursory localization of worldwide products) seems to be key to Google's plans in China. Examples include an expanded Chinese Zeitgeist, a tool for finding holiday greetings to SMS, and various emergency relief efforts.
  • An interesting fact is that written Chinese is substantially denser than other languages, which changes user interface design. For example, in English Google News, each entry needs a title, a date, and a snippet. However, in Chinese, the whole summary of the story can fit in the title area, obviating the need for snippets and leading to more stories per page.
  • Kai-Fu showed a graph with no axes, which he said related to mobile usage. He said that the iPhone web usage was 50 times higher than the nearest mobile competitor, even in China where iPhones are black market. There has been some reporting of the 50 times higher stat elsewhere. (On a personal note, when using my friends' iPhones, I find typing so inconvenient that I usually end up running to Google because I know that a search for "cnn" will give me the right result and then I do not have to type the full URL. Maybe my iPhone touch typing skills will improve with time.)
  • Users love clicking and hate typing due to annoying text input tools. This leads to a Google product to help with text input as well as a focus on (ugly) directories which these users love.
  • Piracy in China is high, and Kai-Fu (I could not tell how seriously) said that it had gone down from 99% to 96%. Whether or not this is true, it allowed Kai-Fu to make a little fun of his former employer, saying that a drop from 99% to 96% meant four times the revenues in China (the paying share quadrupling from 1% to 4%).
  • Google thinks that freeware authors who add ad software (but not malware) to their products may be a reasonable distribution method for software in China. Of course, this is not without hazards: I think Kai-Fu stated that the average Chinese computer gets reinstalled every four months.
  • China has huge broadband penetration, mostly through Internet cafes.
  • There are more Internet users in China than the US, and the growth in Chinese users is accelerating.
  • The average age of a Chinese Internet user is 25, and this number is dropping. Huge numbers of fifteen year olds are getting online.
Overall, Kai-Fu said that all of the recent work of Google China had led to a market share increase from (I think) about 15% to 25% in two years. One thing I was not sure of was whether Kai-Fu was overstating the contribution of this strategy to the 10% increase. He pointed out a few recent deals I could not really evaluate, like one with China Telecom, which made me wonder whether Google had also grown more politically savvy in China over the past few years. In any case, this was a great talk, and it left me at least (though perhaps not seasoned China watchers) with a lot to think about.

Chinese Users Outnumber US Users

Technical Papers


There were a number of technical papers and sessions that caught my eye.

This being the Stanford InfoBlog, it seems like I should start with the Stanford papers at SIGIR this year. There were three: one by Martin, one by Mengqiu, and one by myself. Martin (of the InfoLab) presented SpotSigs (DOI), a duplicate detection technique tuned for news stories and other scenarios where we care about the main text of a page rather than its navigational elements.
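For intuition, here is a minimal sketch of the spot-signature idea as I understood it from the talk, not the paper's actual implementation: signatures are short chains of content words anchored at common stopwords, and documents are compared by the Jaccard similarity of their signature sets. The stopword lists and chain length below are my own illustrative choices.

```python
from typing import Set

ANTECEDENTS = {"the", "a", "an", "is", "was"}  # illustrative anchor stopwords
STOPWORDS = ANTECEDENTS | {"of", "to", "in", "and"}
CHAIN_LEN = 3  # content words per signature; a tunable parameter

def spot_signatures(text: str) -> Set[str]:
    """Build spot signatures: chains of non-stopwords following an anchor stopword."""
    words = text.lower().split()
    sigs = set()
    for i, w in enumerate(words):
        if w in ANTECEDENTS:
            chain = [t for t in words[i + 1:] if t not in STOPWORDS][:CHAIN_LEN]
            if len(chain) == CHAIN_LEN:
                sigs.add(w + ":" + "-".join(chain))
    return sigs

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity between two signature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

The appeal for news pages is that natural prose is full of stopwords and so yields many signatures, while navigational boilerplate yields few, so the main text dominates the comparison.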

Martin Presents SpotSigs

Mengqiu (of the NLP group) presented models for improving passage based retrieval (DOI), work he did while at CMU with Luo Si (now at Purdue).

Mengqiu Presents Passage Based Retrieval

My social tag prediction paper (DOI) (with Dan Ramage of the NLP group) focused on how well we can predict tags in social bookmarking systems, and techniques for doing so (including SVM and association rules).
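To give a flavor of the association-rule approach in general (this is a hedged sketch of the basic idea, not our paper's actual method; the names and thresholds are my own illustrative choices): mine rules of the form "posts tagged a are usually also tagged b" from training data, then fire those rules on a new post's known tags.

```python
from collections import Counter
from itertools import permutations

def mine_rules(tag_sets, min_conf=0.5):
    """Mine single-antecedent rules a -> b with confidence >= min_conf."""
    single = Counter()
    pair = Counter()
    for tags in tag_sets:
        for t in tags:
            single[t] += 1
        for a, b in permutations(tags, 2):
            pair[(a, b)] += 1
    return {(a, b): pair[(a, b)] / single[a]
            for (a, b) in pair
            if pair[(a, b)] / single[a] >= min_conf}

def predict(known_tags, rules):
    """Suggest new tags implied by the known ones, ranked by rule confidence."""
    scores = {}
    for (a, b), conf in rules.items():
        if a in known_tags and b not in known_tags:
            scores[b] = max(scores.get(b, 0.0), conf)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, if "python" and "programming" frequently co-occur in the training posts, a new post tagged only "python" gets "programming" suggested.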

Paul Presents Social Tag Prediction

The tag prediction and tag recommendation problems seem to be getting really popular lately. In addition to the ECML/PKDD Discovery Challenge (which I wrote about here), a number of people are working on similar problems. The other two papers in my session (DOI) (DOI) looked at tag recommendation (and related problems) as well.

Real-Time Tag Recommendation

Top-k Querying Social-Tagging Networks

After my talk, I met a number of people who have also been looking at association rules for tags or tag prediction in different contexts. I wish there was a better way to find out who is working on similar work before the work itself is completed!

Later that day, I chatted a little bit with Brian Davison. Brian has been looking at link structure of related pages on the web for many years, for example, in the context of authority propagation, trust propagation, and hypertext classification. One thing I had not realized was that he actually got interested in the link structure of related pages through his dissertation work on prefetching and caching on the web. Previously, I had known his work just in the context of link structure, for example, his work on topical locality in the web (which I think I in turn came upon via Chakrabarti's excellent Mining the Web).

Brian Davison Presents Separate and Inequal

Brian gave an excellent talk (DOI) in the morning (due to one of his students having visa issues, which seemed fairly common at this conference), though I unfortunately missed his other student's talk (DOI) later that day on their recent work in web page classification.

Classifiers Without Borders

Unfortunately, my talk session was at the same time as the question answering session, which is a topic I have grown increasingly interested in of late. The three papers there were: 1 (DOI) 2 (DOI) 3 (DOI).

Collaborative Exploratory Search

Two papers that had some buzz about them were Pickens et al.'s paper on Collaborative Exploratory Search (DOI) and Liu et al.'s paper on BrowseRank (DOI). The former paper looked at how to work in teams on web search tasks, a topic which is apparently growing increasingly popular in the HCI community these days. The latter paper looked at results ranking based on user browsing behavior from toolbars. I think the Pickens paper ended up winning the Best Paper award.

Lastly, two talks that I found personally interesting were Jonathan Elsas' talk on models for blog feed search (DOI) (slides) and another talk (maybe by Peter Bailey?) on relevance assessment (DOI).

I have been following Jonathan's work for a while now, as he always seems to be doing something interesting with structured, social data. (I finally got to meet him at the conference as well, incidentally.) His talk was mostly about ways to combine both high level and low level structure in blogs, but I was most excited by a somewhat unrelated fact: they were using Wikipedia for pseudo relevance feedback (PRF). Previous work (DOI) at SIGIR 2007 had looked at this as a possibility, but it was interesting to see both more confirmation that Wikipedia is good for PRF and further mechanisms for using it in that way. Mysteriously, his talk was the third half-hour talk in a session slot where all of the parallel sessions had only two talks, so he seemed to have the spotlight. In any case, the room was packed.
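As a rough illustration of what pseudo relevance feedback does in general (not the specific Wikipedia-based mechanism in their paper): take the top-ranked results for a query, count their most frequent terms, and append those terms to the query. The length filter below is a crude stand-in for a real stopword list.

```python
from collections import Counter

def expand_query(query, top_docs, n_terms=2):
    """Naive PRF: add the most frequent non-query terms from the top-ranked documents."""
    q_terms = set(query.lower().split())
    counts = Counter(
        w for doc in top_docs for w in doc.lower().split()
        if w not in q_terms and len(w) > 3  # crude stopword filter
    )
    expansion = [w for w, _ in counts.most_common(n_terms)]
    return query + " " + " ".join(expansion)
```

The intuition behind using Wikipedia here is that its articles are cleaner "top documents" than arbitrary retrieved pages, so the expansion terms are less noisy.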

Jonathan Elsas Presents Blog Feed Search

The talk on relevance assessment was interesting to me because it seemed to be pushing back on a trend which has been happening for a while now. Specifically, in the past few years, there has been a gradual trend towards using extremely cheap sources of labor for creating test collections for evaluating various tasks. For example, some recent work by a former member of the InfoLab looked at Mechanical Turk for evaluating entity resolution (DOI). The relevance assessment paper looked at three types of judges:
  • Gold Judges: Created a topic for a particular collection, and are experts in that topic. For instance, a history professor comes up with a topic "items worn by Abraham Lincoln" and judges results as relevant and non-relevant.
  • Silver Judges: Did not create the topic, but are an expert in it. For instance, a history professor.
  • Bronze Judges: Did not create the topic and are not an expert in it. Just a random user.
The work found that while all three types of judges were fine for making broad distinctions between systems, using poorer judges could make the distinctions between top performing systems less clear, or even reverse the ordering of the top systems in some cases.
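The ordering-reversal point is easy to see with a toy example. This sketch is my own construction, not the paper's methodology: it scores two hypothetical systems by precision at k under two different judges' assessments and shows how the ranking of systems can flip when the judge changes.

```python
def precision_at_k(ranked_docs, judgments, k=5):
    """Fraction of the top-k results marked relevant by this judge."""
    return sum(judgments.get(d, 0) for d in ranked_docs[:k]) / k

def system_ordering(systems, judgments, k=5):
    """Rank systems best-first by precision@k under one judge's assessments."""
    scores = {name: precision_at_k(run, judgments, k) for name, run in systems.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Two hypothetical retrieval runs over the same topic.
systems = {
    "sys_A": ["d1", "d2", "d3", "d4", "d5"],
    "sys_B": ["d6", "d7", "d1", "d8", "d9"],
}
# An expert (gold) judge and a careless (bronze) judge disagree on what is relevant.
gold_judge = {"d1": 1, "d2": 1, "d3": 1}
bronze_judge = {"d1": 1, "d6": 1, "d7": 1, "d8": 1, "d9": 1}
```

Under the gold judge sys_A wins; under the bronze judge sys_B wins, even though the document rankings never changed.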

That concludes my trip report from Singapore. Apologies for any inaccuracies in the above; I have not had a chance to read many of the papers in depth yet, so most of my observations are from vague recollections of the talks. (Do leave a comment if you have any complaints!) Also, if you were at the conference (or were reading the conference papers) and feel like there was important work not discussed here, do leave a comment as well!

Addendum: Yannis pointed me to this other (very recent) paper with more Mechanical Turk evaluation results. Several other people are blogging about SIGIR'08, like Daniel Tunkelang, Pranam Kolari, and Paraic Sheridan. All photos are from the SIGIR'08 website.


  1. Blogger Ioannis Antonellis | July 30, 2008 at 4:22 PM |  

    "I wish there was a better way to find out who is working on similar work before the work itself is completed!"

    That's something blogs should be used for... ;)

  2. Anonymous Anonymous | July 31, 2008 at 2:42 PM |  

    Overall it seems like SIGIR went well beyond its "regular" topics this year. That's good news!

  3. Blogger Paul Heymann | July 31, 2008 at 4:15 PM |  

    Mor:

    Definitely. It's odd because on one hand, SIGIR was more classical than ever: lots of sessions on models and evaluation, for example. On the other hand, the most popular tutorial seemed to be "Web Mining for Search" and many of the more popular talks were about web search and web topics (tagging, Wikipedia, question answering).

    There was some discussion at the Business Meeting (I didn't stay for all of it) about how to encourage more non-incremental, exotic work. The obvious solution to me (though no one stated it at the meeting) is that if you want more exotic work, just accept more exotic work. As a practical matter though, it doesn't seem to me to be too big a problem if SIGIR drags its heels on this: WWW and WSDM (and CHI/CSCW for certain aspects of things like tagging) are already reasonable forums for less classical work.

  4. Anonymous Anonymous | July 31, 2008 at 8:52 PM |  

    Excellend writeup, Paul (and thanks for the coverage)

    -Jon

  5. Anonymous Anonymous | July 31, 2008 at 8:53 PM |  

    er... make that "Excellent"

  6. Blogger Paul Heymann | July 31, 2008 at 9:32 PM |  

    Jon:

    I should probably also link to your write up of the Learning to Rank workshop, note that you'll have future SIGIR coverage at window office, and point to Jeff Dalton's summary post that brings a lot of the current material together. ;-)

  7. Blogger Unknown | August 9, 2008 at 9:30 AM |  

    Great trip report! Thanks!
    And I liked all the photos that went along with it!!
    hector
    (Hector Garcia-Molina)

  8. Blogger Unknown | September 25, 2008 at 10:56 AM |  

    Nice job describing some of the less traditional work. Thanks for this report.
    Marti
