In almost all classical
Information Retrieval settings that have a text processing component,
stopwords are discarded first, before anything interesting happens with the document. “Interesting” here might mean indexing the content for search, extracting features for automatic classification, or some other flavor of content analysis.
Jonathan (my co-author on the SpotSigs
paper) had the amazing idea that stopwords may, however, be very good indicators of the actually interesting parts of a web page. Knowing where the interesting parts of a page are is especially useful when they are interspersed with “added-value” content such as advertisements or navigational banners. This is most strikingly the case with
online news articles, but applies more generally across the web.
In our SpotSigs project, we tried to detect near-duplicate Web pages in the news domain. This is a particularly challenging setting because the page layouts are often drenched with ads or navigational banners added by the different sites. The actual core articles constitute only a minor fraction of the overall page, which makes online news a very hard setting for any unsupervised clustering or deduplication approach. Moreover, near duplicates of the same core article frequently pop up on different news sites, because most of the content is delivered by the same sources, such as the Associated Press, and the very same core articles then often end up unchanged (or only slightly edited) on many of the sites.
In response to this setting, the idea of extracting more “localized” signatures—namely those that are close to stopword occurrences (hence spots)—was born. These localized signatures exploit the observation that stopwords are
frequently and uniformly distributed throughout any form of natural-language text—at least in Western languages—but they remain very infrequent in the typical headline-style banners or ads. SpotSigs connects such stopword anchors (called
antecedents) with a nearby n-gram, which is simply a concatenation of further text tokens that are not stopwords themselves, in a way very similar to classic Shingling on the entire page content. By choosing only those Shingles that are connected to a stopword antecedent, however, SpotSigs tends to extract
more robust signatures than plain Shingling. At the same time it allows for a
more efficient and
less error-prone signature extraction compared with many of the far more sophisticated tools for HTML layout analysis. Another nice property is that SpotSigs automatically discards synthetic documents that do not contain any natural-language text, such as the 404 pages that crawlers typically encounter.
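To make the extraction step a bit more concrete, here is a minimal sketch in Python. The antecedent set, the chain length of two, and the simple tokenization are illustrative assumptions only, not the exact configuration we use in the paper:

import re

# A minimal sketch of stopword-anchored signature extraction. The antecedent
# set and chain length below are illustrative choices, not the paper's tuning.
ANTECEDENTS = {"a", "an", "the", "is", "was", "to", "of", "for"}
CHAIN_LENGTH = 2  # number of non-stopword tokens chained to each anchor

def spot_signatures(text):
    """Return the list of spot signatures found in a plain-text document."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    signatures = []
    for i, token in enumerate(tokens):
        if token not in ANTECEDENTS:
            continue
        # Chain the next CHAIN_LENGTH tokens that are not stopwords themselves.
        chain = [t for t in tokens[i + 1:] if t not in ANTECEDENTS][:CHAIN_LENGTH]
        if len(chain) == CHAIN_LENGTH:
            signatures.append(":".join([token] + chain))
    return signatures

# A natural-language sentence yields several signatures ...
print(spot_signatures("A suspect was arrested at the scene of the crime."))
# ... whereas a headline-style banner yields none at all.
print(spot_signatures("Breaking News  Sports  Weather  Entertainment"))

The toy banner in the last line produces no spots at all, which is exactly the behavior that lets the approach pass over ads, navigation, and synthetic pages without any explicit layout analysis.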
As a second focus of the paper, SpotSigs also addresses some algorithmic challenges by tackling the inherently quadratic complexity of having to consider all candidate pairs of documents in the deduplication step. Here, our entire clustering algorithm is built on the simple idea that we may never need to compare two documents (or their respective signature sets) if they already differ substantially in length – at least if some set-resemblance-based similarity measure such as
Jaccard similarity is used. This basic observation lets us very efficiently match high-dimensional signature vectors using a combination of classic techniques such as collection partitioning and inverted index pruning. As a rather surprising result, we found that the SpotSigs matching algorithm may even outperform linear-time similarity hashing approaches like
locality-sensitive hashing in runtime – at least for reasonably high similarity thresholds and as long as the distribution of document (or signature) lengths throughout the collection is not too skewed – which nicely supports the old algorithmic paradigm that sorting may sometimes be favorable over hashing. Overall, for a collection of 1.6 million documents from the TREC WT10g benchmark, we achieve a remarkably fast runtime of only about 5 minutes for parsing and extracting signatures, and less than 15 seconds for the actual clustering step on top of our index structures, on just a single mid-range server machine.
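To give a flavor of the length-based pruning, here is a minimal sketch assuming plain (unweighted) Jaccard similarity over signature sets; the function names and the toy documents are my own illustration, not the paper's actual implementation, which combines this idea with the partitioning and inverted index pruning mentioned above:

# A minimal sketch of length-based candidate pruning under unweighted Jaccard.
# The key bound: Jaccard(A, B) <= min(|A|,|B|) / max(|A|,|B|), so two signature
# sets whose sizes differ by more than a factor of 1/tau can never reach
# similarity tau and need not be compared at all.

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 0.0

def near_duplicate_pairs(docs, tau):
    """docs: list of signature sets; returns index pairs with Jaccard >= tau."""
    # Sort document indices by signature-set size, so that for each document we
    # only scan forward while the size ratio can still satisfy the threshold.
    order = sorted(range(len(docs)), key=lambda i: len(docs[i]))
    pairs = []
    for x, i in enumerate(order):
        for j in order[x + 1:]:
            if len(docs[i]) < tau * len(docs[j]):
                break  # all remaining docs are even longer: bound cannot hold
            if jaccard(docs[i], docs[j]) >= tau:
                pairs.append((min(i, j), max(i, j)))
    return pairs

docs = [
    {"the:scene:crime", "a:suspect:arrested", "was:arrested:at"},
    {"the:scene:crime", "a:suspect:arrested"},
    {"the:stock:market", "a:sharp:decline"},
]
print(near_duplicate_pairs(docs, tau=0.6))  # [(0, 1)] with these toy sets

Sorting by signature-set size is what allows the inner loop to break early: once the size ratio drops below the threshold, no later (and hence longer) document can still qualify.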
What I personally like about SpotSigs is that all its parts – from the signature extraction to the partitioning and index pruning – seamlessly fit together like seemingly random pieces of a puzzle that finally form a nice picture. For example, the partitioning approach is not only good for breaking the overall quadratic runtime down into many much smaller pieces, but it also helps to smooth the skew toward shorter documents that we typically find in web collections and to provide more balanced partition sizes (the slides provide a little more detail on this). Similar themes recur throughout the work, for example the threshold-based pruning approach, which is applied in no fewer than three variations throughout the entire algorithm – with all steps based on the very same similarity bound we derive for the (weighted) Jaccard resemblance between two Spot Signature sets.
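The intuition behind that bound, in the unweighted special case, is easy to see: the intersection of two signature sets A and B can be at most as large as the smaller of the two sets, while their union is at least as large as the larger one, so

\[
  \mathrm{Jaccard}(A,B) \;=\; \frac{|A \cap B|}{|A \cup B|} \;\le\; \frac{\min(|A|,|B|)}{\max(|A|,|B|)} .
\]

Hence, whenever min(|A|,|B|) < τ · max(|A|,|B|), the pair can be skipped without ever computing the actual intersection; the paper derives the analogous bound for the weighted resemblance.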
After the presentation at
SIGIR 2008, we received some interesting comments about the inherently subjective nature of detecting near duplicates. For future work, personalizing the signature extraction step beyond plain stopword anchors would therefore certainly be an intriguing direction. We'd also like to improve the clustering algorithm by further investigating disk-based index structures, possibly distributing the algorithm onto multiple machines, and extending our bounding approach to more similarity metrics such as the well-known Cosine measure, which is more commonly used in IR than, for example, Jaccard.
Have a look at the
paper or
slides for more details.
Labels: inverted-index pruning, martin, near-duplicate detection, news articles, partitioning, sigir, sigir08, similarity hashing, stopwords, theobald