InfoTracker enables rapid comparison of a document against a potentially large collection of writings in order to identify non-chance overlaps in the text. In addition, the data structure employed by InfoTracker provides a powerful and efficient means of identifying and ranking multi-word key terms in a text corpus.
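To make the core operation concrete, the following is a minimal sketch of what "finding shared word sequences of any length" involves. This is a simplified stand-in, not InfoTracker's actual data structure: it uses a quadratic dynamic-programming table, whereas InfoTracker's index is designed to scale to very large corpora.

```cpp
#include <algorithm>
#include <cctype>
#include <sstream>
#include <string>
#include <vector>

// Lowercase word tokens, punctuation stripped.
std::vector<std::string> tokenize(const std::string& text) {
    std::vector<std::string> words;
    std::istringstream in(text);
    std::string raw;
    while (in >> raw) {
        std::string w;
        for (unsigned char c : raw)
            if (std::isalnum(c)) w += static_cast<char>(std::tolower(c));
        if (!w.empty()) words.push_back(w);
    }
    return words;
}

// Length, in words, of the longest word sequence the two texts share,
// computed with a rolling dynamic-programming table.
std::size_t longestSharedRun(const std::string& a, const std::string& b) {
    std::vector<std::string> x = tokenize(a), y = tokenize(b);
    std::vector<std::size_t> prev(y.size() + 1, 0), cur(y.size() + 1, 0);
    std::size_t best = 0;
    for (std::size_t i = 1; i <= x.size(); ++i) {
        for (std::size_t j = 1; j <= y.size(); ++j) {
            cur[j] = (x[i - 1] == y[j - 1]) ? prev[j - 1] + 1 : 0;
            best = std::max(best, cur[j]);
        }
        std::swap(prev, cur);
    }
    return best;
}
```

A long shared run between two independently written documents is unlikely to arise by chance, which is why run length is a useful overlap signal.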
The InfoTracker C++ API has been applied in many applications, including plagiarism detection, difference detection, classified document redaction support, automated information retrieval, and multi-word feature selection for text mining. If you are interested in evaluating the InfoTracker API or a specialized application, or in purchasing source code, please contact us.
Competing plagiarism detection tools employ a variety of techniques to detect document content that may have been reproduced without consent. Some of these systems rely on poorly scaling pairwise text comparison methods. Others build indices of fixed-length text samples (e.g., eight-word sequences) to detect co-derived documents. While these techniques can perform well when content is reused in large chunks, they perform inadequately when plagiarism is more subtle.
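The fixed-length-sample approach described above can be sketched as follows. This illustrates the competing technique, not InfoTracker's method: every eight-word window ("shingle") of a text is hashed into a set, and two documents are compared by counting shared shingles. Note how a single word substitution in the middle of a passage invalidates every shingle that spans it.

```cpp
#include <cstddef>
#include <functional>
#include <set>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> words(const std::string& text) {
    std::vector<std::string> out;
    std::istringstream in(text);
    std::string w;
    while (in >> w) out.push_back(w);
    return out;
}

// Hash every k-word window ("shingle") of a text.
std::set<std::size_t> shingles(const std::string& text, std::size_t k) {
    std::vector<std::string> w = words(text);
    std::set<std::size_t> out;
    for (std::size_t i = 0; w.size() >= k && i + k <= w.size(); ++i) {
        std::string joined;
        for (std::size_t j = 0; j < k; ++j) joined += w[i + j] + ' ';
        out.insert(std::hash<std::string>{}(joined));
    }
    return out;
}

// Number of k-word shingles two texts have in common.
std::size_t sharedShingles(const std::string& a, const std::string& b,
                           std::size_t k) {
    std::set<std::size_t> sa = shingles(a, k), sb = shingles(b, k);
    std::size_t n = 0;
    for (std::size_t h : sa) n += sb.count(h);
    return n;
}
```

Because a light paraphrase can drive the shared-shingle count to zero even when most of the wording is copied, shingle indices miss exactly the subtle reuse cases mentioned above.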
Data structures such as suffix trees, which allow documents to be comprehensively compared with a stored corpus, are recognized as the gold standard with regard to accuracy and speed of comparison. These approaches, however, have largely been dismissed for very large scale applications because they were previously limited to in-memory operation or involved unreasonable index update costs. InfoTracker breaks new ground in two ways:
In addition, there are distinct advantages to using a client-based plagiarism detection tool over one of the many Web-based services. The most critical of these is the protection of intellectual property. It is widely recognized (see www.EssayScam.org) that an increasing number of Web sites offering plagiarism detection services are simply fronts used to unscrupulously collect essays that are then resold through paper mills. Other services, such as TurnItIn, have been criticized for archiving and profiting from submitted works without the knowledge or permission of the authors. A desktop client-based solution such as that provided by InfoTracker avoids these concerns.
Key Term Identification
It is often valuable to identify a set of terms that characterize or summarize a collection of documents. For instance, one might wish to establish useful metadata tags or a topic hierarchy by analyzing a corpus of text. Similarly, key terms form the critical foundation of features used in a variety of text mining and machine learning tasks. InfoTracker presents a unique opportunity to identify and rank multi-word terms that are particularly important to a corpus of text. These multi-word key terms can hold substantially more discriminative power than individual words because they capture the context vital to their meaning. Further, unlike more naïve approaches, InfoTracker is not limited to identifying phrases of a fixed length (bi-grams or any specific n-gram) or those matching a defined grammar. Rather, it identifies sequences of words based on their ability to characterize a corpus, regardless of length.
Consider the following list of the top terms extracted from a small collection of product and service reviews. The ranking of the terms is based on the relative frequency of occurrence of each term within and outside these reviews. This key term identification strategy can be used to summarize a document collection or as a preprocessing step for text mining.
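The "relative frequency within and outside the collection" idea can be sketched as follows. This is an illustrative scoring function under simple assumptions (per-word occurrence rates with add-one smoothing on the background corpus); InfoTracker's actual ranking and its unbounded-length term discovery are not shown here.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Whitespace tokenizer used by the scorer below.
std::vector<std::string> toks(const std::string& s) {
    std::istringstream in(s);
    std::vector<std::string> out;
    std::string w;
    while (in >> w) out.push_back(w);
    return out;
}

// Count exact occurrences of a multi-word phrase in a token stream.
std::size_t countPhrase(const std::vector<std::string>& toksIn,
                        const std::vector<std::string>& phrase) {
    if (phrase.empty() || toksIn.size() < phrase.size()) return 0;
    std::size_t n = 0;
    for (std::size_t i = 0; i + phrase.size() <= toksIn.size(); ++i) {
        bool match = true;
        for (std::size_t j = 0; j < phrase.size(); ++j)
            if (toksIn[i + j] != phrase[j]) { match = false; break; }
        if (match) ++n;
    }
    return n;
}

// Relative-frequency score: how much more often the phrase occurs per word
// inside the target collection than in the background corpus. Add-one
// smoothing on the background avoids division by zero for unseen phrases.
double keyTermScore(const std::vector<std::string>& target,
                    const std::vector<std::string>& background,
                    const std::vector<std::string>& phrase) {
    if (target.empty() || background.empty()) return 0.0;
    double fin = static_cast<double>(countPhrase(target, phrase)) / target.size();
    double fout = (countPhrase(background, phrase) + 1.0) / background.size();
    return fin / fout;
}
```

A phrase like "battery life" that is common in product reviews but rare in general text scores high; a phrase that is equally common everywhere scores near 1 and is discarded.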
Document Redaction Support
InfoTracker provides two primary capabilities that can assist document sanitization. First, it maintains a “redaction” memory that captures editing decisions and exploits this knowledge to support future sanitization efforts. Second, it utilizes novel context understanding techniques to reduce the false alarm rates associated with typical ‘dirty word’ checks. Consider a scenario in which InfoTracker is applied to the problem of sanitizing documents related to the 2002 missile strike on a car carrying terrorists in Yemen. An analyst might be told to remove all specific mention of US capabilities and operations in Yemen, including any text that might be used to infer the nature of the attack or potential US-Yemen collaboration in the strike.
The user may begin with a simple ‘dirty words’ list that InfoTracker uses to identify portions of the document requiring attention. The analyst could then highlight regions to be redacted from within the document’s native application (Microsoft Word in this case). Once the redactions are committed, by the user or a quality control officer, this text is added to the system’s knowledge base and can be used to annotate future documents, providing awareness of previous redaction decisions. In most applications, very effective guidance can be provided after just a couple of documents are processed. As shown in the figure below, InfoTracker highlights meaningful text fragments that occurred in previous redactions (rather than highlighting all, potentially chance, occurrences of common terms and phrases). Similarly, InfoTracker can exploit the redaction memory to understand the contexts in which ‘dirty words’ appear so as to filter out benign instances (such as the final occurrence of ‘CIA’ in this figure). Finally, InfoTracker can provide users with details regarding past redaction decisions through a simple point-and-click operation.
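The redaction-memory idea can be sketched as follows. The names here (`RedactionMemory`, `commit`, `flag`) are hypothetical, and the context test, flagging a dirty word only when a neighboring word also appeared in a past redaction, is a deliberately crude stand-in for InfoTracker's actual context understanding techniques.

```cpp
#include <cstddef>
#include <set>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> toks(const std::string& s) {
    std::istringstream in(s);
    std::vector<std::string> out;
    std::string w;
    while (in >> w) out.push_back(w);
    return out;
}

struct RedactionMemory {
    std::set<std::string> vocabulary;  // words seen in committed redactions

    // Record the text of a committed redaction.
    void commit(const std::string& redactedText) {
        for (const std::string& w : toks(redactedText)) vocabulary.insert(w);
    }

    // Flag a dirty word only when an adjacent word also appeared in a past
    // redaction -- a crude context filter that suppresses benign mentions.
    std::vector<std::size_t> flag(const std::string& doc,
                                  const std::set<std::string>& dirtyWords) const {
        std::vector<std::string> w = toks(doc);
        std::vector<std::size_t> hits;
        for (std::size_t i = 0; i < w.size(); ++i) {
            if (!dirtyWords.count(w[i])) continue;
            bool contextual =
                (i > 0 && vocabulary.count(w[i - 1])) ||
                (i + 1 < w.size() && vocabulary.count(w[i + 1]));
            if (contextual) hits.push_back(i);
        }
        return hits;
    }
};

// Convenience: number of flagged dirty-word positions in `doc` after one
// redaction has been committed.
std::size_t flaggedCount(const std::string& redactions, const std::string& doc,
                         const std::string& dirtyWord) {
    RedactionMemory m;
    m.commit(redactions);
    return m.flag(doc, {dirtyWord}).size();
}
```

With "covert strike operation yemen" committed, "the cia strike succeeded" is flagged while "he watched a cia drama" is passed over, mirroring the benign-‘CIA’ filtering described above.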
InfoTracker’s text indexing scheme allows it to efficiently compare all sequences of words (regardless of length) within a document with those stored in a potentially very large document index. By detecting the unique characteristics of shared content (e.g., unusual word sequences or misspellings) InfoTracker can reliably detect meaningful overlaps with the stored text. Some of InfoTracker’s more important attributes and capabilities include:
The InfoTracker prototype has been developed for Windows XP and heavily tested. InfoTracker’s built-in viewer includes document converters that cover PDF, HTML, and Microsoft Office. Users may also utilize InfoTracker through a Microsoft Word toolbar. We continue to extend the capabilities of InfoTracker and are actively seeking Beta users. Please contact TJ Goan at email@example.com for additional information.
This work was supported by the ARDA Advanced Information Assurance program under contract NBCHC030077.