Content similarity detection: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 13:54, 20 August 2011

Plagiarism detection is the process of locating instances of plagiarism within a work or document. The widespread use of computers and the advent of the Internet has made it easier to plagiarize the work of others. Most cases of plagiarism are found in academia, where documents are typically essays or reports. However, plagiarism can be found in virtually any field, including scientific papers, art designs, and source code.

Detection can be either manual or computer-assisted. Manual detection requires substantial effort and excellent memory, and is impractical in cases where too many documents must be compared, or original documents are not available for comparison. Computer-assisted detection allows vast collections of documents to be compared to each other, making successful detection much more likely.

Use of search engines

An internet search engine can be used to look for certain keywords or key sentences from a suspected document on the World Wide Web. This method can be highly effective when used on small and characteristic fragments, for instance a poem or a poetic translation. Although it can easily detect blatant cases, it is less effective when the plagiarizer has mixed multiple small fragments from different sources, and will not return any relevant results if the search engine has not indexed the original source or sources. Also, considerable effort is required to investigate each suspected case.
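The fragment-selection step described above can be sketched in a few lines of Python. The heuristics here (8-word runs, preferring windows with the longest words as a rough proxy for rarity) are illustrative assumptions, not the method of any particular tool; the returned strings are meant to be pasted into a search engine as exact-phrase queries.

```python
import re

def candidate_queries(text, ngram_len=8, max_queries=3):
    """Pick characteristic word n-grams from a suspect document to use
    as exact-phrase ("...") search-engine queries.  The 8-word window
    and longest-words heuristic are illustrative choices."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    queries = []
    for sent in sentences:
        words = sent.split()
        if len(words) < ngram_len:
            continue
        # Prefer the window containing the longest (rarest-looking) words.
        best = max(
            (words[i:i + ngram_len] for i in range(len(words) - ngram_len + 1)),
            key=lambda ng: sum(len(w) for w in ng),
        )
        queries.append('"' + ' '.join(best) + '"')
    queries.sort(key=len, reverse=True)
    return queries[:max_queries]
```

Quoting each fragment asks the engine for an exact-phrase match, which is what makes small, characteristic fragments effective; each returned hit still has to be inspected by hand.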

Plagiarism detection systems

A plagiarism detection system compares suspect documents to a large collection (corpus) of other documents and attempts to match parts of the suspect document to parts of those in the corpus. As with search engines, plagiarism can only be detected if the corpus contains the original source of the plagiarized text.
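A minimal sketch of this comparison, assuming a simple word n-gram overlap measure (Jaccard similarity); real systems use more elaborate fingerprinting, and the function names and dict-based corpus interface here are invented for illustration:

```python
def word_ngrams(text, n=5):
    """All contiguous word n-grams in a text, case-folded."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def best_matches(suspect, corpus, n=5):
    """Rank corpus documents by Jaccard overlap of word n-grams with
    the suspect document.  corpus is a {name: text} dict; a high score
    means the two texts share verbatim word runs."""
    suspect_grams = word_ngrams(suspect, n)
    ranked = []
    for name, doc in corpus.items():
        grams = word_ngrams(doc, n)
        union = suspect_grams | grams
        score = len(suspect_grams & grams) / len(union) if union else 0.0
        ranked.append((name, score))
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked
```

Note the limitation stated above holds here too: a document whose source is not in `corpus` scores near zero regardless of how much of it was copied.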

Academic text-document plagiarism

The general design of academic plagiarism detection systems geared to text documents involves a number of factors:

  • Scope of search – the public internet (via search engines), institutional databases, or a local, system-specific database.
  • Analysis time – the delay between the time a document is submitted and the time results are made available.
  • Document capacity / batch processing – the number of documents the system can process per unit of time.
  • Check intensity – how often, and for which types of document fragments (paragraphs, sentences, fixed-length word sequences), the system queries external resources such as search engines.
  • Comparison algorithm type – the algorithms the system uses to compare documents against each other.
  • Precision and recall – the number of documents correctly flagged as plagiarized, relative to the total number of flagged documents and to the total number of documents that were actually plagiarized. High precision means few false positives; high recall means few false negatives.
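Precision and recall reduce to two ratios over the sets of flagged and actually-plagiarized documents. A minimal sketch in Python (the function name and set-based interface are illustrative, not taken from any particular system):

```python
def precision_recall(flagged, actually_plagiarized):
    """Precision: share of flagged documents that really are plagiarized.
    Recall: share of plagiarized documents that the system flagged."""
    flagged = set(flagged)
    actual = set(actually_plagiarized)
    true_positives = flagged & actual
    precision = len(true_positives) / len(flagged) if flagged else 1.0
    recall = len(true_positives) / len(actual) if actual else 1.0
    return precision, recall
```

For example, a system that flags documents {A, B, C} when {B, C, D} were actually plagiarized has precision 2/3 (A is a false positive) and recall 2/3 (D is a false negative).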

Most large-scale plagiarism detection systems use large, internal databases (in addition to other resources) that grow with each additional document submitted for analysis. However, some consider this feature a violation of students' copyright.

Academic text-document plagiarism systems

The following systems are all web-based and, with the exception of CopyTracker, closed source; the list is non-exhaustive:

Academic program plagiarism

Plagiarism in computer code is also frequent, and detecting it requires different tools than those used for text documents. Significant research has been dedicated to academic source-code plagiarism.[18]

A distinctive aspect of source-code plagiarism is that there are no essay mills such as those found in traditional plagiarism. Since most programming assignments expect students to write programs to very specific requirements, it is very difficult to find existing programs that already meet them. Since integrating external code into one's own is often harder than writing it from scratch, most students who plagiarize copy from their peers.

According to Roy and Cordy,[19] source-code similarity detection algorithms can be classified as based on one of the following:

  • Strings – look for exact textual matches of segments, for instance five-word runs. Fast, but can be confused by renaming identifiers.
  • Tokens – as with strings, but using a lexer to convert the program into tokens first. This discards whitespace, comments, and identifier names, making the system more robust to simple text replacements. Most academic plagiarism detection systems work at this level, using different algorithms to measure the similarity between token sequences.
  • Parse Trees – build and compare parse trees. This allows higher-level similarities to be detected. For instance, tree comparison can normalize conditional statements, and detect equivalent constructs as similar to each other.
  • Program Dependency Graphs (PDGs) – a PDG captures the actual flow of control in a program, and allows much higher-level equivalences to be located, at a greater expense in complexity and calculation time.
  • Metrics – metrics capture 'scores' of code segments according to certain criteria; for instance, "the number of loops and conditionals", or "the number of different variables used". Metrics are simple to calculate and can be compared quickly, but can also lead to false positives: two fragments with the same scores on a set of metrics may do entirely different things.
  • Hybrid approaches – for instance, parse trees + suffix trees can combine the detection capability of parse trees with the speed afforded by suffix trees, a type of string-matching data structure.
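The token-based level can be illustrated with Python's standard tokenize module applied to Python source: identifier names and literals are collapsed to placeholder tokens, so renaming variables does not lower the similarity score. The k-gram Jaccard measure below is a simplified stand-in for the fingerprinting that real systems such as MOSS use, not the algorithm of any named tool:

```python
import io
import keyword
import tokenize

def token_stream(source):
    """Lex Python source, replacing every identifier with NAME and
    every literal with LIT; keywords and operators are kept as-is."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string not in keyword.kwlist:
            out.append('NAME')
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append('LIT')
        elif tok.type == tokenize.OP or tok.string in keyword.kwlist:
            out.append(tok.string)
    return out

def token_similarity(a, b, k=6):
    """Jaccard overlap of k-length token windows of two programs."""
    def kgrams(toks):
        return {tuple(toks[i:i + k]) for i in range(len(toks) - k + 1)}
    ga, gb = kgrams(token_stream(a)), kgrams(token_stream(b))
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0
```

Two copies of the same function that differ only in identifier names produce identical token streams and score 1.0, which is exactly the robustness to simple text replacements described above.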

This classification was developed for code refactoring, not for academic plagiarism detection (an important goal of refactoring is avoiding duplicate code, referred to as code clones in the literature). The approaches above are effective against different levels of similarity: low-level similarity refers to identical text, while high-level similarity can be due to similar specifications. In an academic setting, where all students are expected to code to the same specifications, functionally equivalent code (with high-level similarity) is entirely expected, and only low-level similarity is considered proof of cheating.

Academic program plagiarism systems

MOSS and JPlag can be used free of charge, but both require registration and the software remains proprietary. Personal systems are normal desktop applications, and most of them are both free of charge and released as open-source software.

Literature

  • Carrol, J. (2002). A handbook for detecting plagiarism in higher education. Oxford: The Oxford Centre for Staff and Learning Development, Oxford Brookes University. (96 p.).

See also

References

External links