How hard would it be to set up a tool like the software that, as far as I know, MIT uses to automatically check for plagiarism among theses etc. submitted to their digital library? It would check the text of all Wikimedia projects against e.g. newspaper websites and Google Books, and then publish the results in some visually appealing way to show how much newspapers copy from Wikipedia and from each other. On our projects we regularly see complaints and unhappy discussions about newspaper articles which are just a copy and paste from Wikipedia and still feature a "COPY RESERVED" warning without citing any source... newspapers are by definition arrogant, so nothing can be done to stop them, but an informative tool would be useful and might be as effective as WikiScanner was with regard to IP editing from organizations.
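The core of such a tool is just pairwise text-overlap detection. As a rough sketch (not any particular existing system, and ignoring the crawling and visualization parts), one common approach is to break each document into overlapping word n-grams ("shingles") and compare the shingle sets; the function names below are made up for illustration:

```python
def shingles(text, n=5):
    """Split text into overlapping word n-grams ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=5):
    """Jaccard similarity of the two documents' shingle sets (0.0 to 1.0).

    A high score means long verbatim passages are shared, which is a
    strong plagiarism signal even after light copy editing."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Comparing every Wikipedia revision against every newspaper article this naively would of course be far too slow; a real system would use something like MinHash or a search-engine index to find candidate pairs first, then score only those.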


James Heilman, 18/10/2012 07:26:
We really need a plagiarism detection tool so that we can make sure our
sources are not simply "copy and pastes" of older versions of Wikipedia.
Today I was happily improving our article on pneumonia as I have a day off.
I came across a recommendation, tagged with {{cn}}, that babies should be
suctioned at birth to decrease their risk of pneumonia. So I went to Google
Books and up came a book that supported it perfectly. And then I noticed
that this book supported the previous and next few sentences as well. It
also supported a number of other sections we had in the article but was
missing our references. The book was selling for $340 a copy. Our articles
have improved a great deal since 2007, and yet schools are buying copy-edited
versions of Wikipedia from 5 years ago. The bit about suctioning babies at
birth was wrong and I have corrected it. I think we need to get this
news out. Support Wikipedia and use the latest version online!

Further details / discussion are here

Wikimedia-l mailing list
