Wikimedia-l,
Hello. I am pleased to share four quick ideas for improving Wikipedia with AI 
systems.
Firstly, AI systems could answer questions about potential edits by processing 
and comparing two revisions of an article. These questions might involve 
whether content, e.g., a potential edit, has a neutral point of view or 
otherwise conforms with Wikipedia’s content guidelines.
Wikipedia develops at a rate of about two edits per second [1]. Accordingly, 
administrators and moderators should be able to select which articles have 
their potential revisions or edits processed by one or more AI-powered 
content-processing pipelines. Administrators and moderators could then receive 
numerically weighted or otherwise prioritized messages or alerts from these 
pipelines, prompting them to review potential edits and edited articles.
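As a rough illustration of this first idea, the following Python sketch 
compares two revisions using the standard library's difflib and assigns a 
crude priority score to the added text. The keyword-based scorer is only a 
hypothetical stand-in for what would, in practice, be a call to an LLM or a 
trained classifier:

```python
import difflib

# Hypothetical stand-in for an AI model that scores text for likely
# neutral-point-of-view problems; a real pipeline would call an LLM or
# trained classifier here.
LOADED_TERMS = {"obviously", "clearly", "best", "worst"}

def score_added_text(added_lines):
    """Return a crude 0.0-1.0 priority score for the added text."""
    words = " ".join(added_lines).lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,") in LOADED_TERMS)
    return min(1.0, flagged / len(words) * 10)

def review_edit(old_revision: str, new_revision: str) -> dict:
    """Compare two revisions and produce a prioritized alert record."""
    diff = difflib.unified_diff(
        old_revision.splitlines(), new_revision.splitlines(), lineterm=""
    )
    added = [
        line[1:]
        for line in diff
        if line.startswith("+") and not line.startswith("+++")
    ]
    return {"added_lines": added, "priority": score_added_text(added)}

alert = review_edit(
    "Paris is the capital of France.",
    "Paris is obviously the best capital city.",
)
```

A pipeline could then sort or filter such alert records by priority before 
presenting them to administrators and moderators.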
Secondly, AI systems could assist in moderating talk pages or discussion forums.
Thirdly, new wiki templates could be useful for obtaining and inserting content 
from interoperating AI systems. Computationally, such templates could be 
processed upon: (1) each page view, (2) users pressing a button on such 
pages, (3) each article edit or revision, (4) each hour, day, week, month, 
year, or other interval of time, (5) the interoperating AI system being 
upgraded or switched, or (6) combinations of these or other factors.
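To illustrate these triggers, here is a minimal, hypothetical Python sketch of 
a cache that decides when an AI-backed template should be re-rendered. The 
class, trigger names, and parameters are assumptions for illustration, not an 
existing MediaWiki API:

```python
import time

# Hypothetical sketch: when should an AI-backed template be re-rendered?
# Trigger names mirror the options listed above.
class AiTemplateCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.rendered_at = None
        self.model_version = None
        self.value = None

    def needs_refresh(self, event: str, model_version: str, now: float) -> bool:
        if self.value is None:
            return True                               # never rendered yet
        if event in ("page_edited", "refresh_button_pressed"):
            return True                               # triggers (2) and (3)
        if model_version != self.model_version:
            return True                               # trigger (5): AI upgraded/switched
        return now - self.rendered_at > self.ttl      # trigger (4): interval elapsed

    def render(self, produce, event="page_view", model_version="v1", now=None):
        now = time.time() if now is None else now
        if self.needs_refresh(event, model_version, now):
            self.value = produce()
            self.rendered_at = now
            self.model_version = model_version
        return self.value

cache = AiTemplateCache(ttl_seconds=3600)
first = cache.render(lambda: "42 °F", now=0)
cached = cache.render(lambda: "43 °F", now=10)  # within TTL: cached value kept
```

Trigger (1), rendering on each page view, would correspond to a TTL of zero.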
With respect to specifying the desired formatting of LLM outputs, one can 
observe the Guidance project [2].
Interestingly, the resultant HTML markup could be semantically distinguished as 
being AI-generated:
<p>The current temperature in Seattle, WA, is <span class="ai">42</span>.</p>
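One benefit of such semantic markup is that tools, e.g., browsers, bots, or 
feedback collectors, could locate AI-generated content programmatically. Here 
is a minimal Python sketch, using the standard library's HTMLParser, that 
extracts the text of class="ai" spans (it ignores complications such as void 
elements and multiple class values):

```python
from html.parser import HTMLParser

# Minimal sketch: because AI-generated content carries class="ai",
# its text can be extracted programmatically.
class AiSpanFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []    # True for each open tag marked class="ai"
        self.ai_text = []

    def handle_starttag(self, tag, attrs):
        self.stack.append(("class", "ai") in attrs)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if any(self.stack):    # inside at least one AI-marked element
            self.ai_text.append(data)

finder = AiSpanFinder()
finder.feed(
    '<p>The current temperature in Seattle, WA, is '
    '<span class="ai">42</span>.</p>'
)
```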
Fourthly, if AI-powered templates are to be explored and supported, related 
user experiences could be considered. These might involve Wikipedia users being 
able to visually detect, easily annotate, and, potentially, correct such 
content.
Content generated by AI via templates could be visually styled with a symbol or 
a glyph following it, resembling how external hyperlinks are followed by small 
visual symbols. Hovering over these symbols or glyphs could visually highlight 
the entirety of the AI-generated content. In theory, users could 
(right-)click upon such symbols or glyphs (or upon AI-generated content itself) 
to view a special context menu. This context menu could provide a means of 
indicating whether the AI-generated content was correct or incorrect, and it 
could include a textbox into which users could easily enter corrections. 
Users’ feedback on AI-generated content could then be sent to interoperating 
AI systems.
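Such feedback could, hypothetically, be serialized into a simple record for 
transmission to interoperating AI systems; the field names below are 
illustrative assumptions, not a proposed standard:

```python
import json

# Hypothetical feedback record a context menu could submit when a user
# marks AI-generated content as correct or supplies a correction.
def build_feedback(page, span_id, verdict, correction=None):
    record = {"page": page, "span_id": span_id, "verdict": verdict}
    if correction is not None:
        record["correction"] = correction
    return json.dumps(record)

payload = build_feedback("Seattle", "temp-1", "incorrect", correction="39")
```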

Best regards,
Adam Sobieski
[1] https://en.wikipedia.org/wiki/Wikipedia:Statistics
[2] https://github.com/guidance-ai/guidance

_______________________________________________
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/O42TGILON54RS6CLIR44PEWUXC7MK22I/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org
