[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Denny Vrandečić
I think Jimmy's proposal is spot on.

A generative AI is a tool, and whoever makes the edit is fully responsible
for the edit, no matter whether the text was written by the person or with
the help of a generative tool. This has the potential to open us up to
people who are not good at formulating text, or who are not confident
about their writing. As long as they take full responsibility for the
written text, all is fine.

This is similar to the approach the ACM has taken for AI-generated text.
They decided, first, that a generative model cannot be a co-author, as it
lacks the ability to be morally responsible for the text; and second, that
anything you publish under your name is your responsibility.


On Wed, May 17, 2023 at 11:11 AM Todd Allen  wrote:

> Though, this does run the risk of encouraging people to take the
> "backwards" approach to writing an article--writing some stuff, and then
> (hopefully at least) trying to come up with sources for it.
>
> The much superior approach is to locate the available sources first, and
> then to develop the article based upon what those sources say.
>
> Todd
>
> On Wed, May 17, 2023 at 12:06 PM Samuel Klein  wrote:
>
>>
>> First: Wikipedia style for dense inline citations is one of the most
>> granular and articulate around, so we're pushing the boundaries in some
>> cases of research norms for clarity in sourcing.  That's great; it also
>> means we are sometimes considering nuances that may be new.
>>
>> Second: We're approaching a topic close to my heart, which is
>> distinguishing reference-sources from process-sources.  Right now we often
>> capture process sources (for an edit) in the edit summary, and this is not
>> visible anywhere on the resulting article.  Translations via a translate
>> tool; updates by a script that does a particular class of work (like
>> spelling or grammar checking); applying a detailed diff that was
>> workshopped on some other page.  An even better interface might allow for
>> that detail to be visible to readers of the article [w/o traversing the
>> edit history], and linked to the sections/paragraphs/sentences affected.
>>
>> I think any generative tools used to rewrite a section or article, or to
>> produce a sibling version for a different reading-level, or to generate a
>> timeline or other visualization that is then embedded in the article,
>> should all be cited somehow.  To Jimbo's point, that doesn't belong in a
>> References section as we currently have them.  But I'd like to see us
>> develop a way to capture these process notes in a more legible way, so
>> readers can discover them without browsing the revision history.
>>
>> People using generative tools to draft new material should find reliable
>> sources for every claim in that material, much more densely than you would
>> when summarizing a series of sources yourself.
>> However, as we approach models that can discover sources and check facts,
>> a combination of those with current generative tools could produce things
>> closer to what we'd consider acceptable drafts, and at scale could generate
>> reference works in languages that lack them.  I suggest a separate project
>> for those as the best way to explore the implications of being able to do
>> this at scale, and it should capture the full model/tuning/prompt details of
>> how each edit was generated.  Such an automatically-updated resource would
>> not be a good reliable source, just as we avoid citing any tertiary
>> sources, but could be a research tool for WP editors and modelers alike.
>>
>> SJ
>>
>> On Wed, May 17, 2023 at 9:27 AM Jimmy Wales 
>> wrote:
>>
>>> One way I think we can approach this is to think of it as being the
>>> latest in this progression:
>>>
>>> spellchecker -> grammar checker -> text generation support
>>>
>>> We wouldn't have any sort of footnote or indication of any kind that a
>>> spellchecker or grammar checker was
>>> used by an editor; it's just built into many writing tools.  Similarly,
>>> if a short prompt is used to generate a longer
>>> text, we have no reason to cite that.
>>>
>>> What we do have, though, is a responsibility to check the output.
>>> Spellcheckers can be wrong (suggesting the correct
>>> spelling of the wrong word for example).  Grammar checkers can be wrong
>>> (trying to correct the grammar of a direct quote
>>> for example).  Generative AI models can be wrong - often simply making
>>> things up out of thin air that sound plausible.
>>>
>>> If someone uses a generative AI to help them write some text, that's not
>>> a big deal.  If they upload text without checking
>>> the facts and citing a real source, that's very bad.
>>>
>>>
>>> On 2023-05-17 11:51, The Cunctator wrote:
>>>
>>> Again at no point should even an improved version be considered a
>>> source; at best it would be a research or editing tool.
>>>
>>> On Wed, May 17, 2023, 4:40 AM Lane Chance  wrote:
>>>
 Keep in mind how fast these tools change. ChatGPT, Bard and
 competitors 

[Wikimedia-l] Re: [Wikimediaindia-l] Request for Transparency Regarding WMF Staff in India

2023-05-17 Thread effe iets anders
Hi Jayantilal,

as I'm reading this, mostly as an outsider from a different community who
has great appreciation for everything that has been accomplished in various
communities in India, I can feel the tension. This is unfortunately nothing
new, and it's an ongoing balance that has to be struck between helping out
as a staff member on one hand, and not undercutting the community's ability
to organize itself on the other. This is a challenge, even with the best of
intentions. This only gets harder when there is a (real or perceived)
struggle for influence/power. I have come to understand that India is an
even more complex situation, due to the influence of CIS.

I noticed that the WCI organizers have put forward
https://meta.wikimedia.org/wiki/WikiConference_India_2023/Discussions_and_Feedback
in response to this thread. It might be beneficial for the conversation to
try to follow up on this attempt at conversation, and see how far you
get with regard to the transparency that you seek. If you can achieve this
without the intervention of Maryana, this is probably more advantageous for
everyone involved. That does not mean you have to agree with what is
desirable, but at least you would be able to work from a common base of
facts/information.

If there are aspects on that page that are possibly misleading from your
point of view, or simply information that is missing, you might be
interested in bringing this up on the talk page.

Just a thought,

Lodewijk

On Wed, May 17, 2023 at 2:04 PM Jayantilal Kothari 
wrote:

> Dear Maryana,
>
> Your Listening Tour has been a commendable initiative to understand the
> voice of the community. However, the essence of listening lies in its
> responsiveness.
>
> Over ten days ago, we raised concerns that unfortunately remain
> unaddressed. This isn't a single person's sentiment but a collective voice
> from the Indian Wikimedia community, a voice that has grown stronger since
> the recent Wiki Conference India.
>
> We're concerned about the WMF India staff's involvement in community-led
> events, notably the Wiki Conference India. While their participation is
> welcome, there's a growing perception of encroachment on community-led
> initiatives. The community's autonomy is being compromised, and several
> experienced community members have voiced this concern on the public
> mailing list.
>
> Furthermore, we've observed that the WMF India Staff is assisting
> community members in crafting emails and guiding them on how to handle this
> mailing list situation. As a community, we believe in the ability of our
> members to speak for themselves. Currently, it appears that only those on
> the WMF payroll—either through grant salary/contract or through WMF-funded
> CIS salary—are speaking on behalf of Wiki Conference India, seemingly under
> the guidance of WMF Staff India. We urge WMF India Staff to step back and
> allow the community to voice their concerns independently. Check the Wiki
> Conference India Team on Meta. Most of them are drawing salaries from WMF
> or CIS or have been previous employees in the last 5 years. Very few are
> people who have always been volunteers. Many of them have also not listed
> WMF against their names, because they say they did this conference as
> volunteers. Was there no volunteer willing to come forward and organize? This means
> WMF staff have not been able to grow the community.
>
> We wish to understand the roles, responsibilities, and contributions of
> the WMF India staff who actively participated in the Wiki Conference India.
> Being paid by funds raised through volunteer-built platforms like
> Wikipedia, their active participation in community spaces calls for higher
> accountability.
>
> WMF has spent so much money on Strategy 2030, but the India Conference had
> no session on it. Why? Is India not important? On Sunday there was a
> session on Strategy 2030, but it was removed without telling participants.
> Why?
>
> We would like to clarify that this is not a request for personal
> information—since the identities of these staff members are already
> publicly known—but a call for professional transparency, as we seek to
> understand their specific roles and contributions. If these staff members
> were comfortable taking to the stage and receiving credit at the
> conference, they should be equally comfortable sharing the scope and impact
> of their work with the community that they serve. Their willingness to be
> in the public eye during the conference should extend to their professional
> commitments and achievements. We're keen to know about the partnerships
> they've formed over the past few years that have benefited Indian
> communities, the initiatives the communications team has launched beyond
> financial incentives for Instagram users, the community projects undertaken
> by other staff members, the negative response to fundraising, and the hiring
> practices aimed at empowering local user groups.
>
> Considering the nature of these questions, 

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Todd Allen
Though, this does run the risk of encouraging people to take the
"backwards" approach to writing an article--writing some stuff, and then
(hopefully at least) trying to come up with sources for it.

The much superior approach is to locate the available sources first, and
then to develop the article based upon what those sources say.

Todd

On Wed, May 17, 2023 at 12:06 PM Samuel Klein  wrote:

>
> First: Wikipedia style for dense inline citations is one of the most
> granular and articulate around, so we're pushing the boundaries in some
> cases of research norms for clarity in sourcing.  That's great; it also
> means we are sometimes considering nuances that may be new.
>
> Second: We're approaching a topic close to my heart, which is
> distinguishing reference-sources from process-sources.  Right now we often
> capture process sources (for an edit) in the edit summary, and this is not
> visible anywhere on the resulting article.  Translations via a translate
> tool; updates by a script that does a particular class of work (like
> spelling or grammar checking); applying a detailed diff that was
> workshopped on some other page.  An even better interface might allow for
> that detail to be visible to readers of the article [w/o traversing the
> edit history], and linked to the sections/paragraphs/sentences affected.
>
> I think any generative tools used to rewrite a section or article, or to
> produce a sibling version for a different reading-level, or to generate a
> timeline or other visualization that is then embedded in the article,
> should all be cited somehow.  To Jimbo's point, that doesn't belong in a
> References section as we currently have them.  But I'd like to see us
> develop a way to capture these process notes in a more legible way, so
> readers can discover them without browsing the revision history.
>
> People using generative tools to draft new material should find reliable
> sources for every claim in that material, much more densely than you would
> when summarizing a series of sources yourself.
> However, as we approach models that can discover sources and check facts,
> a combination of those with current generative tools could produce things
> closer to what we'd consider acceptable drafts, and at scale could generate
> reference works in languages that lack them.  I suggest a separate project
> for those as the best way to explore the implications of being able to do
> this at scale, and it should capture the full model/tuning/prompt details of
> how each edit was generated.  Such an automatically-updated resource would
> not be a good reliable source, just as we avoid citing any tertiary
> sources, but could be a research tool for WP editors and modelers alike.
>
> SJ
>
> On Wed, May 17, 2023 at 9:27 AM Jimmy Wales 
> wrote:
>
>> One way I think we can approach this is to think of it as being the
>> latest in this progression:
>>
>> spellchecker -> grammar checker -> text generation support
>>
>> We wouldn't have any sort of footnote or indication of any kind that a
>> spellchecker or grammar checker was
>> used by an editor; it's just built into many writing tools.  Similarly,
>> if a short prompt is used to generate a longer
>> text, we have no reason to cite that.
>>
>> What we do have, though, is a responsibility to check the output.
>> Spellcheckers can be wrong (suggesting the correct
>> spelling of the wrong word for example).  Grammar checkers can be wrong
>> (trying to correct the grammar of a direct quote
>> for example).  Generative AI models can be wrong - often simply making
>> things up out of thin air that sound plausible.
>>
>> If someone uses a generative AI to help them write some text, that's not
>> a big deal.  If they upload text without checking
>> the facts and citing a real source, that's very bad.
>>
>>
>> On 2023-05-17 11:51, The Cunctator wrote:
>>
>> Again at no point should even an improved version be considered a source;
>> at best it would be a research or editing tool.
>>
>> On Wed, May 17, 2023, 4:40 AM Lane Chance  wrote:
>>
>>> Keep in mind how fast these tools change. ChatGPT, Bard and
>>> competitors understand well the issues with lack of sources, and Bard
>>> does sometimes put a suitable source in a footnote, even if it
>>> (somewhat disappointingly) just links to wikipedia. There's likely to
>>> be a variation soon that does a decent job of providing references,
>>> and at that point the role of these tools moves beyond being an
>>> amusement to a far more credible research tool.
>>>
>>> So, these long discussions about impact on open knowledge are quite
>>> likely to have to run again in 2024...
>>>
>>> On Wed, 17 May 2023 at 09:24, Kiril Simeonovski
>>>  wrote:
>>> >
>>> > Thank you everyone for your input.
>>> >
>>> > Your considerations are very similar to mine, and they give a clear
>>> direction towards what the guidelines regarding the use of ChatGPT should
>>> point to.
>>> >
>>> > Best regards,
>>> > Kiril
>>> >
>>> > On 

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Samuel Klein
First: Wikipedia style for dense inline citations is one of the most
granular and articulate around, so we're pushing the boundaries in some
cases of research norms for clarity in sourcing.  That's great; it also
means we are sometimes considering nuances that may be new.

Second: We're approaching a topic close to my heart, which is
distinguishing reference-sources from process-sources.  Right now we often
capture process sources (for an edit) in the edit summary, and this is not
visible anywhere on the resulting article.  Translations via a translate
tool; updates by a script that does a particular class of work (like
spelling or grammar checking); applying a detailed diff that was
workshopped on some other page.  An even better interface might allow for
that detail to be visible to readers of the article [w/o traversing the
edit history], and linked to the sections/paragraphs/sentences affected.

I think any generative tools used to rewrite a section or article, or to
produce a sibling version for a different reading-level, or to generate a
timeline or other visualization that is then embedded in the article,
should all be cited somehow.  To Jimbo's point, that doesn't belong in a
References section as we currently have them.  But I'd like to see us
develop a way to capture these process notes in a more legible way, so
readers can discover them without browsing the revision history.

People using generative tools to draft new material should find reliable
sources for every claim in that material, much more densely than you would
when summarizing a series of sources yourself.
However, as we approach models that can discover sources and check facts, a
combination of those with current generative tools could produce things
closer to what we'd consider acceptable drafts, and at scale could generate
reference works in languages that lack them.  I suggest a separate project
for those as the best way to explore the implications of being able to do
this at scale, and it should capture the full model/tuning/prompt details of
how each edit was generated.  Such an automatically-updated resource would
not be a good reliable source, just as we avoid citing any tertiary
sources, but could be a research tool for WP editors and modelers alike.

SJ

On Wed, May 17, 2023 at 9:27 AM Jimmy Wales 
wrote:

> One way I think we can approach this is to think of it as being the latest
> in this progression:
>
> spellchecker -> grammar checker -> text generation support
>
> We wouldn't have any sort of footnote or indication of any kind that a
> spellchecker or grammar checker was
> used by an editor; it's just built into many writing tools.  Similarly,
> if a short prompt is used to generate a longer
> text, we have no reason to cite that.
>
> What we do have, though, is a responsibility to check the output.
> Spellcheckers can be wrong (suggesting the correct
> spelling of the wrong word for example).  Grammar checkers can be wrong
> (trying to correct the grammar of a direct quote
> for example).  Generative AI models can be wrong - often simply making
> things up out of thin air that sound plausible.
>
> If someone uses a generative AI to help them write some text, that's not a
> big deal.  If they upload text without checking
> the facts and citing a real source, that's very bad.
>
>
> On 2023-05-17 11:51, The Cunctator wrote:
>
> Again at no point should even an improved version be considered a source;
> at best it would be a research or editing tool.
>
> On Wed, May 17, 2023, 4:40 AM Lane Chance  wrote:
>
>> Keep in mind how fast these tools change. ChatGPT, Bard and
>> competitors understand well the issues with lack of sources, and Bard
>> does sometimes put a suitable source in a footnote, even if it
>> (somewhat disappointingly) just links to wikipedia. There's likely to
>> be a variation soon that does a decent job of providing references,
>> and at that point the role of these tools moves beyond being an
>> amusement to a far more credible research tool.
>>
>> So, these long discussions about impact on open knowledge are quite
>> likely to have to run again in 2024...
>>
>> On Wed, 17 May 2023 at 09:24, Kiril Simeonovski
>>  wrote:
>> >
>> > Thank you everyone for your input.
>> >
>> > Your considerations are very similar to mine, and they give a clear
>> direction towards what the guidelines regarding the use of ChatGPT should
>> point to.
>> >
>> > Best regards,
>> > Kiril
>> >
>> > On Wed, 17 May 2023 at 10:11, Ilario valdelli 
>> wrote:
>> >>
>> >> Define "reliable source".
>> >>
>> >> A source is reliable if it can be consulted by people other than the
>> >> editor to check the content.
>> >>
>> >> Is this possible with ChatGPT? No, because if you address the same
>> >> question to ChatGPT, you will have a different answer.
>> >>
>> >> In this case, how can the people verifying the information check that
>> >> the editor did not invent the result?
>> >>
>> >> Kind regards
>> >>
>> >> On 17/05/2023 

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Jimmy Wales
One way I think we can approach this is to think of it as being the 
latest in this progression:


spellchecker -> grammar checker -> text generation support

We wouldn't have any sort of footnote or indication of any kind that a 
spellchecker or grammar checker was
used by an editor; it's just built into many writing tools. Similarly,
if a short prompt is used to generate a longer
text, we have no reason to cite that.

What we do have, though, is a responsibility to check the output.  
Spellcheckers can be wrong (suggesting the correct
spelling of the wrong word for example).  Grammar checkers can be wrong 
(trying to correct the grammar of a direct quote
for example).  Generative AI models can be wrong - often simply making 
things up out of thin air that sound plausible.


If someone uses a generative AI to help them write some text, that's not 
a big deal.  If they upload text without checking

the facts and citing a real source, that's very bad.


On 2023-05-17 11:51, The Cunctator wrote:
Again at no point should even an improved version be considered a 
source; at best it would be a research or editing tool.


On Wed, May 17, 2023, 4:40 AM Lane Chance  wrote:

Keep in mind how fast these tools change. ChatGPT, Bard and
competitors understand well the issues with lack of sources, and Bard
does sometimes put a suitable source in a footnote, even if it
(somewhat disappointingly) just links to wikipedia. There's likely to
be a variation soon that does a decent job of providing references,
and at that point the role of these tools moves beyond being an
amusement to a far more credible research tool.

So, these long discussions about impact on open knowledge are quite
likely to have to run again in 2024...

On Wed, 17 May 2023 at 09:24, Kiril Simeonovski
 wrote:
>
> Thank you everyone for your input.
>
> Your considerations are very similar to mine, and they give a
clear direction towards what the guidelines regarding the use of
ChatGPT should point to.
>
> Best regards,
> Kiril
>
> On Wed, 17 May 2023 at 10:11, Ilario valdelli
 wrote:
>>
>> Define "reliable source".
>>
>> A source is reliable if it can be consulted by people other than
>> the editor to check the content.
>>
>> Is this possible with ChatGPT? No, because if you address the same
>> question to ChatGPT, you will have a different answer.
>>
>> In this case, how can the people verifying the information check
>> that the editor did not invent the result?
>>
>> Kind regards
>>
>> On 17/05/2023 09:08, Kiril Simeonovski wrote:
>> > Dear Wikimedians,
>> >
>> > Two days ago, a participant in one of our edit-a-thons consulted
>> > ChatGPT when writing an article on the Macedonian Wikipedia
that did
>> > not exist on any other language edition. ChatGPT provided
some output,
>> > but the problem was how to cite it.
>> >
>> > The community on the Macedonian Wikipedia has not yet had a
discussion
>> > on this matter and we do not have any guidelines. So, my main
>> > questions are the following:
>> >
>> > * Can ChatGPT be used as a reliable source and, if so, what
>> > would the citation look like?
>> >
>> > * Are there any ongoing community discussions on introducing
guidelines?
>> >
>> > My personal opinion is that ChatGPT should be avoided as a
reliable
>> > source, and only the original source where the algorithm gets the
>> > information from should be used.
>> >
>> > Best regards,
>> > Kiril
>> >
>> > ___
>> > Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org,
guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
https://meta.wikimedia.org/wiki/Wikimedia-l
>> > Public archives at

https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/WMGIBNPN5JNJGUOCLWFCCPD7EL5YN6KU/
>> > To unsubscribe send an email to
wikimedia-l-le...@lists.wikimedia.org
>>
>> --
>> Ilario Valdelli
>> Wikimedia CH
>> Verein zur Förderung Freien Wissens
>> Association pour l’avancement des connaissances libre
>> Associazione per il sostegno alla conoscenza libera
>> Switzerland - 8008 Zürich
>> Wikipedia: Ilario
>> Skype: valdelli
>> Tel: +41764821371
>> http://www.wikimedia.ch
>>

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Paulo Santos Perneta
It's quite interesting how these models ended up being so illiterate and
dumb on source reading and interpretation, while so creative and plausible
at the same time.
I'm sure there's a reason for this; can somebody please point to a place
where this is discussed, if you know it?

Thanks,
Paulo

David Gerard  escreveu no dia quarta, 17/05/2023 à(s)
13:12:

> Note that quite often it just *makes up* a plausible-looking source.
> Because AI text generators just make up plausible text, not accurate
> text.
>
> On Wed, 17 May 2023 at 09:40, Lane Chance  wrote:
> >
> > Keep in mind how fast these tools change. ChatGPT, Bard and
> > competitors understand well the issues with lack of sources, and Bard
> > does sometimes put a suitable source in a footnote, even if it
> > (somewhat disappointingly) just links to wikipedia. There's likely to
> > be a variation soon that does a decent job of providing references,
> > and at that point the role of these tools moves beyond being an
> > amusement to a far more credible research tool.
> >
> > So, these long discussions about impact on open knowledge are quite
> > likely to have to run again in 2024...
> >
> > On Wed, 17 May 2023 at 09:24, Kiril Simeonovski
> >  wrote:
> > >
> > > Thank you everyone for your input.
> > >
> > > Your considerations are very similar to mine, and they give a clear
> direction towards what the guidelines regarding the use of ChatGPT should
> point to.
> > >
> > > Best regards,
> > > Kiril
> > >
> > > On Wed, 17 May 2023 at 10:11, Ilario valdelli 
> wrote:
> > >>
> > >> Define "reliable source".
> > >>
> > >> A source is reliable if it can be consulted by people other than the
> > >> editor to check the content.
> > >>
> > >> Is this possible with ChatGPT? No, because if you address the same
> > >> question to ChatGPT, you will have a different answer.
> > >>
> > >> In this case, how can the people verifying the information check
> > >> that the editor did not invent the result?
> > >>
> > >> Kind regards
> > >>
> > >> On 17/05/2023 09:08, Kiril Simeonovski wrote:
> > >> > Dear Wikimedians,
> > >> >
> > >> > Two days ago, a participant in one of our edit-a-thons consulted
> > >> > ChatGPT when writing an article on the Macedonian Wikipedia that did
> > >> > not exist on any other language edition. ChatGPT provided some
> output,
> > >> > but the problem was how to cite it.
> > >> >
> > >> > The community on the Macedonian Wikipedia has not yet had a
> discussion
> > >> > on this matter and we do not have any guidelines. So, my main
> > >> > questions are the following:
> > >> >
> > >> > * Can ChatGPT be used as a reliable source and, if so, what would
> > >> > the citation look like?
> > >> >
> > >> > * Are there any ongoing community discussions on introducing
> guidelines?
> > >> >
> > >> > My personal opinion is that ChatGPT should be avoided as a reliable
> > >> > source, and only the original source where the algorithm gets the
> > >> > information from should be used.
> > >> >
> > >> > Best regards,
> > >> > Kiril
> > >> >

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Johan Jönsson
As has been pointed out above, we have the hallucination issues, because
AIs/LLMs deal in language and how probable a phrase seems to be, rather
than in facts. Beyond the hallucination issues, their answers can't be
accessed by other editors. And beyond the fact that their answers aren't
published sources, even in a scenario where they could reliably present
information, they would only be relaying what had been found elsewhere.

But is this so different from what we're used to? If I want to use
information from a Wikipedia article elsewhere in the encyclopedia, I don't
cite said article; I go to the sources. If I can't figure out where the
information is coming from, I don't use it.

If you can see where the information is coming from (not currently
possible in the normal ChatGPT experience, as this requires the tool to
retrieve information from a specific place rather than to generate
probable phrases), read that and cite it, if it's a
reliable source. If you can't see where the information is coming from, it
can't be used.

//Johan Jönsson
--


[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread David Gerard
Note that quite often it just *makes up* a plausible-looking source.
Because AI text generators just make up plausible text, not accurate
text.

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/O75PUVCXLCVELZW7VMQY4KR65M6I5BYK/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread The Cunctator
Again at no point should even an improved version be considered a source;
at best it would be a research or editing tool.

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/LNUPXC7OM56CNFNE3JHHSYR7KTNBCLIM/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Lane Chance
Keep in mind how fast these tools change. The teams behind ChatGPT, Bard
and their competitors are well aware of the issues with missing sources,
and Bard does sometimes put a suitable source in a footnote, even if it
(somewhat disappointingly) just links to Wikipedia. There's likely to
be a variant soon that does a decent job of providing references,
and at that point these tools move beyond being an amusement to a far
more credible research tool.

So, these long discussions about impact on open knowledge are quite
likely to have to run again in 2024...

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/DNOFFTF2DECPFETILCWBOVT5AD63R3UH/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Kiril Simeonovski
Thank you everyone for your input.

Your considerations are very similar to mine, and they give a clear
direction for what the guidelines on the use of ChatGPT should say.

Best regards,
Kiril

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/4L4K2BUD3YYTAKN6JPHVSSVGOFHW5AKG/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Ilario valdelli

Define "reliable source".

A source is reliable if it can be consulted by people other than the editor
to check the content.

Is this possible with ChatGPT? No, because if you address the same
question to ChatGPT, you will get a different answer.
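
(A toy sketch of why this happens, assuming nothing about ChatGPT's actual
internals: generative models sample each next token from a probability
distribution, so the same prompt can walk different paths on different runs.
The tiny hand-made distribution below is purely hypothetical.)

```python
import random

# Hypothetical next-word distribution: a crude stand-in for a language
# model. Each word maps to candidate next words and their probabilities.
TOY_MODEL = {
    "<start>": (["The", "A"], [0.6, 0.4]),
    "The": (["answer", "source"], [0.5, 0.5]),
    "A": (["reliable", "plausible"], [0.5, 0.5]),
    "answer": (["varies.", "differs."], [0.5, 0.5]),
    "source": (["varies.", "differs."], [0.5, 0.5]),
    "reliable": (["source."], [1.0]),
    "plausible": (["answer."], [1.0]),
}

def generate(seed: int) -> str:
    """Sample a short 'answer' by walking the toy distribution.

    Stops when the sampled word has no continuations (ends with '.').
    """
    rng = random.Random(seed)
    word = "<start>"
    out = []
    while word in TOY_MODEL:
        candidates, weights = TOY_MODEL[word]
        word = rng.choices(candidates, weights=weights)[0]
        out.append(word)
    return " ".join(out)
```

Re-running `generate` with different seeds (i.e., different sampling runs)
produces different sentences from the same "prompt", which is exactly why
another editor cannot reproduce and verify the answer the first editor got.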


In that case, how can the people verifying the information check that
the editor did not invent the result?


Kind regards



--
Ilario Valdelli
Wikimedia CH
Verein zur Förderung Freien Wissens
Association pour l’avancement des connaissances libre
Associazione per il sostegno alla conoscenza libera
Switzerland - 8008 Zürich
Wikipedia: Ilario
Skype: valdelli
Tel: +41764821371
http://www.wikimedia.ch
___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/BHEH6R7OPS66VGRA2IIZNRCG3IWPGVN6/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread Anton Protsiuk
Hi Kiril,

Thanks for raising an interesting topic.

On the first question – ChatGPT obviously shouldn't be used as a reliable
source, for various reasons, but primarily because it's a text generator
that tends to confidently present completely factually incorrect
information. Even the notion of "consulting ChatGPT" when writing an
article is misguided. (Though I believe it can be beneficial for
supplementary tasks when used with caution, such as helping proofread text
and spot spelling mistakes.)

On the second question – there's a lot of active discussion on this topic
on English Wikipedia. I mostly haven't followed it, but can point you to
this draft policy (and, of course, its talk page):
https://en.wikipedia.org/wiki/Wikipedia:Large_language_models

Best Regards
Anton Protsiuk

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/PNDMO2QELPSHXAXGKHIWZ7LUNHMVPZAJ/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[Wikimedia-l] Re: ChatGPT as a reliable source

2023-05-17 Thread David Gerard
I've mentioned AI text generators on English Wikipedia's Reliable
Sources Noticeboard a couple of times, and the consensus each time has
been that it's obvious that this rubbish absolutely doesn't belong in
en:wp in any manner. The discussions are how to deal with publishers
who indulge in this nonsense. So yes, I would suggest a text generator
could never be used as a source in this manner. The most unreliable of
sources.



- d.

___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/ETULXH7VPWVOTOE73RPPAP7MBSTMNJ3I/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org


[Wikimedia-l] ChatGPT as a reliable source

2023-05-17 Thread Kiril Simeonovski
Dear Wikimedians,

Two days ago, a participant in one of our edit-a-thons consulted ChatGPT
when writing an article on the Macedonian Wikipedia that did not exist on
any other language edition. ChatGPT provided some output, but the problem
was how to cite it.

The community on the Macedonian Wikipedia has not yet had a discussion on
this matter and we do not have any guidelines. So, my main questions are
the following:

* Can ChatGPT be used as a reliable source and, if yes, what would the
citation look like?

* Are there any ongoing community discussions on introducing guidelines?

My personal opinion is that ChatGPT should be avoided as a reliable source,
and only the original source where the algorithm gets the information from
should be used.

Best regards,
Kiril
___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/WMGIBNPN5JNJGUOCLWFCCPD7EL5YN6KU/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org