I really enjoy this discussion.

Because of the latency between human curiosity and insight, Wikipedia is 
losing out to AI chatbots.

The key to the AI chatbots is the UI, which feeds your curiosity by imitating 
human-to-human dialogue.

Would it be possible to build something like this into Wikipedia, too? I would 
love to have a "chat" with Wikipedia.


Teemu

From: Ori Livneh <[email protected]>
Date: Sunday, 18 January 2026 at 0:21
To: Wikimedia Mailing List <[email protected]>
Subject: [Wikimedia-l] Re: Wikipedia at 25: A Wake-Up Call (essay)

On Fri, Jan 16, 2026 at 1:35 AM Erik Moeller <[email protected]> wrote:
A huge reason readers may prefer AI summaries, even if inaccurate, is
to get to an answer more quickly (the same reason Wikipedia itself
outperformed other information sources even when its quality was still
very uneven.)

Yes. It's latency. It was always latency. "Wiki" means quick.

Wikipedia made the sum of human knowledge (or its arithmetic mean, anyway) 
accessible near you: in your home, in your backpack, in your pocket, on your 
person. It shortened the distance from question to answer by abstracting the 
trip to the library or bookshelf, the time spent poring over the table of 
contents, and the wait for the newspaper to arrive, for a new textbook edition, 
or for the translation to appear in your language.

For performance engineers like me, Wikipedia's "end-to-end latency" is the time 
between a reader clicking a link and the article fully rendering on their 
device. For many years, I believed the key to Wikipedia's continued relevance 
was shaving milliseconds off this number by tuning Wikimedia's code and 
infrastructure.

But true end-to-end latency is not measured between server and browser, but 
between curiosity and insight. And it turns out that network and code latency 
contribute only modestly to that number. The milliseconds it takes for 
Wikimedia's servers to transmit an article to your device are dwarfed by the 
time you need to rack your brain for the right terms to query, locate the 
relevant section of the article, interpret its meaning, and relate it to your 
question.

Wikipedia improved on Britannica's end-to-end latency by several orders of 
magnitude. Modern AI is now doing the same to Wikipedia. I can describe to 
Gemini what I want to know using vague, imprecise, or even incorrect terms, and 
it tells me what I might be thinking of, using my language: not merely the 
language listed in my Babel userbox, but terms I understand that relate to 
concepts I already know and are appropriate to my level of understanding.

At its worst, AI generates hallucinated, sycophantic slop. But at its best, it 
is an interface to human knowledge that is not merely incrementally faster than 
browsing Wikipedia, but categorically faster.

I think the key to ensuring the future knowledge infrastructure remains free 
and open is to once again beat closed, commercial platforms on latency, ideally 
by an order of magnitude or more. This is possible, if you again consider true 
end-to-end latency and the invisible factors that contribute to it, like the 
time it takes to distinguish truth from falsehood, and information from 
manipulation.

I'm not sure Wikimedia should lead the charge. Even if their relevance is 
fading somewhat, the projects are an immense trove of value for humanity. Any 
rash effort to remake them from within is likely to destroy more value than it 
creates. But there is plenty of room out there for new things.

I'm glad to see you experimenting in this space, Erik.
_______________________________________________
Wikimedia-l mailing list -- [email protected], guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/[email protected]/message/UOAYYQLZNSVXQBOGLV2OSVX2V5SY2NUD/
To unsubscribe send an email to [email protected]