[Wikitech-l] Re: Global Lua modules and templates

2023-08-29 Thread Felipe Schenone
Hi! Thanks for the amazing review and encouragement!
I just implemented the "promise sequencing" you suggested. Very neat
pattern!
I also made some improvements to the error handling, including earlier
detection of logged-out users.
Hopefully your support will encourage more Lua developers to start using
the tool. :-)
Unfortunately we can't track usage until
https://phabricator.wikimedia.org/T343104 is fixed, but I trust it will be
soon.
Kind regards,


[Wikitech-l] Re: Global Lua modules and templates

2023-08-29 Thread Krinkle
I think this is one of the most awesome things I've seen in a long time.
Has my full support! Thank you for building this and sharing it with everyone!

I've been asked whether we should be concerned about this gadget in terms of 
site load or site performance. I'll share my analysis here also for Felipe's 
benefit and for transparency to others.

*== Performance ==*
Firstly, I'll look at potential database and web server load. What is the 
exposure of this feature? How many queries does it make for a typical 
interaction? Are those queries expensive? Are they well-cached within 
MediaWiki? Are they cacheable at the HTTP level (e.g. CDN/Varnish cache)?

Secondly, I'll look at how it makes edits. Does it always attribute edits to 
the actor? What security risk might there be? Does it act on stale data? Does 
it prevent conflicts under race conditions? Does it handle rate limits?

*== Server load ==*

The gadget looks well-written and well-guarded. It was easy to figure out what 
it does and how it does it. For a typical interaction, the gadget initially 
makes ~100 requests to the Action API, which can be seen in the browser 
DevTools under the Network tab. It uses the *MediaWiki Action API* to fetch 
page metadata and raw page content.

There are a couple of things Synchronizer does that I like:

* The user interface requires specific input in the form of a Wikidata entity 
and a wiki ID. This means that someone casually stumbling on the feature won't 
trigger anything unless they understand what needs to be entered and enter it. 
A good example is shown through placeholder text, but you have to actually 
type something before continuing.
* Once the form is submitted, before anything else, Synchronizer checks 
whether the client is logged in and thus likely to be able to make edits 
through this gadget later. If you're not logged in, it won't make that burst 
of ~100 API requests. Even though a logged-out user could successfully make 
those API requests to render the initial overview, this avoids significant 
load that would only leave the user stranded later on. This removes the vast 
majority of risk right here. In the event of viral linking or misunderstanding 
of what the feature is for, it would have little to no impact.
* When gathering data, it uses *Wikidata* to discover which wikis are known to 
have a version of the same template. This greatly limits the fan-out to "only" 
10-100 wikis as opposed to ~1000 wikis. It also makes sense from a correctness 
standpoint, as templates don't have the same name on all wikis, and those that 
do share a name may not be compatible or otherwise conceptually the same. 
Wikidata is awesome for this (a sketch of such a lookup follows this list).
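
Roughly, the login guard and the Wikidata lookup could look like this. This is 
my own sketch of the pattern, not Synchronizer's actual code; the function and 
variable names are placeholders:

```js
// Sketch only (not Synchronizer's actual code); names are placeholders.
function discoverWikis( entityId ) {
	// Guard: refuse early for logged-out users, before any API fan-out.
	if ( mw.config.get( 'wgUserName' ) === null ) {
		return Promise.reject( new Error( 'Log in to use this tool' ) );
	}
	// Ask Wikidata which wikis have a sitelink for this template item,
	// instead of probing ~1000 wikis by page title.
	var wikidata = new mw.ForeignApi( 'https://www.wikidata.org/w/api.php' );
	return wikidata.get( {
		action: 'wbgetentities',
		ids: entityId,
		props: 'sitelinks'
	} ).then( function ( data ) {
		// Typically 10-100 entries, e.g. { enwiki: { title: 'Template:…' }, … }
		return data.entities[ entityId ].sitelinks;
	} );
}
```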

Synchronizer uses the Action API to fetch page metadata and raw page content. 
This kind of request is fairly cheap and benefits from various forms of caching 
within MediaWiki. These API modules don't currently offer CDN caching, but I 
don't think that's warranted today, given they're fast enough. If this feature 
were accessible to logged-out users and if it made these queries directly when 
merely visiting a URL, then we'd need to think about HTTP caching to ensure 
that any spikes in traffic can be absorbed by our edge cache data centres.
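
As a concrete illustration (my example; the gadget's exact query may differ), 
fetching a page's metadata and raw content is a single request like this:

```js
// Example: fetch the latest revision's timestamp and raw content for
// one page – the kind of cheap, well-cached query described above.
// The title is a placeholder.
new mw.Api().get( {
	action: 'query',
	prop: 'revisions',
	titles: 'Module:Example',
	rvprop: 'content|timestamp',
	rvslots: 'main',
	formatversion: 2
} ).then( function ( data ) {
	var rev = data.query.pages[ 0 ].revisions[ 0 ];
	console.log( rev.timestamp, rev.slots.main.content );
} );
```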

There is one improvement I can suggest, which is to limit the number of 
concurrent requests. It works fine today, but it does technically violate the 
*MediaWiki API Etiquette* (https://www.mediawiki.org/wiki/API:Etiquette) and 
may get caught in API throttling. To mitigate this, you could tweak the 
for-loop in "getMasterData", which currently starts all the `updateStatus` 
calls in parallel. If updateStatus returned a Promise, the loop could instead 
chain the calls together in a linear sequence. For example: `sequence = 
Promise.resolve(); for (…) { sequence = sequence.finally( updateStatus.bind( 
null, module ) ); }`.
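
Spelled out a little more (again a sketch, assuming `modules` is the array 
being iterated and that `updateStatus` is changed to return a Promise for its 
API request):

```js
// Run the per-wiki updateStatus() calls one after another instead of
// firing ~100 requests at once.
var sequence = Promise.resolve();
for ( var i = 0; i < modules.length; i++ ) {
	// .finally() keeps the chain going even if one request fails,
	// so a single error doesn't stall the remaining wikis.
	sequence = sequence.finally( updateStatus.bind( null, modules[ i ] ) );
}
// `sequence` settles after the last wiki has been processed.
```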

*== Editing ==*

For editing, Synchronizer uses all the built-in affordances correctly, e.g. 
mw.Api and mw.ForeignApi, as exposed by the *mediawiki.api ResourceLoader 
module*. This gives the gadget a stable surface, greatly reduces the 
complexity of the implementation, and automatically does all the right things.
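
For instance, a cross-wiki edit through these modules is as simple as the 
following sketch (illustrative only, with placeholder URL and page name; 
`newContent` stands for the wikitext being synced):

```js
// mw.ForeignApi sends the request with the user's own credentials, so
// the edit is attributed to the acting user, and postWithEditToken()
// fetches and attaches the CSRF token automatically.
var api = new mw.ForeignApi( 'https://es.wikipedia.org/w/api.php' );
api.postWithEditToken( {
	action: 'edit',
	title: 'Módulo:Ejemplo',
	text: newContent,
	summary: 'Sync with central version'
} ).then( function () {
	mw.notify( 'Saved!' );
} );
```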

I really like that the gadget goes the extra mile of cleverly figuring out why 
a local template differs from the central one. For example, does the local 
copy match one of the previous versions of the central template? Or was it 
locally forked in a new direction? It then also lets you preview a diff before 
syncing any given wiki. This is really powerful and empowers people to 
understand their actions before performing them.
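
One way to make that distinction (my sketch, not necessarily how Synchronizer 
does it; `central` is assumed to be an mw.ForeignApi for the central wiki and 
`localContent` the local copy's wikitext):

```js
// Check whether the local copy matches any previous revision of the
// central page – distinguishing "outdated copy" from "local fork".
central.get( {
	action: 'query',
	prop: 'revisions',
	titles: 'Module:Example', // placeholder
	rvprop: 'content|ids',
	rvslots: 'main',
	rvlimit: 50,
	formatversion: 2
} ).then( function ( data ) {
	var revs = data.query.pages[ 0 ].revisions;
	var match = revs.find( function ( rev ) {
		return rev.slots.main.content === localContent;
	} );
	console.log( match ? 'Outdated copy of r' + match.revid : 'Local fork' );
} );
```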

To illustrate what could go wrong if someone didn't use the provided mw.Api JS 
utility and instead made naive requests directly to the API endpoint:
* edit() takes care of preventing unresolved *edit conflicts*. It uses the Edit 
API's `basetimestamp` parameter, so that if someone else edited the 

[Wikitech-l] Fwd: [Wikitech-I] Call for Projects and Mentors for Outreachy Round 27!

2023-08-29 Thread Onyinyechi Onifade
Hey Everyone,


The deadline to submit projects on the Outreachy website is fast
approaching! If you're interested in mentoring a project through
Outreachy, share your project ideas as sub-tasks of this Phabricator task:
https://phabricator.wikimedia.org/T343871 before September 29th. We're
looking forward to receiving your projects!


Regards,

Onyinye & Sheila (Wikimedia Org Admins for Outreachy Round 27)



-- Forwarded message -
From: Onyinyechi Onifade 
Date: Thu, Aug 10, 2023 at 4:44 PM
Subject: [Wikitech-I] Call for Projects and Mentors for Outreachy Round 27!
To: , , <
wikid...@lists.wikimedia.org>
Cc: 


Hello everyone,

TL;DR: Wikimedia is participating in the Outreachy Round 27 internship
program [1]. Outreachy's goal is to support people from groups
underrepresented in the technology industry. Interns will work remotely with
mentors from our community. We are seeking mentors to propose projects that
Outreachy interns can work on during their internship. If you have ideas for
coding or non-coding (design, documentation, translation, outreach, research)
projects, share them by Sept. 29, 2023 at 4 pm UTC as subtasks of this
parent task: [2]

Program Timeline

As a mentor, you engage potential candidates in the application period
between October–November (winter round) and help them make small
contributions to your project. You work more closely with the accepted
candidates during the internship period between December–March (winter
round).

Important dates are:

* Aug. 22, 2023 at 4pm UTC - Live Q&A for Outreachy mentors
* September 29, 2023 at 4pm UTC - Project submission deadline


Guidelines for Crafting Project Proposals

* Follow this task description template when you propose a project in
Phabricator: [3]. You can also use this workboard to pick an idea if you
don't have one already. Add the #Outreachy (Round 27) tag.

* A project should take an experienced developer ~15 days and a newcomer
~3 months to complete.

* Each project should have at least two mentors, including one with a
technical background.

* Ideally, the project has no tight deadlines, a moderate learning curve,
and fewer dependencies on Wikimedia's core infrastructure. Projects
addressing the needs of a language community are most welcome.


Learn more about the roles and responsibilities of mentors on
MediaWiki.org: [4][5]



Cheers,

Onyinye & Sheila (Wikimedia Org Admins for Outreachy Round 27)



[1] https://www.mediawiki.org/wiki/Outreachy/Round_27
[2] https://phabricator.wikimedia.org/T343871
[3] https://phabricator.wikimedia.org/tag/outreach-programs-projects/
[4] https://www.mediawiki.org/wiki/Outreachy/Mentors
[5] https://www.outreachy.org/mentor/mentor-faq

[Wikitech-l] Re: Help needed in finding a replacement to the mobile-friendly page content Rest API for my web app

2023-08-29 Thread Mateus Santos
Thanks, Srishti, for taking a first pass at answering this question.

I've created a Wikitech page to centralize the information about
alternatives and the reasoning behind the MCS deprecation:
https://wikitech.wikimedia.org/wiki/MCS_decommission
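
For the sections use case specifically, one avenue worth exploring (an
untested sketch on my part, not an officially blessed replacement) is to
fetch Parsoid HTML from the REST API and split it on the `<section>` elements
that Parsoid emits, per the HTML spec linked in Srishti's reply below:

```js
// Sketch: fetch Parsoid HTML and split it into sections. Parsoid wraps
// each section in <section data-mw-section-id="…">; the lead section
// has id 0 and no heading. The page title is a placeholder.
fetch( 'https://en.wikipedia.org/api/rest_v1/page/html/Earth' )
	.then( function ( res ) { return res.text(); } )
	.then( function ( html ) {
		var doc = new DOMParser().parseFromString( html, 'text/html' );
		doc.querySelectorAll( 'section[data-mw-section-id]' ).forEach( function ( s ) {
			var heading = s.querySelector( 'h2, h3, h4, h5, h6' );
			console.log( heading ? heading.textContent : '(lead)',
				s.textContent.length + ' chars' );
		} );
	} );
```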

I hope that helps, and we can use it to add more information as we learn
more going forward.

Cheers,
Mateus Santos

On Tue, Aug 29, 2023 at 4:12 AM Srishti Sethi  wrote:

> I see that one of the developers has shared a link already in response to
> your question on Telegram:
> https://www.mediawiki.org/wiki/Specs/HTML/2.8.0#Headings_and_Sections
>
> *Srishti Sethi*
> Senior Developer Advocate
> Wikimedia Foundation 
>
>
>
> On Sun, Aug 27, 2023 at 7:59 PM  wrote:
>
>> Help needed in finding a replacement to the mobile-friendly page content
>> Rest API for my web app
>> Hello All,
>> For my website webnomads.in I have been using the mobile-friendly page
>> content REST API for years. Now that it has been discontinued, I am
>> struggling to find a suitable new API, as I was displaying the content
>> in sections. On various pages I read about making use of Parsoid.
>> While I have explored the different API offerings, I could not find any
>> API that delivers the text based on sections. I will be very thankful
>> if someone can guide me or point me to any article showing the use of
>> Parsoid or any other way to deliver page content in text form and in
>> sections.
>> Thanks in advance.
>> Regards,
>> Harsh