[Wikitech-l] Re: Announcing: Patch review board

2023-10-04 Thread bawolff
So unfortunately, interest in the patch review board seems to have died off.

I believe when something doesn't work out, it is good to have a
retrospective. With that in mind, please add thoughts about the patch
review board to
https://www.mediawiki.org/wiki/Talk:Code_review/Patch_board#Retrospective

Thanks,
Brian

On Thu, Feb 2, 2023 at 2:29 PM Brett Cornwall 
wrote:

> On 2023-02-02 14:40, Brian Wolff wrote:
> >On Thursday, February 2, 2023, Brett Cornwall 
> >wrote:
> >
> >> On 2023-02-02 11:33, Zoran Dori wrote:
> >>
> >>> Amazing to hear about this, I'm hoping that this will improve the
> >>> number of merged patches which are stuck on waiting for review.
> >>>
> >>
> >> It also might be important to keep contributors motivated - nothing is
> >> more demoralizing than having a patch sitting for years.
> >>
> >
> >Indeed. I've certainly felt that way at times.
>
> I'm so sorry to hear that you've experienced that. Hopefully this is one
> step towards making the experience better.
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
>
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: New developer feature: $wgUseXssLanguage / x-xss language code

2023-09-29 Thread bawolff
This is clearly yielding some interesting results.

One of the patterns I've noticed is that several of the examples seem to
involve mustache templates. I think there are two reasons for this:

* Mustache templates cannot currently be checked by phan-taint-check.
* Because they are a separate file, the escaping is now fairly far away
from the context where the variable is used. It's easy to lose track of
whether a specific variable is supposed to be escaped between the template
file and the call into TemplateProcessor.

Anyways, I'd like to propose a naming convention: any mustache variable
that is used as raw HTML should have some sort of easily identifiable
prefix, so it is easy to keep track of which parameters are escaped and
which are not. E.g. instead of naming the parameter foo, it would be named
something like HTMLFoo.
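
To illustrate (a hypothetical sketch - the 'widget' template and variable
names here are made up; the template would contain <h2>{{heading}}</h2>
followed by {{{htmlBody}}}):

    // Mustache escapes {{heading}} itself. The "html" prefix on htmlBody
    // signals that the template emits it raw via {{{...}}}, so the caller
    // is responsible for passing already-safe HTML.
    $templateParser = new TemplateParser( __DIR__ . '/templates' );
    echo $templateParser->processTemplate( 'widget', [
        'heading' => $userSuppliedTitle,
        'htmlBody' => Html::element( 'p', [], $userSuppliedText ),
    ] );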

Thoughts?
--
Brian


On Thu, Sep 28, 2023 at 9:01 AM Lucas Werkmeister <
lucas.werkmeis...@wikimedia.de> wrote:

> Hi all! This is an announcement for a new developer feature in MediaWiki.
> If you don’t develop MediaWiki core, extensions or skins, you can stop
> reading :)
>
> MediaWiki interface messages are generally “safe” to edit: when they
> contain markup, it is either parsed (as wikitext), sanitized, or fully
> HTML-escaped. For this reason, administrators are allowed to edit normal
> messages on-wiki in the MediaWiki: namespace, while editing JS code (which
> is more dangerous) is restricted to interface administrators. (A few
> exceptions, messages that are not escaped and which can only be edited by
> interface administrators, are configured in $wgRawHtmlMessages
> .)
> Occasionally, a bug in the software means that a message isn’t properly
> escaped, which can in theory be abused by administrators to effectively
> gain interface administrator powers (by editing a MediaWiki: page for a
> message to contain 

[Wikitech-l] Re: MediaWiki Extensions and Skins Security Release Supplement (1.35.11/1.38.7/1.39.4/1.40.0)

2023-07-03 Thread bawolff
> Wikibase
> + (T339111, CVE-2023-37302) - Style injection into badges on Wikidata due
> to unescaped quotes.
> https://gerrit.wikimedia.org/r/c/933649
> https://gerrit.wikimedia.org/r/c/933650

It should be noted that the description of this issue is incorrect. It is
an XSS, not just a style injection.

--
bawolff
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: gotointerwiki-external

2023-06-01 Thread bawolff
Ensure that the iw_local field is set to 1. Otherwise the interwiki is not
considered safe for automatic redirection.
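
For example (a sketch only - the table and columns are from core, but
adjust for your own table prefix, and note the interwiki data may stay
cached for a while after the change):

    -- Mark the 'hu' interwiki as "local" so MediaWiki redirects to it
    -- automatically instead of showing gotointerwiki-external:
    UPDATE interwiki SET iw_local = 1 WHERE iw_prefix = 'hu';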

--
Brian

On Thu, Jun 1, 2023 at 5:05 AM Bináris  wrote:

> Sorry for disturbing, but I did not find the answer in the docs.
>
> I installed a MW  1.39.3 with an automated tool. Then I inserted a row
> into the interwiki table to make 'hu:' interwiki to recognize huwiki.
>
> Now I write hu:anything into the search box, and get a message that I want
> to leave my wiki. Of course, I know, that's why I did it.
> What I could dig from translatewiki that the name of this message is
> 'gotointerwiki-external'.
> How could I switch this feature off and make my wiki just to go to the
> desired link?
>
> --
> Bináris
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Extension! Re: Wikimedia developer satisfaction survey 2023

2023-06-01 Thread bawolff
Will the results for this be posted soon? It has been more than 3 months.

--
Brian

On Fri, Feb 17, 2023 at 8:30 AM Tyler Cipriani 
wrote:

>
> *Good news—✨you've got another week!✨*
> We're extending the deadline to submit your answers to this year's
> Developer Satisfaction Survey *until Fri, 24 Feb 2023!*
>
> As of yesterday we have:
>
>- 130 complete responses
>- Roughly a 70%/30% staff/volunteer split
>
> We're hoping for 150 responses this year and we're so close.
>
> Thank you to those who've submitted surveys so far.
>
> Everyone else: you've got another week!
>
> Thanks!
> – Tyler
>
> On Fri, Feb 3, 2023 at 2:32 PM Tyler Cipriani 
> wrote:
>
>> Hello!
>>
>> Please take our annual* *Developer Satisfaction Survey*!
>>
>> Link: https://wikimediafoundation.limesurvey.net/484133
>>
>> The survey is open until Fri, 17 Feb 2023—two weeks from today.
>>
>> 
>>
>> This survey is for members of the *Wikimedia Developer Community* and
>> covers the following topics:
>>
>>
>> - Code review tooling and process
>> - Code quality
>> - Phabricator
>> - Continuous Integration
>> - MediaWiki development environments
>> - Beta cluster / Staging
>>
>>
>> Please take the survey if you’ve used the above tools as part of your
>> role developing software for the Wikimedia community.
>>
>> We’re soliciting your feedback to:
>>
>> - Measure developer satisfaction, and
>> - determine where to invest resources in the future
>>
>>
>> We will anonymize, explore, and report the data we gather on
>> mediawiki.org. View previous years' survey results:
>>
>> https://www.mediawiki.org/wiki/Developer_Satisfaction_Survey
>>
>> Privacy statement: This survey will be conducted via a third-party
>> service, which may subject it to additional terms. For more information on
>> privacy and data-handling, see the survey privacy statement
>> 
>> .
>>
>> Thank you!
>>
>> Tyler Cipriani (he/him)
>> Engineering Manager, Release Engineering
>> Wikimedia Foundation
>> 
>>
>> *: “annual,” except we missed 2022 
>>
>> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Reflecting on my listening tour

2023-04-13 Thread bawolff
Thank you for this email. I appreciate your effort to tackle difficult
problems head-on and to recognize that our problems are socio-technical,
not just technical. This email is probably one of the most reassuring
things I have read from someone in WMF management in a very long time.
There were some parts I wanted to give my 2 cents on.

> "I think there are lots of promising opportunities to incentivise people
to pay off technical debt and make our existing stack more sustainable.
Right now there are no incentives for engineers in this regard."

Interesting. To me, it can sometimes feel like we never stop talking about
technical debt. While I think paying off technical debt is important, at
times I feel like we've swung in the opposite direction, where we are
essentially rewriting things for the sake of rewriting things.

> "I strongly believe that Wikipedia will be obsolete by 2030 if we don’t
fix MediaWiki now."

A bold statement. And I appreciate that these headers are meant to trigger
thoughts and not be firm arguments. However, I would certainly be curious
as to the "why" of that statement, in your view.

> What is our strategy for MediaWiki support? Today there is a tug of war
about whether we should support MediaWiki for third-party users, even
though their use cases have diverged significantly from those of Wikimedia
projects. I’m planning a MediaWiki convening in late 2023 to begin tackling
this issue.

I would disagree that third-party use cases have diverged at all from
Wikimedia projects, let alone significantly. (Speaking generally; of course
the wider internet includes people doing crazy things.) Perhaps this is not
the time to discuss this, but I would certainly be curious to know what
people in the WMF think the third-party use case is that doesn't match
Wikimedia's.

> What will it take to have impact at scale?

Aren't we already at scale? (With perhaps the exception of WDQS, which does
have significant scaling issues on the horizon.) Our exponential growth
phase is long over and we have largely come out the other side. Unless you
mean scaling our social decision-making structures, in which case I would
agree that we have failed to scale those effectively in the past.

> how can we think even bigger, and question elephants in the room, which
in part would be to examine the long-standing and seemingly unquestioned
assumption that MediaWiki is the best software to solve all problems we face

I disagree that it is unquestioned. People have talked about this in the
past. However, it usually goes nowhere because (imho) the people who have
brought it up often don't fully understand how MediaWiki is actually used,
and suggest solutions that don't really fit our needs. So perhaps you're
right that it hasn't been seriously brought up. However, replacing software
is much like rewriting it: there are a lot of hidden gotchas.

>Some of the product senior leadership in the recent past have specifically
avoided talking directly with people on English Wikipedia, and this
approach will no longer be applied. Engaging human to human is the best way
I know to help resolve some of the mystery, fear and anger that are present.

This is excellent. Hiding behind an anonymous corporate facade is a major
contributing factor to so many problems (not all of them, of course).

> However, that will absolutely not fix what’s wrong here. We need systemic
solutions. Today, there’s no way to make lasting and mutually binding
agreements with volunteers, and that isn’t a sustainable way to create and
maintain infrastructure software.

There can't be any agreement without consent, and there can't be any
consent when the WMF ignores answers when they are inconvenient. Good luck
with this; it will be hard. Fixing the relationship with the community is
probably the most effective thing you could hope to accomplish. I think
it's possible, but we got to where we are now over years and decades of
broken trust, which will take time to rebuild.

I think an underappreciated factor here is that there are cultural
differences between WMF staff and communities: how they talk, how they
solve disputes, what motivates them, what sounds inauthentic, what
statements sound insulting, etc. Some problems really do just start as
miscommunication between people of different cultural backgrounds failing
to communicate. It's been a long time since I worked at the foundation, so
perhaps things have changed, but from what I remember there was very little
to prepare staff who don't have a community background for how to "fit in"
to the community. After all, how could you trust someone to redesign
Wikipedia who doesn't even seem to know the basics of how Wikipedia works?
Even simple things like failing to sign a talk page post rapidly signal
that the person is an outsider who knows nothing of what it is like to edit
the site. Anyways, one thing I'd like to see is more cultural (for lack of
a better word) training to 

[Wikitech-l] Re: FYI - WMF Product & Technology annual planning 2023/24 - snapshot of work in progress

2023-04-12 Thread bawolff
I just wanted to say that I really appreciate that this is being drafted
in the open, with opportunities for public feedback.

Thanks,
Brian

On Wed, Apr 12, 2023 at 8:46 AM Liam Wyatt  wrote:

> Dear all,
>
> Back in February, the first public steps in the WMF's Annual Planning
> process were published – documenting its Product & Technology
> departments' draft work portfolios
> 
>  (nominally
> called "buckets").
>
> Today, *Part 2* of this documentation is now available, *covering the
> departments' proposed Objectives and Key Results *(OKRs) in a more
> comprehensive and tabular, format:
>
>
> https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Annual_Plan/2023-2024/Draft/Product_%26_Technology/OKRs
>
> 
> This draft documentation is published in the spirit of trying to get
> something public, and published, even though it is not finished yet so that
> there is time to seek (and incorporate) feedback.
>
> If you are interested in reading these draft *WMF Product and Tech* OKRs
> and giving your feedback - please add it to the associated subheading on
> the talkpage
> 
> .
>
> Sincerely,
>
> - Liam [Wittylama]
>
> On Thu, 9 Feb 2023 at 20:24, Birgit Müller  wrote:
>
>> Hi All,
>>
>> The Wikimedia Foundation’s Tech & Product departments have published a
>> first overview on current annual planning work for the fiscal year
>> 2023/2024 (July 1st-June 30th). Comments and questions are welcome on the
>> talk page:
>>
>>
>> https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Annual_Plan/2023-2024/Draft/Product_%26_Technology
>>
>>
>> This is an almost real time “snapshot” of the work that is currently
>> underway to develop the Technology & Product annual plan in a
>> cross-departmental effort. The plan is still at the very high level and
>> work in progress - main areas of work (“buckets”), possible annual
> objectives, initiatives that could
>> help achieve these, first versions of annual key results that could roll
>> into these objectives. Content on the page will change as things evolve,
>> and new thoughts and feedback are incorporated.
>>
>> If you have thoughts or questions at this early stage, please comment on
>> the talk page so that we can keep discussions in one place. If you prefer
>> to wait until things become clearer - this is perfectly fine too!
>>
>>
>> A call for feedback on the specifics of the annual plan is planned for
>> later in the process - this is just a start.
>>
>> Thanks,
>>
>> Birgit
>>
>>
>> --
>> Birgit Müller (she/her)
>> Director of Technical Engagement
>>
>> Wikimedia Foundation 
>> ___
>> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
>> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
>>
>> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
>
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: [Proposal] Disable setting the "Lowest" Priority value in Phabricator

2023-02-27 Thread bawolff
On Mon, Feb 27, 2023 at 4:24 AM Antoine Musso  wrote:

> Le 27/02/2023 à 13:05, David Gerard a écrit :
> > Can I just note that however you word it, closing volunteers' good
> > faith bugs because nobody is available from the organisation right now
> > is an excellent way to get them never to file a bug again.
>
> Hello,
>
> Possibly, though laying them open for years with priority "lowest" is
> not ideal either. At least when the task is closed there is a clear
> intent it is not going to be fixed/implemented.
>
>
> Note: the issue applies to all users (volunteers and paid staff from the
> various organizations participating on Phabricator)
>
> --
> Antoine "hashar" Musso


At least in my experience, one of the most demotivating things is when
people don't call a spade a spade - e.g. Gerrit patches that just languish
in silence instead of someone rejecting them and explaining why. A
rejection might sting in the moment, but at least it allows the person to
either find an alternative path to solving their problem, or move on with
their life. Ambiguous silence is so much worse. Of course, that's no excuse
to reject things rudely - rejections should always be polite and
compassionate, but ultimately clear.

The primary reason why I don't support declining things nobody intends to
work on is that we're an open project - there is no fixed group of people
who are allowed to work on stuff. Anyone can work on things. I also think
having a list of all known bugs, even the ones that nobody wants to fix,
can be a useful thing in and of itself. From that perspective, certain bugs
staying at lowest priority forever seems inevitable and reasonable.

--
Brian
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: maybe somebody can just review and approve this commit of script conversion?

2022-10-24 Thread bawolff
It seems like you've gotten reviews, but you just don't like the answer.

It is extremely unlikely that anyone is going to merge it in over the
objections of other reviewers. The code needs to be structured so the logic
of what is going on is clear. 3000 lines of regexes with half of them
commented out just doesn't meet expectations for how MediaWiki code should
be written.

>it has been said that the file is too big and the commit is too big.
>the code is nearly 3500 lines. i think it is not worth to divide it
>into separate library or files. as library it would be very fit for
>this conversion, not usable for other things. i feel making library as
>"premature optimisation". and laziness, feeling as useless work. also
>just dividing into files, into classes. 3500 lines is not very hard
>for me to browse.

You can feel however you want, but if you want to have the code merged into
MediaWiki and deployed to Wikipedia, you're going to have to put aside your
feelings, or be content with it staying an unmerged patchset forever.

--
bawolff

On Mon, Oct 24, 2022 at 9:03 PM dinar qurbanov  wrote:

> hello
>
> can somebody help with reviewing technical aspects of a cyrillic <->
> latin converter ?
>
> it has been said that the file is too big and the commit is too big.
> the code is nearly 3500 lines. i think it is not worth to divide it
> into separate library or files. as library it would be very fit for
> this conversion, not usable for other things. i feel making library as
> "premature optimisation". and laziness, feeling as useless work. also
> just dividing into files, into classes. 3500 lines is not very hard
> for me to browse.
>
> if anybody wants to separate it into classes and files, feel free to
> edit, if you think it is good for mediawiki. i do not want to make
> such change by myself. i can help with proper names (for classes,
> variables).
>
> i said variables because it is possible to convert some lists of php
> commands into for cycles with arrays, just to make it look shorter. i
> think it is also like premature optimisation, i have more freedom when
> they are as they are, in further editing. i do not want to make such
> change by myself. feel free to edit.
>
> quick link to the newest version of the code:
>
> https://gerrit.wikimedia.org/r/c/mediawiki/core/+/164049/219/includes/language/converters/TtConverter.php
> .
>
> also it was suggested to get rid of regex lookahead and lookbehind.
> and strtr was like suggested. i think i probably cannot easily make
> those changes. and i think, is strtr really faster than preg_replace?
> i googled for benchmark results and i do not see, in the first page,
> like 10 results.
>
> the code is big, but it just modifies a string.
>
> tatar wikipedians have asked me about this code, yesterday, and now i
> try this approach, to write to this mailing list, before other ways to
> go. maybe there is just a lack of some people who can review this code
> and approve it, and maybe this mailing list can help.
>
> other ways to go to the problem's pages:
> https://phabricator.wikimedia.org/T27537 ,
> https://gerrit.wikimedia.org/r/c/mediawiki/core/+/164049 .
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
>
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Request Timeout

2022-09-07 Thread bawolff
On Wed, Sep 7, 2022 at 3:17 PM Ryan Schmidt  wrote:

>
>
> On Sep 7, 2022, at 7:54 AM, Martin Domdey  wrote:
>
> 
> Hi,
>
> it looks like there is nobody who can work on a bug or production error
> like this: https://phabricator.wikimedia.org/T316858
>
> I don't think, that this is a production error but really a bug. If it's
> not possible to open a page (
> https://de.wikipedia.org/wiki/Benutzer:Doc_Taxon/Test4 ) because of
> running into timeout, the time should be increased to open the page to get
> the content of this page.
>
> Is there really nobody from WMF or a volunteer who can help me with this
> issue? If you know somebody feel free to name him/her for contacting
> him/her.
>
> Thank you very much
> Martin (aka Doc Taxon) ...
>
>
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
>
>
> You were told on that task how to access the page contents. Edit the page
> and make it smaller. Timeouts aren’t going to be increased because of
> someone trying to do something borderline malicious such as shelling out to
> Pygments hundreds of times in a page.
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/


One possible fix would be to make syntaxhighlight an expensive parser
function ;)
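
(For the curious, a tag hook can do this via the parser's expensive
function counter. A rough sketch - not the actual SyntaxHighlight code:)

    // Inside the <syntaxhighlight> tag hook:
    public static function parserHook( ?string $text, array $args, Parser $parser ) {
        if ( !$parser->incrementExpensiveFunctionCount() ) {
            // Page is over $wgExpensiveParserFunctionLimit: fall back to
            // plain <pre> output instead of shelling out to Pygments.
            return Html::element( 'pre', [], $text ?? '' );
        }
        // ... highlight via Pygments as usual ...
    }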
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: [Wikimedia-l] Re: Re: Uplifting the multimedia stack (was: Community Wishlist Survery)

2022-01-10 Thread bawolff
Honestly, I find the "not in the annual plan" thing more damning than the
actual issue at hand.

The core competency of the WMF is supposed to be keeping the site running.
The WMF does a lot of things, some of them very useful, others less so, but
at its core its mission is to keep the site going. Everything else should
be secondary to that.

It should be obvious that running a 300 TB+ media store servicing 70
billion requests a month requires occasional investment and maintenance.

And yet, this was not only not in this year's annual plan, it has been
ignored in the annual plan for many, many years. We didn't get to this
state through just one year of neglect.

Which raises the question: if the WMF is not in the business of keeping the
Wikimedia sites going, what is it in the business of?

On Tue, Jan 11, 2022 at 6:01 AM Kunal Mehta  wrote:

> Hi,
>
> On 1/1/22 12:10, Asaf Bartov wrote:
> > It seems to me there are *very few* people who could change status quo,
> > not much more than a handful: the Foundation's executive leadership (in
> > its annual planning work, coming up this first quarter of 2022), and the
> > Board of Trustees.
>
> If the goal is to get paid WMF staff to fix the issues, then you're
> correct. However, I do not believe that as a solution is healthy
> long-term. The WMF isn't perfect and I don't think it's desirable to
> have a huge WMF that tries to do everything and has a monopoly on
> technical prioritization.
>
> The technical stack must be co-owned by volunteers and paid staff from
> different orgs at all levels. It's significantly more straightforward
> now for trusted volunteers to get NDA/deployment access than it used to
> be, there are dedicated training sessions, etc.
>
> Given that the multimedia stack is neglected and the WMF has given no
> indication it intends to work on/fix the problem, we should be
> recruiting people outside the WMF's paid staff who are interested in
> working on this and give them the necessary access/mentorship to get it
> done. Given the amount of work on e.g. T40010[1] to develop an
> alternative SVG renderer, I'm sure those people exist.
>
> Take moving Thumbor to Buster[2] for example. That requires
> forward-porting some Debian packages written Python, and then testing in
> WMCS that there's no horrible regressions in newer imagemagick, librsvg,
> etc. I'm always happy to mentor people w/r to Debian packaging (and have
> done so in the past), and there are a decent amount of people in our
> community who know Python, and likely others from the Commons community
> who would be willing to help with testing and dealing with whatever
> fallout.
>
> So I think the status quo can be changed by just about anyone who is
> motivated to do so, not by trying to convince the WMF to change its
> prioritization, but just by doing the work. We should be empowering
> those people rather than continuing to further entrench a WMF technical
> monopoly.
>
> [1] https://phabricator.wikimedia.org/T40010
> [2] https://phabricator.wikimedia.org/T216815
>
> -- Legoktm
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
>
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: [Wikitech-l] [MediaWiki-l] The Second round of voting for mediawiki logo just started!

2020-10-08 Thread bawolff
Hi, thanks for the response.

However, I think I owe you a bit of an apology. I was sad the choice I
liked didn't make it to the final round, and there was a bit of lashing out
in my previous email, which was not cool. Anyways, the voting process
wasn't perfect, but such processes never are, and that is ok.

Cheers,
Brian

On Thursday, October 8, 2020, Amir Sarabadani  wrote:

> (Sorry for late response, this email fell into cracks of my messy inbox)
>
> On Mon, Sep 28, 2020 at 1:53 PM bawolff  wrote:
>
>> TBH, I was under the impression that the second round was going to be
>> narrowing down to top contenders (maybe the 3 or so top designs), not
>> choosing the top contender (I guess that's my fault though, it wasn't
>> stated anywhere that that was going to be the case or anything).
>>
>
> Actually the plan originally was to put anything that passed the basic
> wiki thresholds (70%) but nothing beside the first proposal did. I could
> add the proposal one and the other ones as runner ups but honestly it feels
> weird advancing a logo design that had an opposition for each support.
>
>
>> It was kind of hard to follow the first round with 20 something proposals
>>
>
> It was only 17, didn't even reach 20.
>
> with some of them benefiting from showing up earlier than others, and most
>> of the votes taking place during the time period where votes were allegedly
>> not going to count yet.
>>
>
> That's true, I accept the mess up on my side (I have been planning this
> and asking around for more than a year now but it's not like you coordinate
> such changes on a monthly basis, I'm definitely learning), I tried to
> compensate by giving a full month for the voting period which is pretty
> long plus giving periodic reminders.
>
>
>> I did notice that some of the people voting had never previously edited
>> mediawiki.org (Or made very few previous edits). It kind of feels a
>> little weird to treat this as a "vote" (and not a "consensus" building
>> exercise) if we don't have eligibility criteria.
>>
>
> This is the part that changing the mediawiki logo is different from the
> usual wikimedia decision making process. The reason is that in for example
> English Wikipedia, the biggest venue of contribution is en.wikipedia.org
> but the biggest venue for contributing to mediawiki is not mediawiki.org,
> you make patches, you report bugs, you help people in IRC, and so on. If
> you counted that, I assume a huge portion of the voters would be considered
> eligible if we count venues like phabricator and gerrit. Also for example
> for "picture of the year competition" in Commons, the eligibility is not
> the number of edits in commons. It's the number of edits in any wiki and if
> we want to count the number of edits in any wiki too, then I'm pretty sure
> virtually every voter would be considered eligible.
>
>>
>> I do kind of wish there was a none of the above option.
>>
>
> Proposal four was the status quo, it clearly didn't pass (with 39%)
>
>
>> Looking through the votes, I definitely see some people saying things
>> like "Least bad option", which is not exactly an inspiring show of support.
>>
>
> Well, for each neutral or weak support in the proposal six, there was one
> person who showed "strong support".  And this is the thing with logos, it's
> not like a voting on a policy change, supporting or not supporting a logo
> is a very subjective matter, for a similar situation look at elections,
> there are people who go crazy about a candidate and people who are just
> like "less horrible than the other candidate" and this is normal, people
> are different with different perspective, If I wanted to force my
> perspective, I would have advanced the first proposal too and even in the
> variants for the current proposal, my favorites are not getting anywhere
> but that's okay. Logos have lots of oobjective factors (like accessibility,
> proper abstraction, color consistency, simplicity, brand awareness, etc.)
> but the biggest one is the general look and feel and it differs from person
> to person. That's why companies do extensive A/B testing on design, it
> caused backlash too, for example a lead designer who left Google in protest
> that they were doing A/B testing on forty different shades of blue which
> basically destroyed the artistic freedom of designers (we are not going in
> that direction but we need to acknowledge the subjectivity of designs)
>
>
> HTH
>
>>
>> --
>> Brian
>>
>> On Mon, Sep 28, 2020 at 8:50 AM Amir Sarabadani 
>> wrote:
>>
>>> Hey,
>>> The first round was 

Re: [Wikitech-l] The Second round of voting for mediawiki logo just started!

2020-09-28 Thread bawolff
TBH, I was under the impression that the second round was going to be
narrowing down to the top contenders (maybe the 3 or so top designs), not
choosing the top contender (I guess that's my fault though; it wasn't
stated anywhere that that was going to be the case). It was kind of hard to
follow the first round with 20-something proposals, with some of them
benefiting from showing up earlier than others, and most of the votes
taking place during the time period when votes were allegedly not going to
count yet. I did notice that some of the people voting had never previously
edited mediawiki.org (or made very few previous edits). It kind of feels a
little weird to treat this as a "vote" (and not a "consensus" building
exercise) if we don't have eligibility criteria.

I do kind of wish there was a none of the above option. Looking through the
votes, I definitely see some people saying things like "Least bad option",
which is not exactly an inspiring show of support.

--
Brian

On Mon, Sep 28, 2020 at 8:50 AM Amir Sarabadani  wrote:

> Hey,
> The first round was using the standard voting process in wikis (using
> support/oppose and the thresholds like 70%) and this is the way we elect
> admins, checkusers or other user rights, or change policies in Wikis. I
> don't recall that there has ever been anyone elected as admin with below
> 70% or we have ever changed any policies with below 70% (not to mention the
> runner up logos are 56% and 61%, basically for any support, they had an
> opposition). Our logo is similar, no logo except proposal six could reach
> seventy percent and while there were good designs that almost made it but
> clearly none of them has enough support (and percentage of support) to
> reach the next round. That's a pity (one of the runner ups was actually by
> me) but if that's what the community wants, I happily accept it.
>
> The second round has always been
> 
> about different variants of the logos that pass the first round.
>
> HTH
>
> On Mon, Sep 28, 2020 at 9:30 AM Adam Wight 
> wrote:
>
>> Hi, thanks for helping coordinate this process!
>>
>> I have concerns about what happened between round 1 and round 2, it seems
>> that we're no longer left with a real choice.  It's unclear what method was
>> used to tally the round 1
>> 
>> votes, was this a "support percentage"?  Whenever a vote is taken, it's
>> important to stick to democratic norms, basically "one person, one vote".
>> Round 2 is entirely variations on a single proposal, which disenfranchises
>> everyone who didn't prefer that design.  Is it too late to discuss?
>>
>> Kind regards,
>> Adam
>> On 9/25/20 11:42 PM, Amir Sarabadani wrote:
>>
>> Hello,
>> The subject line is self-explanatory, you can go to the voting page
>> 
>> and cast your vote.
>>
>> This is going to continue for a month and it's about different variants
>> of the top contender (different colors, different wordmarks, etc.). You
>> need to order logos based on your preference (the most preferred one first,
>> the least preferred one the last) and then cast your vote. The final winner
>> will be chosen using Schulze method
>> .
>>
>> If you have mistakenly voted in the test phase, you can just copy your
>> vote from the test page
>>  to the
>> actual voting page
>> 
>> (the numbers of logos haven't changed).
>>
>> Special thank you to Chuck Roslof from WMF legal for doing the
>> preliminary clearance of the proposal.
>>
>> Have a nice weekend!
>> --
>> Amir (he/him)
>>
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>
>
> --
> Amir (he/him)
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Ethical question regarding some code

2020-08-08 Thread bawolff
On Sat, Aug 8, 2020 at 9:44 PM John Erling Blad  wrote:

> Please stop calling this an “AI” system, it is not. It is statistical
> learning.
>
>
So in other words, it is an AI system? AI is just a colloquial synonym for
statistical learning at this point.

--
Brian
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ethical question regarding some code

2020-08-05 Thread bawolff
That's a tough question, and I'm not sure what the answer is.

There is a little bit of precedent with
https://www.mediawiki.org/w/index.php?oldid=2533048&title=Extension:AntiBot

When evaluating harm, I guess one of the questions is how your approach
compares in effectiveness to other publicly available approaches like
http://www.philocomp.net/humanities/signature.htm &
https://github.com/search?q=authorship+attribution+user:pan-webis-de ?
(I.e. there is more harm if your approach is significantly better than
other already-available tools, and less if they're at a similar level.)

--
Brian

On Thu, Aug 6, 2020 at 2:33 AM Amir Sarabadani  wrote:

> Hey,
> I have an ethical question that I couldn't answer yet and have been asking
> around but no definite answer yet so I'm asking it in a larger audience in
> hope of a solution.
>
> For almost a year now, I have been developing an NLP-based AI system to be
> able to catch sock puppets (two users pretending to be different but
> actually the same person). It's based on the way they speak. The way we
> speak is like a fingerprint and it's unique to us and it's really hard to
> forge or change on demand (unlike IP/UA), as the result if you apply some
> basic techniques in AI on Wikipedia discussions (which can be really
> lengthy, trust me), the datasets and sock puppets shine.
>
> Here's an example, I highly recommend looking at these graphs, I compared
> two pairs of users, one pair that are not sock puppets and the other is a
> pair of known socks (a user who got banned indefinitely but came back
> hidden under another username). [1][2] These graphs are based one of
> several aspects of this AI system.
>
> I have talked about this with WMF and other CUs to build and help us
> understand and catch socks. Especially the ones that have enough resources
> to change their IP/UA regularly (like sock farms, and/or UPEs) and also
> with the increase of mobile intern providers and the horrible way they
> assign IP to their users, this can get really handy in some SPI ("Sock
> puppet investigation") [3] cases.
>
> The problem is that this tool, while being built only on public
> information, actually has the power to expose legitimate sock puppets.
> People who live under oppressive governments and edit on sensitive topics.
> Disclosing such connections between two accounts can cost people their
> lives.
>
> So, this code is not going to be public, period. But we need to have this
> code in Wikimedia Cloud Services so people like CUs in other wikis be able
> to use it as a web-based tool instead of me running it for them upon
> request. But WMCS terms of use explicitly say code should never be
> closed-source and this is our principle. What should we do? I pay a
> corporate cloud provider for this and put such important code and data
> there? We amend the terms of use to have some exceptions like this one?
>
> The most plausible solution suggested so far (thanks Huji) is to have a
> shell of a code that would be useless without data, and keep the code that
> produces the data (out of dumps) closed (which is fine, running that code
> is not too hard even on enwiki) and update the data myself. This might be
> doable (which I'm around 30% sure, it still might expose too much) but it
> wouldn't cover future cases similar to mine and I think a more long-term
> solution is needed here. Also, it would reduce the bus factor to 1, and
> maintenance would be complicated.
>
> What should we do?
>
> Thanks
> [1]
>
> https://commons.wikimedia.org/wiki/File:Word_distributions_of_two_users_in_fawiki_1.png
> [2]
>
> https://commons.wikimedia.org/wiki/File:Word_distributions_of_two_users_in_fawiki_2.png
> [3] https://en.wikipedia.org/wiki/Wikipedia:SPI
> --
> Amir (he/him)
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fixing rule via PHPCBF

2020-03-01 Thread bawolff
In theory, you can use the --sniffs option to specify specific sniffs. See
phpcbf --help.
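
For example (the sniff code and path here are just examples; phpcs -e
lists the sniffs available in your ruleset):

    vendor/bin/phpcbf --sniffs=Generic.Arrays.DisallowLongArraySyntax includes/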

--
Brian


On Mon, Mar 2, 2020 at 7:29 AM Zoran Dori  wrote:

> Hello,
> is it possible to fix a specific rule via phpcbf?
>
> Best regards,
>
> Zoran Dori
> volunteer, Wikimedia Serbia
> s: zoranzoki21.github.io e: zorandori4...@gmail.com
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki API pageview issue

2020-02-24 Thread bawolff
On Tue, Feb 25, 2020 at 1:27 AM MusikAnimal  wrote:

> Unfortunately there's no proper log of redirect changes (I recently filed
> <https://phabricator.wikimedia.org/T240065> for this). There are change
> tags that identify redirect changes
> -- "mw-new-redirect" and "mw-changed-redirect-target", specifically -- but
> I am not sure if this is easily searchable via the action API. Someone on
> this list might know.
>

You can do
https://en.wikipedia.org/w/api.php?action=query&titles=2019%E2%80%9320%20Wuhan%20coronavirus%20outbreak&prop=revisions&rvprop=timestamp|tags|ids|content&rvlimit=max&rvtag=mw-new-redirect&formatversion=2&rvslots=main
or
https://en.wikipedia.org/w/api.php?action=query&titles=2019%E2%80%9320%20Wuhan%20coronavirus%20outbreak&prop=revisions&rvprop=timestamp|tags|ids|content&rvlimit=max&rvtag=mw-changed-redirect-target&formatversion=2&rvslots=main
(You cannot do both in one query, you can only specify one tag at a time).
Furthermore, it looks like given a revision id, you would have to determine
where it redirects yourself, which is unfortunate. I suppose you could look
at
https://en.wikipedia.org/w/api.php?action=parse&oldid=941491141&formatversion=2&prop=text|links
(taking the oldid as the revid from the other query) and either try to
parse the html, or just assume that if there is only one main namespace
link, that is the right one.
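
A rough sketch of that heuristic (a hypothetical snippet, assuming $parsed
is the decoded formatversion=2 JSON from the parse query above):

    // Keep only main-namespace (ns 0) links from the parsed revision.
    $mainLinks = array_values( array_filter(
        $parsed['parse']['links'],
        static function ( $link ) {
            return $link['ns'] === 0;
        }
    ) );
    if ( count( $mainLinks ) === 1 ) {
        // Only one candidate, so assume it is the redirect target.
        $target = $mainLinks[0]['title'];
    }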

Also keep in mind, really old revisions won't have those tags.

--
Brian
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki API pageview issue

2020-02-23 Thread bawolff
As an aside, this may be a case where generators in the api are useful -
e.g.
https://en.wikipedia.org/w/api.php?action=query&generator=redirects&titles=2019%E2%80%9320_coronavirus_outbreak&prop=pageviews&pvipmetric=pageviews&pvipdays=60
(Note: does not include the actual non-redirect article in the results, and
you have to pay close attention to the continue parameters)
https://en.wikipedia.org/w/api.php?action=query&generator=redirects&titles=2019%E2%80%9320_coronavirus_outbreak&prop=pageviews&pvipmetric=pageviews&pvipdays=60&grdlimit=max&formatversion=2

On Mon, Feb 24, 2020 at 4:28 AM bawolff  wrote:

> Hi,
>
> When I tested the api it seemed to work with redirects (e.g.
> https://mediawiki.org/w/api.php?action=query&format=json&prop=pageviews&titles=MediaWiki%7CMain_Page&pvipmetric=pageviews&pvipdays=60&pvipcontinue=
> Where Main_Page redirects to the page MediaWiki )
>
> > Then we attempted to use the redirects of a page and using the old page
> ids to grab the pageview data
>
> Just to be clear, when a page is moved, it keeps its page_id. So redirects
> may have historically had the page_id that the target page has now.
>
> If all else fails, you can look at the big dataset files at
> https://dumps.wikimedia.org/other/analytics/ . They should be available
> (in some form or another) going back to 2007, and I believe they are the
> source of the data that the api and all other tools return.
>
> --
> Brian
>
> On Mon, Feb 24, 2020 at 12:17 AM James Gardner via Wikitech-l <
> wikitech-l@lists.wikimedia.org> wrote:
>
>> Hi all,
>>
>> We are a group of undergraduates working on a project using the MediaWiki
>> API. While working on this project, we ran into a unique issue involving
>> pageviews. When trying to pull pageview data for a particular page, the
>> redirects of a page would not be counted along with the original
>> pageviews.
>> For example, the Hong Kong protests page only has direct views, and not
>> views from previous titles.
>>
>> We attempted to use the wmflabs.org tool, but it only shows data from a
>> certain date. (Example link:
>>
>> https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org=all-access=user=2019-07-01=2020-01-25=2019%E2%80%9320_Hong_Kong_protests|China
>> <https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org=all-access=user=2019-07-01=2020-01-25=2019%E2%80%9320_Hong_Kong_protests%7CChina>
>> <
>> https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org=all-access=user=2019-07-01=2020-01-25=2019%E2%80%9320_Hong_Kong_protests%7CChina
>> >
>> )
>>
>> Then we attempted to use the redirects of a page and using the old page
>> ids
>> to grab the pageview data, but there was no data returned. When we
>> attempted to grab data for a page that we knew would have a long past, but
>> the parameter of "pvipcontinue" did not appear (
>> https://www.mediawiki.org/w/api.php?action=help=query%2Bpageviews
>> ).
>> (Example:
>>
>> https://www.mediawiki.org/wiki/Special:ApiSandbox#action=query=json=pageviews=MediaWiki=pageviews=60=
>> )
>>
>> In the end, we are trying to get an accurate count of view for a certain
>> page no matter the source.
>>
>> Any guidance or assistance is greatly appreciated.
>>
>> Thanks,
>> Jackie, James, Junyi, Kirby
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki API pageview issue

2020-02-23 Thread bawolff
Hi,

When I tested the api it seemed to work with redirects (e.g.
https://mediawiki.org/w/api.php?action=query&format=json&prop=pageviews&titles=MediaWiki%7CMain_Page&pvipmetric=pageviews&pvipdays=60&pvipcontinue=
Where Main_Page redirects to the page MediaWiki )

> Then we attempted to use the redirects of a page and using the old page
ids to grab the pageview data

Just to be clear, when a page is moved, it keeps its page_id. So redirects
may have historically had the page_id that the target page has now.

If all else fails, you can look at the big dataset files at
https://dumps.wikimedia.org/other/analytics/ . They should be available (in
some form or another) going back to 2007, and I believe they are the source
of the data that the api and all other tools return.

--
Brian

On Mon, Feb 24, 2020 at 12:17 AM James Gardner via Wikitech-l <
wikitech-l@lists.wikimedia.org> wrote:

> Hi all,
>
> We are a group of undergraduates working on a project using the MediaWiki
> API. While working on this project, we ran into a unique issue involving
> pageviews. When trying to pull pageview data for a particular page, the
> redirects of a page would not be counted along with the original pageviews.
> For example, the Hong Kong protests page only has direct views, and not
> views from previous titles.
>
> We attempted to use the wmflabs.org tool, but it only shows data from a
> certain date. (Example link:
>
> https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org=all-access=user=2019-07-01=2020-01-25=2019%E2%80%9320_Hong_Kong_protests|China
> 
> <
> https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org=all-access=user=2019-07-01=2020-01-25=2019%E2%80%9320_Hong_Kong_protests%7CChina
> >
> )
>
> Then we attempted to use the redirects of a page and using the old page ids
> to grab the pageview data, but there was no data returned. When we
> attempted to grab data for a page that we knew would have a long past, but
> the parameter of "pvipcontinue" did not appear (
> https://www.mediawiki.org/w/api.php?action=help=query%2Bpageviews
> ).
> (Example:
>
> https://www.mediawiki.org/wiki/Special:ApiSandbox#action=query=json=pageviews=MediaWiki=pageviews=60=
> )
>
> In the end, we are trying to get an accurate count of view for a certain
> page no matter the source.
>
> Any guidance or assistance is greatly appreciated.
>
> Thanks,
> Jackie, James, Junyi, Kirby
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help Further develop for Spell4Wiki App

2020-02-23 Thread bawolff
If you get an invalid CSRF error, it's generally best to just get a new
token and try again.
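
The retry logic I have in mind is roughly this (a PHP sketch of the control
flow only; fetchCsrfToken() and doApiEdit() are hypothetical helpers
standing in for your HTTP calls, which must share the same session
cookies):

    // A fresh token comes from action=query&meta=tokens&type=csrf.
    $token = fetchCsrfToken( $api );
    $result = doApiEdit( $api, $token, $editParams );
    if ( ( $result['error']['code'] ?? '' ) === 'badtoken' ) {
        // Token went stale (e.g. the session rotated): refetch, retry once.
        $token = fetchCsrfToken( $api );
        $result = doApiEdit( $api, $token, $editParams );
    }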

> 2. Once successfully uploaded audio not reflected to UN-Audio words API
   <
https://ta.wiktionary.org/w/api.php?action=query=json=categorymembers=1=%E0%AE%AA%E0%AE%95%E0%AF%81%E0%AE%AA%E0%AF%8D%E0%AE%AA%E0%AF%81:%E0%AE%A4%E0%AE%AE%E0%AE%BF%E0%AE%B4%E0%AF%8D-%E0%AE%92%E0%AE%B2%E0%AE%BF%E0%AE%95%E0%AF%8D%E0%AE%95%E0%AF%8B%E0%AE%AA%E0%AF%8D%E0%AE%AA%E0%AF%81%E0%AE%95%E0%AE%B3%E0%AE%BF%E0%AE%B2%E0%AF%8D%E0%AE%B2%E0%AF%88=50=timestamp=desc
>
and
   Word’s Wiktionary page.


Hmm, as far as I can tell, MediaWiki does not seem to be recording the
{{#ifexist:Media:ta-{{PAGENAME}}.ogg|...  as an image link. But it should.
I don't understand why it is not (for reference, the page I was looking at
was
https://ta.wiktionary.org/wiki/%E0%AE%85%E0%AE%95%E0%AF%8D%E0%AE%95%E0%AE%BF%E0%AE%A9%E0%AE%BF%E0%AE%AE%E0%AF%81%E0%AE%95%E0%AE%9A%E0%AF%8D%E0%AE%9A%E0%AF%82%E0%AE%B0%E0%AE%A3%E0%AE%AE%E0%AF%8D
. Consider
https://ta.wiktionary.org/w/api.php?action=parse&page=%E0%AE%85%E0%AE%95%E0%AF%8D%E0%AE%95%E0%AE%BF%E0%AE%A9%E0%AE%BF%E0%AE%AE%E0%AF%81%E0%AE%95%E0%AE%9A%E0%AF%8D%E0%AE%9A%E0%AF%82%E0%AE%B0%E0%AE%A3%E0%AE%AE%E0%AF%8D&formatversion=2&prop=images
. The ogg file should be recorded as a link there due to the template
வார்ப்புரு:ஒலிக்கோப்பு-தமிழ்  ). Anyways, filed
https://phabricator.wikimedia.org/T245965 about it because this is a
mistake on our end. This means MediaWiki will not automatically change the
categories on file upload.

As a workaround, I would suggest that after upload you do
https://ta.wiktionary.org/w/api.php?action=purge&forcelinksupdate=true&titles=PageUsingTheFile
. That is, do a forcelinksupdate purge on the page that uses the ogg file
that was uploaded.

> Empty append text in edit action is good or bad?

That's called a null edit. It's more or less the same as using
?action=purge&forcelinksupdate=true from the api.

> Client login or OAuth login - Which one is better?

If you're operating a website, then OAuth (including OAuth 1) is the
clearly better option. If you have an app that lives on the client's
machine, then the new OAuth 2 is a possibility. OAuth 2 is probably better,
but it's still really new, and clientlogin is fine too imo.

> In Which license to release uploaded audio file? Public domain or CC
   by zero or any ? and reason?

You need to ensure you get the consent of your users for whatever option
you use. CC-0 gives away all rights to the file, which does make it very
easy to reuse. This is more of a political question; you may want to ask on
Commons or tawiktionary as to what is preferred.

--
Brian


On Sun, Feb 23, 2020 at 6:02 AM Manimaran_K 
wrote:

> Hi all,
>
> I am Manimaran(https://manimaran96.WordPress.com), Free Software Activist
> and Android Developer. I developing the app called Spell4Wiki.
>
> *Spell4Wiki - Spell For Wiki*
>
> Spell4Wiki is a mobile application to record and upload audio to Wiki
> Commons for Wiktionary words. Spell4Wiki also acts as a dictionary. Word
> meanings come from Wiktionary.
>
>
> *Useful Links*
>
> Source code : https://github.com/manimaran96/Spell4Wiki
>
> APK :
>
> https://github.com/manimaran96/Spell4Wiki/releases/download/devapp_v5/spell4wiki_v5.apk
>
> Workflow :
> https://manimaran96.files.wordpress.com/2020/02/spell4wiki_workflow-1.png
>
> Blog :
>
>
> https://manimaran96.wordpress.com/2019/01/06/spell4wiki-mobile-app-to-record-upload-audio-for-wiktionary/
> <
> https://manimaran96.wordpress.com/2020/02/21/spell4wiki-mobile-app-to-record-upload-audio-to-wiki-commons-for-wiktionary-words-part-2/
> >
>
>
> https://manimaran96.wordpress.com/2020/02/21/spell4wiki-mobile-app-to-record-upload-audio-to-wiki-commons-for-wiktionary-words-part-2/
>
>
> While developing I struggled in some places(listed below). If anyone have
> idea about this please help me...
>
> *Problems*
>
>1. While uploading audio some times I got a message “Invalid CSRF”.
>2. Once successfully uploaded audio not reflected to UN-Audio words API
><
> https://ta.wiktionary.org/w/api.php?action=query=json=categorymembers=1=%E0%AE%AA%E0%AE%95%E0%AF%81%E0%AE%AA%E0%AF%8D%E0%AE%AA%E0%AF%81:%E0%AE%A4%E0%AE%AE%E0%AE%BF%E0%AE%B4%E0%AF%8D-%E0%AE%92%E0%AE%B2%E0%AE%BF%E0%AE%95%E0%AF%8D%E0%AE%95%E0%AF%8B%E0%AE%AA%E0%AF%8D%E0%AE%AA%E0%AF%81%E0%AE%95%E0%AE%B3%E0%AE%BF%E0%AE%B2%E0%AF%8D%E0%AE%B2%E0%AF%88=50=timestamp=desc
> >
> and
>Word’s Wiktionary page.
>
> Problem : 1Reason I found
>
>1. I think this comes may be session time out or expire or wrong CSRF
>token.
>2. Once app close session expired
>
> What I tried
>
>1. I am saving first CSRF token after login. Then after every edit
>request use same CSRF token -> Not worked.
>2. Every time getting new CSRF token for every edit request -> Not
>worked.
>3. I managed the cookies using android-lib
> -> Not worked.
>
> Questions
>
>1. How to solve this?
>2. Possible to extend login expire time?
>3. Which 

Re: [Wikitech-l] A monthly purge

2020-02-20 Thread bawolff
Accidentally replied directly instead of to the list like I meant to.

On Thu, Feb 20, 2020 at 8:15 AM bawolff  wrote:

> Some back of the napkin math
>
> If it takes 0.5 seconds to parse a page on average, it would take 289 days
> to refresh all the pages on Wikipedia (assuming we aren't parallelizing the
> task). It definitely seems like a non-trivial amount of computer work.
>
> See also discussion at https://phabricator.wikimedia.org/T157670
> (Basically the proposal is, instead of trying to purge everything in
> alphabetical order, just start purging things that haven't been purged in
> like a year or something. Discussion is fairly old, I don't think anyone is
> working on it anymore).
>
> --
> bawolff
>
> On Thu, Feb 20, 2020 at 7:50 AM Amir E. Aharoni <
> amir.ahar...@mail.huji.ac.il> wrote:
>
>>
>> On Thu, Feb 20, 2020 at 9:26 bawolff <bawolff...@gmail.com> wrote:
>>
>>> Pretty sure the answer is no (Although i don't know for a fact).
>>>
>>> However, parser cache only lasts for 30 days. So pages will get parsed at
>>> least once every 30 days (if viewed). However that's separate from links
>>> update (aka categories, linter, etc).
>>>
>>> I suspect that doing a linksupdate of every article once a month would
>>> take
>>> more than a month.
>>>
>>
>> If it's true, and it may well be, is it conceivable that this will be
>> some kind of a continuous process?
>>
>>
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A monthly purge

2020-02-19 Thread bawolff
Pretty sure the answer is no (although I don't know that for a fact).

However, the parser cache only lasts for 30 days, so pages will get parsed
at least once every 30 days (if viewed). However, that's separate from
links update (aka categories, linter, etc).

I suspect that doing a linksupdate of every article once a month would take
more than a month.

--
Brian

‪On Tue, Feb 18, 2020 at 1:14 PM ‫יגאל חיטרון‬‎ 
wrote:‬

> Hello. Could you tell me, please, if there is a monthly purge of all pages,
> or at least all articles, in all wikis? I remember there were plans about
> this, to aid the Linter, but I don't know what happens now. I consider to
> suggest a new feature to our Village Pump, but it it can be turned on only
> with a purging of all articles. Thank you.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Test Install of MediaWIki for Experimentation

2019-11-21 Thread bawolff
There is also https://en.wikipedia.beta.wmflabs.org/wiki/Main_Page &
https://commons.wikimedia.beta.wmflabs.org/wiki/Main_Page
test.wikipedia.org is closer to the main site (running basically the
version of MW used on Wikipedia, with all the user accounts integrated),
and should not be used for any sort of testing that could be "disruptive".
The test wikis under *.beta.wmflabs.org are more experimental and run the
latest version of MediaWiki. Generally I'd recommend testing on the beta
site, since it uses the absolute newest version.

--
Brian

On Thu, Nov 21, 2019 at 9:14 PM Zoran Dori  wrote:

> Hi,
> there is https://test.wikipedia.org https://test2.wikipedia.org
>
> If you want to install it on your own, see
> https://www.mediawiki.org/wiki/Manual:Installation_guide
>
> Best regards,
> Zoran.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The difference between fileexists-no-change and backend-fail-alreadyexists for action=upload

2019-11-18 Thread bawolff
So fileexists-no-change happens when you're uploading a new version of a
file: MediaWiki calculates the SHA-1 sum, and notices that the img_sha1
field in the DB for the current version of the file is the same as what was
just calculated for the new file. This probably (although not necessarily,
now that SHA-1 is broken) means the file you're uploading is the same as the
one already there.
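Conceptually, the check is something like this (a sketch using local files,
not the actual upload code path; MediaWiki itself stores img_sha1 in
base-36 via its own helper):

<?php
// Hash the incoming file and compare it against the hash of the
// current version of the same title.
$newSha1 = sha1_file( '/tmp/incoming-upload.jpg' );
$currentSha1 = sha1_file( '/path/to/current-version.jpg' );
if ( $newSha1 === $currentSha1 ) {
    // fileexists-no-change: the "new" upload is (almost certainly)
    // byte-identical to the version already on the wiki.
}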

backend-fail-alreadyexists basically means that MediaWiki was about to
write a file somewhere, but there already was a file at that place, and the
overwrite/overwriteSame flags were not set, so there should not have
already been a file in that place (The file already there, may or may not
be the same as the new file). Normally I would assume that something more
frontend-y would give an error before this error would be encountered.
Perhaps it might be encountered during a race condition. This is mostly
speculation as I'm not super familiar with the FileBackend code.

--
bawolff

On Sat, Nov 16, 2019 at 6:51 AM Chen Xinyan  wrote:

> Hi there,
>
> Hope I've found the correct list for asking this question.
>
> I was setting up a CI test case for my MediaWiki Client Library to run a
> bot that will upload a file to https://en.wikipedia.beta.wmflabs.org in
> chunked stash mode. Most of the time, when I perform the final upload
> with the filekey parameter, I receive a fileexists-no-change error from
> the MW API server. This is expected, as there is already an identical file
> with the same title on the site. However, today I received
> backend-fail-alreadyexists. The error message looks like
>
> backend-fail-alreadyexists: The file
> "mwstore://local-swift-eqiad/local-public/archive/9/95/20191116051316!Test_image.jpg"
> already exists.
>
> Does this error usually occur when a user tries to upload the same file?
> What's the difference between fileexists-no-change and
> backend-fail-alreadyexists?
>
> Thanks,
> Xinyan
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia production excellence (September 2019)

2019-10-24 Thread bawolff
> There were five recorded incidents last month, equal to the median for
> this
> and last year. – Explore this data at https://codepen.io/Krinkle/full/wbYMZK

Can't help but feel something is missing here, around the 7th...

--
Brian

On Thu, Oct 24, 2019 at 11:32 PM Krinkle  wrote:

>  Read on Phabricator at
> https://phabricator.wikimedia.org/phame/post/view/173/
> ---
>
> How’d we do in our strive for operational excellence last month? Read on to
> find out!
>
> ##  Month in numbers
>
> * 5 documented incidents. [1]
> * 22 new errors reported. [2]
> * 31 error reports closed. [3]
> * 213 currently open Wikimedia-prod-error reports in total. [4]
>
> There were five recorded incidents last month, equal to the median for this
> and last year. – Explore this data at
> https://codepen.io/Krinkle/full/wbYMZK
>
> To read more about these incidents, their investigations, and pending
> actionables; check
> https://wikitech.wikimedia.org/wiki/Incident_documentation#2019
>
> ## *️⃣ A Tale of Three Great Upgrades
>
> This month saw three major upgrades across the MediaWiki stack.
>
> *Migrate from HHVM to PHP 7.2*
> The client-side switch to toggle between HHVM and PHP 7.2 saw its final
> push — from the 50% it was at previously, to 100% of page view sessions on
> 17 September. The switch further solidified on 24 September when static
> MediaWiki traffic followed suit (e.g. API and ResourceLoader). Thanks Effie
> and Giuseppe for the final push. – More details at
> https://phabricator.wikimedia.org/T219150 and
> https://phabricator.wikimedia.org/T176370.
>
> *Drop support for IE6 and IE7*
> The RFC to discontinue basic compatibility for the IE6 and IE7 browsers
> entered Last Call on 18 September. It was approved on 2 Oct (T232563).
> Thanks to Volker Eckl for leading the sprint to optimise our CSS payloads
> by removing now-redundant style rules for IE6-7 compat. – More at
> https://phabricator.wikimedia.org/T234582.
>
> *Transition from PHPUnit 4/6 to PHPUnit 8*
> With HHVM behind us, our Composer configuration no longer needs to be
> compatible with a “PHP 5.6 like” run-time. Support for the real PHP 5.6 was
> dropped over 2 years ago, and the HHVM engine supports PHP 7 features. But,
> the HHVM engine identifies as “PHP 5.6.999-hhvm”. As such, Composer refused
> to install PHPUnit 6 (which requires PHP 7.0+). Instead, Composer could
> only install PHPUnit 4 under HHVM (as for PHP 5.6). Our unit tests have had
> to remain compatible with both PHPUnit 4 and PHPUnit 6 simultaneously.
>
> Now that we’re fully on PHP 7.2+, our Composer configuration effectively
> drops PHP 5.6, 7.0 and 7.1 all at once. This means that we no longer run
> PHPUnit tests on multiple PHPUnit versions (PHPUnit 6 only). The upgrade to
> PHPUnit 8 (PHP 7.2+) is also unlocked! Thanks Max Sem, Jdforrester and
> Daimona for leading this transition. –
> https://phabricator.wikimedia.org/T192167
>
> ---
>
> ##   Outstanding reports
>
> Take a look at the workboard and look for tasks that might need your help.
> The workboard lists error reports, grouped by the month in which they were
> first observed.
>
> →  https://phabricator.wikimedia.org/tag/wikimedia-production-error/
>
> Or help someone that’s already started with their patch:
> →  https://phabricator.wikimedia.org/maniphest/query/pzVPXPeMfRIz/#R
>
> Breakdown of recent months (past two weeks not included):
>
> * February: 1 report was closed. (1 / 5 reports left).
> * March: 4 / 10 reports left (unchanged).
> * April: 8 / 14 reports left (unchanged). ⚠️
> * May: The last 4 reports were resolved. Done!
> * June: 9 of 11 reports left (unchanged). ⚠️
> * July: 4 reports were fixed! (13 / 18 reports left).
> * August: 6 reports were fixed! (8 / 14 reports left).
> * September: 12 new reports survived the month of September.
>
> ##  Thanks!
>
> Thank you, to everyone else who helped by reporting, investigating, or
> resolving problems in Wikimedia production. Thanks!
>
> Until next time,
>
> – Timo Tijhof
>
> ---
>
> Footnotes:
>
> [1] Incidents. –
>
> https://wikitech.wikimedia.org/wiki/Special:PrefixIndex?prefix=Incident+documentation%2F201909=0=1=1
>
> [2] Tasks created. –
> https://phabricator.wikimedia.org/maniphest/query/XicVcsN1XkVH/#R
>
> [3] Tasks closed. –
> https://phabricator.wikimedia.org/maniphest/query/SXjsllmYHwAO/#R
>
> [4] Open tasks. –
> https://phabricator.wikimedia.org/maniphest/query/47MGY8BUDvRD/#R
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] URL parameter fetching (mediawiki)

2019-10-01 Thread bawolff
The MediaWiki coding convention is to use the WebRequest object.

In the context of a SpecialPage subclass, you would probably do
$this->getRequest()->getVal( 'reason' );   [Note: This combines both POST
and GET values]

See
https://doc.wikimedia.org/mediawiki-core/master/php/classWebRequest.html#a3eecdf9b5d20122630bf5cc9c8506084
for more details.
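For example, in a hypothetical SpecialPage subclass (a sketch; the class
name and the 'reason' handling are illustrative):

<?php
class SpecialExample extends SpecialPage {
    public function __construct() {
        parent::__construct( 'Example' );
    }

    public function execute( $subPage ) {
        $request = $this->getRequest();
        // getVal() returns null when the parameter is absent.
        $reason = $request->getVal( 'reason' );
        if ( $reason !== null && $reason !== '' ) {
            // ... do something with $reason ...
        }
        // getCheck() just asks whether the parameter was set at all.
        $hasReason = $request->getCheck( 'reason' );
    }
}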

--
Brian

On Tue, Oct 1, 2019 at 7:51 PM John Shepherd  wrote:

> Is there a “special” way in mediawiki to get parameters passed by URL or
> do you just use the conventional PHP way ($_GET)? I am trying to check if
> “Special:CreateAccount?reason=“ is set/has a value (and what that value is).
>
> Thank you for your assistance, trying to be convention compliant.
> -TheSandDoctor
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Patchsets by new Gerrit contributors waiting for code review and/or merge

2019-09-10 Thread bawolff
Just FYI, both Page Forms and Cargo are maintained by Yaron.

--
Brian

On Tue, Sep 10, 2019 at 3:20 AM Andre Klapper 
wrote:

> CR0: Please review and provide guidance if you are familiar with the
> code, and decide (CR±1 or CR±2):
>
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Cargo/+/480039/
> ** Added new feature: order virtual fields by the order they inserted to
> list on cargo store
> ** 2018-December-17
> ** Maintainers/Stewards: ???
>
> *
> https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/UnblockMe/+/508045/
> ** Edit Project Config
> *
> https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/UnblockMe/+/508047/
> ** Adding files
> ** Author is also extension author; author might need to get aware of
>
> https://www.mediawiki.org/wiki/Gerrit/Privilege_policy#Requesting_Gerrit_privileges
> or what's the policy on self-merging these days?
>
> * https://gerrit.wikimedia.org/r/#/c/pywikibot/core/+/508092/
> ** remove dewiki archive template from edit restrictions
> ** 2019-May-05
> ** Maintainers/Stewards: ??? (not sure where Pywikibot lists that)
>
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/507821/
> ** Modify "SiteExporter.php" and "SiteImporter.php" to handle
> "LanguageCode" property as well
> ** 2019-Jul-14
> ** Maintainers/Stewards: MediaWiki Platform Team
>
> * https://gerrit.wikimedia.org/r/#/c/operations/mediawiki-config/+/508130/
> ** Create new http://www.mediawiki.org/xml/sitelist-1.1/ to reference
> sitelist-1.1.xsd
> ** 2019-May-07
> ** Maintainers/Stewards: MediaWiki Platform Team ?
>
> *
> https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/PageForms/+/509629/
> ** Minimize multiple instance forms when using the Header Tabs extension
> ** 2019-May-13
> ** Maintainers/Stewards: ???
>
> *
> https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/PageForms/+/512642/
> ** Fix autocomplete filter function for no \w chars.
> ** 2019-May-27
> ** Maintainers/Stewards: ???
>
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Popups/+/527547/
> ** Add popup support for annotated math
> ** 2019-Aug-05
> ** Maintainers/Stewards: WMF Reading Team
>
> * https://gerrit.wikimedia.org/r/#/c/wikidata/query/rdf/+/532373/
> ** PARALLEL_LOADER
> ** 2019-Aug-29
> ** Maintainers/Stewards: ???
>
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/services/parsoid/+/508165/
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/services/parsoid/+/508176/
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/services/parsoid/+/512628/
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/services/parsoid/+/512632/
> * https://gerrit.wikimedia.org/r/#/c/wikipeg/+/510311/
> * https://gerrit.wikimedia.org/r/#/c/wikipeg/+/511820/
> * https://gerrit.wikimedia.org/r/#/c/wikipeg/+/516455/
> * https://gerrit.wikimedia.org/r/#/c/wikipeg/+/516457/
> *
> https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/ParserFunctions/+/511030/
> * https://gerrit.wikimedia.org/r/#/c/wikipeg/+/510070/
> ** Maintainers/Stewards: Parsing Team
> ** See
> https://lists.wikimedia.org/pipermail/wikitech-l/2019-June/092239.html
>
>
>
> CR+1: Please help make a decision (CR±1, CR±2) on these CR+1 patches:
>
> * https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/429533/
> ** ApiEditPage: Don't swap undo and undoafter parameters
> ** 2019-Jun-21
> ** Maintainers/Stewards: MediaWiki Platform Team
>
>
>
> Read https://www.mediawiki.org/wiki/Gerrit/Code_review#By_project how
> you can get notified of new patches in your code areas of interest.
>
> Thanks in advance for your reviews!
>
>
>
> Of last email's 28 listed patches, 4 got merged, 2 got abandoned, 3 got
> CR-1, 1 got CR+1, 1 got V-1.
> Thanks to Yaron, tosfos, Tgr, bblack, Jforrester, Legoktm, tstarling,
> TheDJ, hashar!
>
>
>
> Maintainers/Stewards data taken from
> https://www.mediawiki.org/wiki/Developers/Maintainers
> CR0  source:
> https://gerrit.wikimedia.org/r/#/q/ownerin:newcomers+status:open+label:Verified%253E%253D0+label:Code-Review%253D0
> CR+1
> 
> source:
> https://gerrit.wikimedia.org/r/#/q/ownerin:newcomers+status:open+label:Verified%253E%253D1+label:Code-Review%253E%253D%252B1+-label:Code-Review%253C%253D0
>
> --
> Andre Klapper (he/him) | Bugwrangler / Developer Advocate
> https://blogs.gnome.org/aklapper/
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] For title normalization, what characters are converted to uppercase ?

2019-08-05 Thread bawolff
Apparently that will change in PHP 7.3, which we will move to eventually,
but probably not anytime soon: https://3v4l.org/W7TiC

--
bawolff
On Mon, Aug 5, 2019 at 12:32 PM Nicolas Vervelle 
wrote:

> Last question (I believe) :
> I've implemented something similar to Php72ToUpper in WPCleaner, and it
> seems to work fine for removing false positives.
> I have only one left on frwiki: ⅷ
> <https://fr.wikipedia.org/w/index.php?title=%E2%85%B7&redirect=no>.
> My code still converts it to uppercase, but on frwiki there is one page for
> the lowercase letter, and one page for the uppercase letter, so this letter
> is not converted to uppercase by the current MediaWiki version.
> Is it missing from Php72ToUpper to prevent it from being converted with PHP 7.2?
>
> Nico
>
> On Mon, Aug 5, 2019 at 8:45 AM Nicolas Vervelle 
> wrote:
>
> > Thanks Giuseppe !
> >
> > I've subscribed to T219279 to know when the pages are properly converted,
> > and when I can remove the hack in my code.
> >
> > Nico
> >
> > On Mon, Aug 5, 2019 at 7:03 AM Giuseppe Lavagetto <
> > glavage...@wikimedia.org> wrote:
> >
> >> On Sun, Aug 4, 2019 at 11:34 AM Nicolas Vervelle 
> >> wrote:
> >>
> >> > Thanks Brian,
> >> >
> >> > Great for the link to Php72ToUpper.php !
> >> > I think I understand with it : for example, the first line says 'ƀ' =>
> >> 'ƀ',
> >> > which should mean that this letter shouldn't be converted to uppercase
> >> by
> >> > MW ?
> >> > That's one of the letter I found that wasn't converted to uppercase
> and
> >> > that was generating a false positive in my code : so it's because
> >> specific
> >> > MW code is preventing the conversion :-)
> >> >
> >>
> >> Hi!
> >>
> >> No, that file is a temporary measure during a transition between two
> >> versions of php.
> >>
> >> In HHVM and PHP 5.x, calling mb_toupper("ƀ") would give the erroneous
> >> result "ƀ".
> >>
> >> In PHP 7.x, the result is the correct capitalization.
> >>
> >> The issue is that the titles of wiki articles get normalized, so under
> >> php7
> >> we would have
> >>
> >> ƀar => Ƀar
> >>
> >> which would prevent you from being able to reach the page.
> >>
> >> Once we're done with the transition and we go through the process of
> >> coverting the (several hundred) pages/users that have the wrong title
> >> normalization, we will remove that table, and obtain the correct
> >> behaviour.
> >>
> >> You just need to subscribe https://phabricator.wikimedia.org/T219279
> and
> >> wait for its resolution I think - most unicode horrors are fixed in
> recent
> >> versions of PHP, including the one you were citing.
> >>
> >> Cheers,
> >>
> >> Giuseppe
> >> --
> >> Giuseppe Lavagetto
> >> Principal Site Reliability Engineer, Wikimedia Foundation
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] For title normalization, what characters are converted to uppercase ?

2019-08-03 Thread bawolff
MediaWiki uses PHP's mb_strtoupper().

I believe this uses the normal Unicode uppercase algorithm. However, this
can vary depending on the version of Unicode. We are currently in the
process of switching to PHP 7, but for the moment we are still using HHVM's
uppercasing code. There's a list of differences between HHVM and PHP 7.2
uppercasing at
https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/Php72ToUpper.php
[All this is probably subject to change]
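The way a transitional table like that gets applied is roughly the
following (a sketch, not the actual wmf-config code):

<?php
// Uppercase the first letter of a title, but let a transitional
// override table win over mb_strtoupper(), so normalization keeps
// matching the old (HHVM-era) behaviour until the data is migrated.
function ucfirstTitle( $title, array $overrides ) {
    $first = mb_substr( $title, 0, 1 );
    $upper = $overrides[$first] ?? mb_strtoupper( $first );
    return $upper . mb_substr( $title, 1 );
}
// e.g. [ 'ƀ' => 'ƀ' ] means "leave this character alone for now".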

However, I am at a loss as to why HHVM & PHP < 5.6 [1] wouldn't map that
character, since the ɽ -> Ɽ mapping has been present since Unicode 5
(2006). Guess it was using really old Unicode data or something.

See also  bug T219279 [2]

--
Brian

[1] https://3v4l.org/GHt3b
[2] https://phabricator.wikimedia.org/T219279

On Sat, Aug 3, 2019 at 7:57 AM Nicolas Vervelle  wrote:

> Hello,
>
> On most wikis, MediaWiki is configured to convert the first letter of a
> title to uppercase, but apparently it's not converting every Unicode
> character: for example, on frwiki ɽ
>  is a
> different article than Ɽ , even
> if
> the second character is the uppercase version of the first one in Unicode.
>
> So, what characters are actually converted to uppercase by the title
> normalization ?
>
> I need to know this information to stop reporting some false positives in
> WPCleaner .
>
> Thanks, Nico
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Developer account creation

2019-04-23 Thread bawolff
Hi,

I was going to register an account for you, but it looks like there's
already a Luca Mauri  registered in gerrit (from Jan
13, 2019 [1]).

In any case, if all else fails, you can upload a patch via
https://tools.wmflabs.org/gerrit-patch-uploader/ . It's also possible to add
a patch as an attachment (using the git format-patch format) to
phabricator, and ask someone to upload it to gerrit for you.

Sorry for the inconvenience.

--
Brian

[1] https://wikitech.wikimedia.org/wiki/Special:Log/Luca_Mauri

On Mon, Apr 22, 2019 at 11:54 AM Luca Mauri  wrote:

> Dear All,
> I would need to create a developer account for a modification I'd like
> to propose as per https://phabricator.wikimedia.org/T221149
>
> I followed the documentation to this link
> https://wikitech.wikimedia.org/wiki/Special:CreateAccount
> I see the account creation is temporarily unavailable and I read why
> this is happening.
> What is not clear to me is whether there is any forecast for the
> registration to be available again.
> And, in the meantime, is there any other way to request an account?
>
> Thanks and have a nice day
>
> Luca
>
> ---
> This email has been checked for viruses by AVG.
> http://www.avg.com
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] resource loader; 1.32 alpha; 1.32 stable breaking change loading scripts in widgets

2019-04-04 Thread bawolff
I don't know if it's best practice to do this, but core seems to do:

// Queue the callback until ResourceLoader itself has loaded:
( window.RLQ = window.RLQ || [] ).push( function () {
    mw.loader.using( 'ext.myextension' ).then( function () {
        console.log( 'library loaded' );
    } );
} );

You probably don't have many other options if you are using the Widgets
extension.
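If you control the PHP side of the output, the helpers Bartosz mentions
below can generate that wrapper for you, roughly like this (a sketch;
check the current method signatures before relying on it):

<?php
// Let ResourceLoader build the RLQ/mw.loader.using wrapper.
$js = ResourceLoader::makeInlineCodeWithModule(
    'ext.myextension',
    "console.log( 'library loaded' );"
);
$out->addScript( Html::inlineScript( $js ) ); // $out is an OutputPage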

--
Brian

On Thu, Apr 4, 2019 at 4:08 PM Bartosz Dziewoński 
wrote:

> Best practice is to use the PHP methods which generate the required
> wrappers. Have a look at ResourceLoader::makeLoaderConditionalScript()
> and ResourceLoader::makeInlineCodeWithModule().
>
> Alternatively, if it's possible, it is ideal to put the initialization
> code into another module and load it with addMobules() etc. as usual,
> instead of inlining the code in the HTML source.
>
> --
> Bartosz Dziewoński
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-19 Thread bawolff
On Tue, Mar 19, 2019 at 3:49 PM John Erling Blad  wrote:

>
>
> The devs are not the primary user group, and they never will be. An
> editor is a primary user, and (s)he has no idea where the letters
> travel or how they are stored. A reader is a primary user, and
> likewise (s)he has no idea how the letters emerge on the screen.

> The devs are just one of several in a stakeholder group, and focusing
> solely on whatever ickyness they feel is like building a house by
> starting with calling the plumber.
>

Nobody claimed they were. In fact, everyone said the opposite. I think
you're just misunderstanding the definitions of the words being used(?)


>
> > Sales depts usually don't advocate for bug fixing, as that doesn't sell
> > products; new features do, so I don't know why you are bringing them up.
> > They also don't usually deal with technical debt, in the same way somebody
> > who has never been to your house can't give you effective advice on how to
> > clean it.
>
> A sales dept is in contact with the customer, which is a primary user
> of the product. If you don't like using the sales department, then say
> you have a support desk that don't report bugs. Without anyone
> reporting the bugs the product is dead.
>
> Actually this is the decade old fight over "who owns the product". The
> only solution is to create a real stakeholder group.
>
> > That said, fundamentally you want user priorities (or at least *your*
> > priorities; it's unclear if your priorities reflect the user base at
> > large)
> > to be taken into consideration when deciding developer priorities? Well,
> > step 1 is to define what you want. The WMF obviously tries to figure out
> > what is important to users, and it's pretty obvious that in your view they
> > are failing. Saying people are working on the wrong thing without saying
> > what they should work on instead is a self-fulfilling prophecy.
>
> Not going to answer this, it is an implicit blame game
>

Well, let's make it explicit: if you want change, but refuse to say what
change (whether that be structural or specific bugs you want fixed), then
it is 100% your fault that the change doesn't happen. Complaining that
people/orgs won't change, while not saying how you want them to change, is
just a waste of everyone's time.

Developers are people, not telepaths.

--
Brian
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-19 Thread bawolff
On Monday, March 18, 2019, John Erling Blad  wrote:

> On Mon, Mar 18, 2019 at 10:52 PM bawolff  wrote:
> >
> > First of all, I want to say that I wholeheartedly agree with everything
> tgr
> > wrote.
> >
> > Regarding Pine's question on technical debt.
> >
> > Technical debt is basically a fancy way of saying something is "icky". It
> > is an inherently subjective notion, and at least for me, how important
> > technical debt is depends a lot on how much my subjective sensibilities
> > on
> > what is icky match those of whoever is talking about technical debt.
> >
> > So yes, I think everyone agrees icky stuff is bad, but sometimes
> > different
> > people have different ideas on what is icky and how much ickiness the
> > icky
> > things contain. Furthermore, there is a trap one can fall into of only
> > fixing icky stuff, even if it's only slightly icky, which is bad as then
> > you
> > don't actually accomplish anything else. As with everything else in life,
> > moderation is the best policy (imo).
> >
> > --
> > Brian
>
> To set the degree of ickyness you need a stakeholder group, which is often
> just the sales department. When you have neither a stakeholder group
> nor a sales department, you tend to end up with ickyness set by the devs,
> and then features win over bugs. It's just the way things are.
>
> I believe the ickyness felt by the editors must be more visible to the
> devs, and the actual impact the devs have on bugs to lower the ickyness
> must be more visible to the editors.
>

Technical debt is by definition "ickyness felt by devs". It is a thing that
can be worked on. It is not the only thing to be worked on, nor should it
be, but it is one aspect of the system to be worked on. If it's ignored, it
makes it really hard to fix bugs, because then devs won't understand how the
software works. If tech debt is worked on at the expense of everything
else, that is bad too (like cleaning your house for a week straight without
stopping to eat: bad outcomes). By definition it is not new features, nor is
it ickyness felt by users. It might help with bugs felt by users, as they
are often the result of devs misunderstanding what is going on, but that is
a consequence, not the thing itself.

Sales depts usually don't advocate for bug fixing, as that doesn't sell
products; new features do, so I don't know why you are bringing them up.
They also don't usually deal with technical debt, in the same way somebody
who has never been to your house can't give you effective advice on how to
clean it.

That said, fundamentally you want user priorities (or at least *your*
priorities; it's unclear if your priorities reflect the user base at large)
to be taken into consideration when deciding developer priorities? Well,
step 1 is to define what you want. The WMF obviously tries to figure out
what is important to users, and it's pretty obvious that in your view they
are failing. Saying people are working on the wrong thing without saying
what they should work on instead is a self-fulfilling prophecy.

--
Brian
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-18 Thread bawolff
First of all, I want to say that I wholeheartedly agree with everything tgr
wrote.

Regarding Pine's question on technical debt.

Technical debt is basically a fancy way of saying something is "icky". It
is an inherently subjective notion, and at least for me, how important
technical debt is depends a lot on how much my subjective sensibilities on
what is icky match those of whoever is talking about technical debt.

So yes, I think everyone agrees icky stuff is bad, but sometimes different
people have different ideas on what is icky and how much ickiness the icky
things contain. Furthermore, there is a trap one can fall into of only
fixing icky stuff, even if it's only slightly icky, which is bad as then you
don't actually accomplish anything else. As with everything else in life,
moderation is the best policy (imo).

--
Brian

On Mon, Mar 18, 2019 at 8:17 PM Pine W  wrote:

> Hi,
>
> First, I'll respond to Scott's comment that " A secondary issue is that too
> much wiki dev is done by WMF/WMFDE employees (IMO); I don't think the
> current percentages lead to an overall healthy open source community. But
> (again in my view) the first step to nurturing and growing our non-employee
> contributors is to make sure their patches are timely reviewed."
>
> I'll make a distinction between two types of proposals:
>
> a, Offload development work from WMF onto volunteers, and
> b, Grow and support the population of developers..
>
> The first type of proposal is likely to get a cold reception from me, but
> I'm more supportive of the second. I don't know how many non-Wikimedia
> organizations which use MediaWiki software have staff that contribute in
> significant ways to WMF in the forms of software, time, or money, but
> growing the significance of those contributions sounds like a good idea. I
> also like programs such as GSoC and Outreachy, and for WMF providing
> support for volunteer devs who create tools, bots, etc. on their own
> initiative.
>
> Second, returning to the subject of technical debt, my understanding was
> that WMF staff were concerned for years about the accumulation of technical
> debt, but in this thread I get the impression that WMF staff has changed
> their minds. Am I misunderstanding something? If the consensus opinion
> among the staff changed, how and why did that change happen?
>
> Thanks,
>
> Pine
> ( https://meta.wikimedia.org/wiki/User:Pine )
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-09 Thread bawolff
Regarding:
>My proposal is to begin the discussion here: how can we better relay issues
>that are more important to communities than new features? How can we have a
>"community wishlist for bugs"?

Well, fundamentally it starts with making a list.

This is basically a lobbying discussion, right? People think the WMF should
do more of X. Lobbying discussions are more successful the more specific
they are. Having a list of the top 20 worst bugs is something you could
convince people to do something about. Even something like /WMF spends too
much time on new features and not enough time on maintenance and bug
fixing/ is something you could convince people to change, if you for
example knew how much time the WMF currently spends on bug fixing and had
an idea of how much time you think it should be spending. Even if
management doesn't agree with your proposal, it would at least be specific
enough to debate.

When these discussions start from vague places, like "there are too many
bugs", is when they go nowhere. Even if the WMF stopped everything else it
was doing and worked solely on bugs, I doubt they would fix every bug in
existence (we can't all be TeX!), and attempting to do that would be a bad
idea.

Change happens when stuff is measurable, and people can work towards a
goal. Failing that, change happens when people can be held accountable.
Objective measures are needed.

--
Brian


On Sat, Mar 9, 2019 at 10:31 PM Strainu  wrote:

> Dan,
>
> Thank you for your response. I appreciate far more someone disagreeing with
> me than someone ignoring me :)
>
> Let me start with a simple question, to put the references to wmf into
> context. You keep talking below about volunteer developers and how they can
> take over any project. While that's true, how many fully-volunteer teams
> are there?  How does that number compare to the number of wmf teams? Am I
> right to assume the ratio is hugely in favor of wmf teams?  Note: teams,
> not developers, since decisions on project management are usually done at
> team level.
>
> Pe sâmbătă, 9 martie 2019, Dan Garry (Deskana)  a
> scris:
>
> > On Sat, 9 Mar 2019 at 11:26, Strainu  wrote:
> >
> > > How many successful commercial projects leave customer issues
> unresolved
> > > for years because they're working on something else now?
> > >
> >
> > Almost all of them, they just keep it secret. Companies pay millions of
> > dollars each year for support packages, even after having paid for
> software
> > in the first place, specifically because otherwise their support issues
> may
> > not be answered in a timely fashion, or even answered at all. I don't
> think
> > comparing us to commercial products makes much sense in this context.
>
>
> In my experience in b2b contracts they don't keep it a secret, they usually
> have SLAs they respect, but ok, let's leave it at that.
>
>
> > > There were a
> > > number of proposals on how to track such issues so that reporters have
> a
> > > clear image of the status of the bugs. Have any of them been tried by
> at
> > > least one of the teams at wmf? If so, is there a way to share the
> results
> > > with other teams? If not, how can we convince the wmf to give them a
> > > chance?
> > >
> >
> > I don't agree with shifting responsibility onto the Wikimedia Foundation.
>
>
> Responsibility for what? Developing and hosting MediaWiki? Helping
> communities concentrate on creating and attracting content without having
> to work around bugs? I'm sorry, but that's precisely one of the
> responsibilities of the wmf and this is what's discussed here.
>
>
>
> > There's an anti-pattern here: we all have a big mailing list discussion,
> > agree there's a problem, agree that the Foundation should solve the
> > problem, then ask again in a year what they did even though they didn't
> > actually say they'd do anything about it. That's not a healthy dynamic.
>
>
> This is one thing that we agree on: nobody committed on anything. Ever.
> That's why I asked above: what does it take to have someone (anyone) at the
> WMF act upon these discussions?
>
> My role in the Wikimedia tech community is tech ambassador above all else,
> so I'm caught in the middle here: I have to explain new features and
> technical decisions to people who don't care about php, js or server
> performance , but I also feel obligated to relay their requirements, as I
> see them, to the development team. This second process does not happen as
> smoothly as it should.
>
> It's not healthy to ignore discussion after discussion and claim it's a
> community issue. It's not. It's a governance issue and it's growing every
> day.
>
>
>
>
> >
> > The technical space is owned by all of us, so if we, as a technical
> > community, decide this is important to us, then we can look at the
> problem
> > and try to tackle it, and then figure out how the Wikimedia Foundation
> > could catalyse that.
>
>
> The projects belong to the community at large, not just the technical
> subcommunity. They are the ones 

Re: [Wikitech-l] Community-Engineering gaps as defined in configuration

2019-03-09 Thread bawolff
Regarding wgUseRCPatrol: I suspect (but don't know) that it was originally
disabled on enwiki as a performance thing. If it was a performance concern,
that's probably irrelevant at this point.
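For context, per-wiki overrides of this sort live in wmf-config's
InitialiseSettings.php, in roughly this shape (a sketch; the values here
are illustrative, not the real configuration):

<?php
// SiteConfiguration override shape: a default for all wikis,
// plus per-wiki exceptions.
$wgConf->settings['wgUseRCPatrol'] = [
    'default' => false,
    'enwiki'  => false, // enwiki uses new page patrol only
    'dewiki'  => true,
];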

I generally agree that it's good to try and unify config complexity where
it makes sense. But I don't think we should force it where it does not.
People are different and have different needs. Keeping communities happy in
the case of technical disputes is well worth the extra complexity in my
mind (within reason).

Re Risker with loading time: that can depend on the extensions. Some do
slightly increase loading times, but there are many extensions which should
not cause any change in loading time (or at least a negligible one, i.e.
less than a millisecond) no matter how bad the internet connection is.

--
Brian

On Sat, Mar 9, 2019 at 6:44 PM Risker  wrote:

> I suspect that a major factor in whether or not to deploy many of these
> extensions to every wiki is that, frankly...not every wiki has enough
> active editors for these extensions to make sense or to be helpful.  In
> some ways, the extension may well be unhelpful or could act to distract the
> few active editors from their main focus (content creation, in most cases)
> to activities that do not help them to build their projects.  Anything that
> requires language localization - often involving the use of interface
> administrator permission - is not helpful for projects that have no
> interface admins (or, in some cases, any administrators at all).
>
> It's not a good idea to assume that one configuration fits all.  There's a
> huge difference between the larger projects, with thousands of active users
> and dozens if not hundreds of administrators, and the majority of Wikimedia
> wikis that have fewer than 100 active contributors. Looking only at
> Wikipedia, 165 of 303 Wikipedias have fewer than 100 active (i.e., at least
> one edit a month) editors and fewer than 10 administrators.[1]
>
> It's easy to forget that many of the extensions were designed to assist in
> the work of medium to large Wikipedias; a lot of them aren't all that
> useful for smaller projects, and some of them can significantly add to the
> burden of projects with a small user and admin base.  Remember that every
> extension that's enabled extends the loading time - it may appear to be
> statistically unimportant in much of the Western world, but there are
> cumulative effects that are much more noticeable when contributors are
> participating on slow internet connections with older technology and are
> located a long way away from the servers.  These are real effects that most
> people participating on this list simply never have a reason to notice,
> until we're trying to edit logged-in from remote areas.
>
> Risker/Anne
>
>
>
>
>
> [1] https://meta.wikimedia.org/wiki/List_of_Wikipedias
>
>
>
> On Sat, 9 Mar 2019 at 10:12, Eran Rosenthal  wrote:
>
> > Hi,
> > Wiki communities can ask to override their default configurations
> > (following consensus). The reasons to override may vary:
> >
> >1. *customization* for community to align to community specific policy
> >(example: special namespaces / upload policy/user groups rights
> defined
> > per
> >project and language version)
> >2. *technical dispute* where community and engineering don't agree
> >(example:  VE as single tab for enwiki[1], disabling description
> > tagline in
> >enwiki[2] etc).
> >
> > Technical dispute are problematic - if the product is not good enough
> > engineering and community should ideally come with a solution how to
> > address the issues. We can define a plan to fix the product (disable till
> > task X is fixed), enable/disable feature in the USER level if it is a
> > matter of choice (not the site level) etc.
> > Anyway we should try to avoid letting specific communities override
> > defaults for long term if there is no community specific reason to
> override
> > the configuration. This creates a jungle of configurations, inconsistent
> UI
> > across languages and more complex maintenance etc.
> >
> > This is a call for action for the wiki communities and engineering:
> >
> >- Engineering - consider to align all wikis to similar configuration
> if
> >it isn't community customization but "technical dispute"
> >- Communities - consider whether these configurations are old and can
> be
> >removed
> >
> > Examples of issues found in wmf-config:
> >
> >- VisualEditor tabs (wmgVisualEditorUseSingleEditTab 60 wikis
> "Deploying
> >slowly with community advanced notice"?
> >wmgVisualEditorSingleEditTabSecondaryEditor - enwiki overrides the
> > default
> >editor)
> >   - and more generally enabling VE deployments:
> >   https://www.mediawiki.org/wiki/VisualEditor/Rollouts
> >   - Patroling
> >   - wgUseRCPatrol - disabled by default but ~80 wikis enable it.
> Should
> >   be enabled for all wikis? (if not - what is missing from getting 

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-08 Thread bawolff
"tracked" does not mean someone is planning to work on it. This could be
for a lot of reasons: maybe the bug is unclear, maybe it's not obvious what
a good way to fix it is, maybe nobody cares (this sounds harsh, but the
simple truth is that different things have different people caring about
them, and some parts just don't have anyone).

This is not really a paid vs unpaid thing. Volunteer projects have a big
backlog of bugs. Commercial projects also have a backlog of things they
just don't intend to fix (although usually big commercial projects keep the
list of bug reports secret).

I really think it's no different from Wikipedia.
https://en.wikipedia.org/wiki/Wikipedia:Backlog isn't getting any smaller.
That's just the natural way of things. It's a bit easier to yell {{sofixit}}
on wiki than it is to yell it about technical tasks, as technical stuff by
its very nature requires specialized knowledge (although I would argue
that lots of tasks on wiki also require specialized knowledge). At the end
of the day, to get a task fixed, someone who knows how to do it (or is
willing to learn how to do it) needs to be interested in doing it.

--
Brian

On Fri, Mar 8, 2019 at 12:31 PM John Erling Blad  wrote:

> The backlog for bugs is pretty large (that is an understatement),
> even for bugs with known fixes and available patches. Is there any real
> plan to start fixing them? Shall I keep telling the community the bugs
> are "tracked"?
>
> /jeblad
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Read access to patrolled flag in wikimedia API

2019-03-05 Thread bawolff
Are you sure that patrol status is shown as colour coding on history pages?
I'm pretty sure it's not.

If you mean kind of the dim yellow colour (like in
https://en.wikipedia.org/w/index.php?title=List_of_programs_broadcast_by_Adult_Swim&action=history
for the moment, but that will likely change soon), that means a pending
change, which is a different system from patrolling.

Note, on enwikipedia (but not other projects) RC patrolling is disabled,
and only new page patrol is enabled (so only the first revision can have a
patrol status).
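To see the permission check in action (a sketch; the User-Agent is a
placeholder, and WMF sites want a descriptive one):

<?php
// An anonymous request for the patrolled flag is rejected.
$ctx = stream_context_create( [ 'http' => [
    'user_agent' => 'PatrolFlagDemo/0.1 (replace with contact info)',
] ] );
$url = 'https://en.wikipedia.org/w/api.php?action=query'
    . '&list=recentchanges&rcprop=title%7Cpatrolled&rclimit=3&format=json';
$res = json_decode( file_get_contents( $url, false, $ctx ), true );
// Expect: [ 'error' => [ 'code' => 'permissiondenied', ... ] ]
echo $res['error']['code'] ?? 'no error';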

--
Brian

On Tue, Mar 5, 2019 at 4:13 PM Сибирев Кирилл 
wrote:

> Hi, we are using the Wikimedia HTTP API for getting pages' recent changes [1].
> We'd like to be able to distinguish patrolled and unpatrolled revisions, and
> this feature is supported according to the docs, but we still can't use it
> because of access permissions. For example, if I make requests like [2] or
> [3], I get a {"code": "permissiondenied", "info": "You need the
> \"patrol\" or \"patrolmarks\" right to request the patrolled flag."} error.
>
> This API behaviour looks inconsistent to me, because anyone can see
> patrolled/unpatrolled colored markup on Wikipedia revision history web
> pages. I think the patrol right should be checked only for write API
> requests (ones that mark revisions patrolled or not) and not for read
> requests.
>
> Is this behaviour really inconsistent, and implemented that way due to
> technical restrictions, or am I missing something? Can it be changed so we
> can get patrolling information for revisions, or do some workarounds
> exist?
>
>
> [1] https://www.mediawiki.org/wiki/API:RecentChanges
> [2]
> https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rcprop=title|patrolled&rclimit=3
> [3]
> https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rcprop=title&rcshow=patrolled&rclimit=3
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A potential new way to deal with spambots

2019-02-13 Thread bawolff
I actually meant a different type of maintenance.

Maintaining the encyclopedia (and other wiki projects) is of course an
activity that needs software support.

But software is also something that needs maintenance. Technology,
standards, and circumstances change over time. Software left alone will
"bitrot" over time. A long-term technical strategy to do anything needs to
account for that and plan for that. One-off feature development does not.
Democratically directed one-off feature development accounts for that even
less.

In response to Johnathan:
So lets say that ORES/magic AI detects something is a bot. Then what?
That's a small part of the picture. In fact you don't even need AI to do
this, plenty of the vandal bots have generic programming language
user-agents (AI could of course be useful for long-tail here, but there's
much simpler stuff to start off with). Do we expose this to abusefilter
somehow? Do we add a tag to mark it in RC/watchlist? Do we block it? Do we
rate limit it? What amount of false positives are acceptable? What is the
UI for all this? To what extent is this hard coded, and to what extent do
communities control the feature? etc

We don't need products to detect bots. Making products to detect bots is
easy. We need product managers to come up with socio-technical systems that
make sense in our special context.

--
Brian

On Tue, Feb 12, 2019 at 8:36 PM Pine W  wrote:

> Since we're discussing how the Tech Wishlist works then I will comment on a
> few points specifically regarding that wishlist.
>
> 1. A gentle correction: the recommendations are ranked by vote, not by
> consensus. This has pros and cons.
>
> 2a. If memory serves me correctly, the wishlist process was designed by WMF
> rather than designed by community consensus. I may be wrong about this, but
> in my search of historical records I have not found evidence to the
> contrary. I think that redesigning the process would be worth considering,
> and I hope that a redesign would help to account for the types of needs
> that bawolff described in his second paragraph.
>
> 2b. I think that it's an overstatement to say that "nobody ever votes for
> maintenance until its way too late and everything is about to explode". I
> think that many non-WMF people are aware of our backlogs, the endless
> requests for help and conflict resolution, and the many challenges of
> maintaining what we have with the current population of skilled and good
> faith non-WMF people. However, I have the impression that there is a common
> *tendency* among humans in general to chase shiny new features instead of
> doing mostly thankless work, and I agree that the tech wishlist is unlikely
> even in a redesigned form to be well suited for long term planning. I think
> that WMF's strategy process may be a better way to plan for the long term,
> including for maintenance activities that are mostly thankless and do not
> necessarily correlate with increasing someone's personal power, making
> their resume look better, or having fun. Fortunately the volunteer
> mentality of many non-WMF people means that we do have people who are
> willing to do mostly thankless, mundane, and/or stressful work, and I think
> that some of us feel that our work is important for maintaining the
> encyclopedia even when we do not enjoy it, but we have a finite supply of
> time from such people.
>
> Pine
> ( https://meta.wikimedia.org/wiki/User:Pine )
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A potential new way to deal with spambots

2019-02-11 Thread bawolff
The tech wishlist is awesome, and they do a lot of great work.

However, I don't think this type of democratically driven development is
appropriate for all things. If it were, we would just get rid of all the
other dev teams and just have a wishlist. In this case what is needed is
an anti-abuse strategy, not just a one-off feature. This involves
development of many features over the long term, maintenance, long-term
product management, integration into the whole, etc. Even in real life,
nobody ever votes for maintenance until it's way too late and everything is
about to explode. Not to mention the product research aspect of it: the
wishlist inherently encourages people to think inside the box, as it is
basically asking the question of what's wrong with the current box. You
can't vote for something if you don't realize it's a choice.

As others have mentioned, majority rule is also sometimes not the
appropriate way to choose what to do. Sometimes there are things that only
affect a minority, but it's an important minority. Sometimes there are
things that affect everyone slightly, and they win over things that affect
a small class significantly (of course both types of things are important).
Sometimes there are things that are long-term important but short-term
unimportant [not saying that people can't vote rationally for long-term
tasks, just that the wishlist is mostly developed around the idea of
short-term tasks, short enough that you can do about 10 of them in a year].

--
Brian

On Mon, Feb 11, 2019 at 5:18 PM Jonathan Morgan 
wrote:

> This may be naive, but... isn't the wishlist filling this need? And if not
> through a consensus-driven method like the wishlist, how should a WMF team
> prioritize which power user tools it needs to focus on?
>
> Or is just a matter of "Yes, wishlist, but more of it"?
>
> - Jonathan
>
> On Mon, Feb 11, 2019 at 2:34 AM bawolff  wrote:
>
>> Sure, it's certainly a front we can do better on.
>>
>> I don't think Kasada is a product that's appropriate at this time.
>> Ignoring
>> the ideological aspect of it being non-free software, there are a lot of
>> easy
>> things we could and should try first.
>>
>> However, I'd caution against viewing this as purely a technical problem.
>> Wikimedia is not like other websites - we have allowable bots. For many
>> commercial websites, the only good bot is a dead bot. Wikimedia has many
>> good bots. On enwiki usually they have to be approved, I don't think
>> that's
>> true on all wikis. We also consider it perfectly ok to do limited testing
>> of bots before it is approved. We also encourage the creation of
>> alternative "clients", which from a server perspective look like bots.
>> Unlike other websites where anything non-human is evil, here we need to
>> ensure our blocking corresponds to social norms of the community. This
>> may not sound that hard, but I think it complicates bot-blocking more
>> than is obvious at first glance.
>>
>> Second, this sort of thing is something that tends to fall through the
>> cracks at WMF. AFAIK the last time there was a team responsible for admin
>> tools & anti-abuse was 2013 (
>> https://www.mediawiki.org/wiki/Admin_tools_development). I believe
>> (correct
>> me if I'm wrong) that the anti-harassment team is all about human harassment
>> and not anti-abuse in this sense. Security is adjacent to this problem,
>> but
>> traditionally has not considered this problem in scope. Even core tools
>> like checkuser have been largely ignored by the foundation for many many
>> years.
>>
>> I guess this is a long-winded way of saying: I think there should be a
>> team responsible for this sort of stuff at WMF, but there isn't one. I
>> think there are a lot of rather easy things we can try (off the top of my
>> head: better captchas, more adaptive rate limits that adjust based on how
>> evilish you look, etc), but they definitely require close involvement with
>> the community to ensure that we do the actual right thing.
>>
>> --
>> Brian
>> (p.s. Consider this a volunteer hat email)
>>
>> On Sun, Feb 10, 2019 at 6:06 AM Pine W  wrote:
>>
>> > To clarify the types of unwelcome bots that we have, here are the ones
>> that
>> > I think are most common:
>> >
>> > 1) Spambots
>> >
>> > 2) Vandalbots
>> >
>> > 3) Unauthorized bots which may be intended to act in good faith but
>> which
>> > may cause problems that could probably have been identified during
>> standard
>> > testing in Wikimedia communities which have a relatively well developed
>> bot
>> > approval process. (See
&

Re: [Wikitech-l] A potential new way to deal with spambots

2019-02-11 Thread bawolff
Sure, it's certainly a front we can do better on.

I don't think Kasada is a product that's appropriate at this time. Ignoring
the ideological aspect of it being non-free software, there are a lot of easy
things we could and should try first.

However, I'd caution against viewing this as purely a technical problem.
Wikimedia is not like other websites - we have allowable bots. For many
commercial websites, the only good bot is a dead bot. Wikimedia has many
good bots. On enwiki usually they have to be approved, I don't think that's
true on all wikis. We also consider it perfectly ok to do limited testing
of bots before it is approved. We also encourage the creation of
alternative "clients", which from a server perspective look like bots.
Unlike other websites where anything non-human is evil, here we need to
ensure our blocking corresponds to social norms of the community. This may
not sound that hard, but I think it complicates bot-blocking more than is
obvious at first glance.

Second, this sort of thing is something that tends to fall through the
cracks at WMF. AFAIK the last time there was a team responsible for admin
tools & anti-abuse was 2013 (
https://www.mediawiki.org/wiki/Admin_tools_development). I believe (correct
me if I'm wrong) that the anti-harassment team is all about human harassment
and not anti-abuse in this sense. Security is adjacent to this problem, but
traditionally has not considered this problem in scope. Even core tools
like checkuser have been largely ignored by the foundation for many many
years.

I guess this is a long-winded way of saying: I think there should be a
team responsible for this sort of stuff at WMF, but there isn't one. I
think there are a lot of rather easy things we can try (off the top of my
head: better captchas, more adaptive rate limits that adjust based on how
evilish you look, etc), but they definitely require close involvement with
the community to ensure that we do the actual right thing.

--
Brian
(p.s. Consider this a volunteer hat email)

On Sun, Feb 10, 2019 at 6:06 AM Pine W  wrote:

> To clarify the types of unwelcome bots that we have, here are the ones that
> I think are most common:
>
> 1) Spambots
>
> 2) Vandalbots
>
> 3) Unauthorized bots which may be intended to act in good faith but which
> may cause problems that could probably have been identified during standard
> testing in Wikimedia communities which have a relatively well developed bot
> approval process. (See
> https://en.wikipedia.org/wiki/Wikipedia:Bots/Requests_for_approval.)
>
> Maybe unwelcome bots are not a priority for WMF at the moment, in which
> case I could add this subject into a backlog. I am sorry if I sound grumpy
> at WMF regarding this subject; this is a problem but I know that there are
> millions of problems and I don't expect a different project to be dropped
> in order to address this one.
>
> While it is a rough analogy, I think that this movie clip helps to
> illustrate a problem of bad bots. Although the clip is amusing, I am not
> amused by unwelcome bots causing problems on ENWP or anywhere else in the
> Wikiverse. https://www.youtube.com/watch?v=lokKpSrNqDA
>
> Thanks,
>
> Pine
> ( https://meta.wikimedia.org/wiki/User:Pine )
>
>
>
> On Sat, Feb 9, 2019, 1:40 PM Pine W 
> > OK. Yesterday I was looking with a few other ENWP people at what I think
> > was a series of edits by either a vandal bot or an inadequately designed
> > and unapproved good faith bot. I read that it made approximately 500
> edits
> > before someone who knew enough about ENWP saw what was happening and did
> > something about it. I don't know how many problematic bots we have, in
> > addition to vandal bots, but I am confident that they drain a nontrivial
> > amount of time from stewards, admins, and patrollers.
> >
> > I don't know how much of a priority WMF places on detecting and stopping
> > unwelcome bots, but I think that the question of how to decrease the
> > numbers and effectiveness of unwelcome bots would be a good topic for WMF
> > to research.
> >
> > Pine
> > ( https://meta.wikimedia.org/wiki/User:Pine )
> >
> >
> > On Sat, Feb 9, 2019 at 9:24 PM Gergo Tisza  wrote:
> >
> >> On Fri, Feb 8, 2019 at 6:20 PM Pine W  wrote:
> >>
> >> > I don't know how practical it would be to implement an approach like
> >> this
> >> > in the Wikiverse, and whether licensing proprietary technology would
> be
> >> > required.
> >> >
> >>
> >> They are talking about Polyform [1], a reverse proxy that filters
> traffic
> >> with a combination of browser fingerprinting, behavior analysis and
> proof
> >> of work.
> >> Proof of work is not really useful unless you have huge levels of bot
> >> traffic from a single bot operator (also it means locking out users with
> >> no
> >> Javascript); browser and behavior analysis very likely cannot be
> >> outsourced
> >> to a third party for privacy reasons. Maybe we could do it ourselves
> >> (although it would still bring up interesting questions privacy-wise)
> 

Re: [Wikitech-l] Query

2019-01-23 Thread bawolff
It's totally fine for the accounts to be different. (As long as you aren't
intentionally using multiple accounts to confuse people, of course.)

At worst, it might be a tad confusing, but that's totally OK. Putting a
note on your user page listing all your accounts is a good idea to prevent
confusion.

If you log into phab using your "wiki account" (e.g. via OAuth), you can
still link your gerrit account by going to your preferences in
phabricator. This will associate both accounts with your phabricator
account.

--
Brian
On Wed, Jan 23, 2019 at 4:53 PM K. Kaushik Reddy 
wrote:

> Hi everyone,
>
> It's been a long time, I'm back to work after a long holiday. I have one
> question to ask. I have different usernames for Gerrit account  and
> Phabricator account . Will that create any trouble while contributing the
> code?
>
> Best,
> K. Kaushik Reddy.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Gerrit now automatically adds reviewers

2019-01-18 Thread bawolff
Umm, No.

--
Bawolff

On Fri, Jan 18, 2019 at 10:13 PM Pine W  wrote:

> I'm glad that this problematic change to communications was reverted.
>
> I would like to suggest that this is the type of change that, when being
> planned, should get a design review from a third party before coding
> starts, should go through at least one RFC before coding starts, and be
> widely communicated before coding starts and again a week or two before
> deployment. Involving TechCom might also be appropriate. It appears that
> none of those happened here. In terms of process this situation looks to me
> like it's inexcusable.
>
> In the English Wikipedia community, doing something like this would have a
> reasonable likelihood of costing an administrator their tools, and I hope
> that a similar degree of accountability is enforced in the engineering
> community. In particular, I expect engineering supervisors to follow
> established technical processes for changes that impact others' workflows,
> and if they decide to skip those processes without a compelling reason
> (such as a site stability problem) then I hope that they will be held
> accountable. Again, from my perspective, the failure to follow process here
> is inexcusable.
>
> Pine
> ( https://meta.wikimedia.org/wiki/User:Pine )
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Book scans from Tuebingen Digital Library to Wikimedia Commons

2018-12-03 Thread bawolff
Have you seen https://commons.wikimedia.org/wiki/Commons:Batch_uploading
? I think the folks at commons are more likely to be able to give you
the help you need than wikitech-l would be.

--
Brian

On Mon, Dec 3, 2018 at 5:22 AM Shiju Alex  wrote:
>
> >
> > Google Drive will do OCR on Malayalam, Kannada, and Telugu. Google Vision
> > API (which is usable from a Wikisource gadget
> >  built by Sam Wilson)
> > will do OCR on Tamil. I can't vouch for these being "good", but they do
> > exist.
>
>
> The request in this post is not for creating an OCR for any language
> script; but to migrate certain Public Domain book scans from Tuebingen
> digital library to Wikimedia Commons.
>
> Also there is another task of migrating *already proofread Unicode text* to
> Wikisource. But to take up the Unicode migration first the scans need to be
> in Commons.
>
> I am making this request only because of the huge amount of pages that we
> need to handle. If it were just a few hundred pages, volunteers would have
> manually done it.
>
>
> Shiju
>
>
> On Mon, Dec 3, 2018 at 10:01 AM Ryan Kaldari  wrote:
>
> > >There is no good OCR for languages like Malayalam.
> >
> > Google Drive will do OCR on Malayalam, Kannada, and Telugu. Google Vision
> > API (which is usable from a Wikisource gadget
> >  built by Sam Wilson)
> > will do OCR on Tamil. I can't vouch for these being "good", but they do
> > exist.
> >
> > On Sun, Dec 2, 2018 at 2:54 AM Shiju Alex 
> > wrote:
> >
> > > Hi
> > >
> > > Here are the answers
> > >
> > > What does "converted to Unicode" mean? Converted from what exactly? Do
> > > > you maybe mean "converted via OCR (Optical character recognition) from
> > > > images in file formats (JPG, PNG, images in a PDF) which don't allow
> > > > marking text to a file format which allows marking text in those files?
> > >
> > >
> > > There is no good OCR for languages like Malayalam. So each scanned image
> > is
> > > manually typed and proofread. For example, see the 7th page of this book
> > > .
> > > You
> > > can see the scan image on the right and the transcribed text for that
> > page
> > > on the left in the *Transcript *tab.  This is done for 136 books, and
> > total
> > > pages on these books are close to 25,700 pages.
> > >
> > > What would you want the script to do exactly? Pull the files from the
> > > > Tuebingen Digital Library and then mass-upload these files to Commons?
> > >
> > >
> > > Yes, this is what is required. Unicode migration we will handle
> > separately.
> > >
> > >
> > > Shiju Alex
> > >
> > >
> > >
> > >
> > >
> > > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Sun, Dec 2, 2018 at 4:07 PM Andre Klapper 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Great! Some questions below for better understanding what's wanted:
> > > >
> > > > On Sun, 2018-12-02 at 15:22 +0530, Shiju Alex wrote:
> > > > > Recently Tuebingen University
> > > > >  (with
> > > > > the support from German Research Foundation) ran a project titled
> > > > *Gundert
> > > > > Legacy project* to digitize close to 137,000 pages from *850 public
> > > > domain
> > > > > books*.
> > > > >
> > > > > All these public domain books are in the South Indian languages
> > > > *Malayalam,
> > > > > Kannada, Tamil, Tulu, and Telugu*. In this 293 books are in
> > Malayalam,
> > > > 187
> > > > > in Kannada, 25 in Tamil, 4 in Telugu and Tulu.
> > > > >
> > > > > Also there was  a separate sub-project which was run as part of this
> > > > > project to convert 136 titles in Malayalam to Malayalam Unicode. The
> > > > number
> > > > > of pages that were converted to Unicode is close to *25,700* pages
> > .The
> > > > > Unicode conversion project was ran only for Malayalam. For the other
> > > > > languages it is just the scanning of books
> > > >
> > > > What does "converted to Unicode" mean? Converted from what exactly? Do
> > > > you maybe mean "converted via OCR (Optical character recognition) from
> > > > images in file formats (JPG, PNG, images in a PDF) which don't allow
> > > > marking text to a file format which allows marking text in those files?
> > > >
> > > > > The project is complete now and the results of the project is
> > available
> > > > in
> > > > > the Hermman Gundert Portal
> > https://www.gundert-portal.de/?language=en
> > > > which
> > > > > was released on Nov 20. A news report is available here.
> > > > > <
> > > >
> > >
> > https://timesofindia.indiatimes.com/city/kochi/german-scholars-malayalam-mission-all-set-to-get-a-digital-makeover/articleshow/66633108.cms
> > > > >
> > > > >
> > > > > To view the books in each language you can navigate through the
> > various
> > > > > links in the portal. For example, malayalam books are available here:
> > > > > 

Re: [Wikitech-l] non-obvious uses of in your language

2018-10-07 Thread bawolff
Alas, <!> is no longer valid in XML or HTML5. (Although HTML5 will still
parse it as an empty comment, it does so with an
"incorrectly-opened-comment" error.)

--
Brian


On Sat, Oct 6, 2018 at 6:57 AM Chad  wrote:
>
> Found it :)
>
> https://www.w3.org/MarkUp/SGML/sgml-lex/sgml-lex
>
> Search for "empty comment declaration" :)
>
> -Chad
>
> On Fri, Oct 5, 2018, 11:50 PM Chad  wrote:
>
> > I'm personally a fan of <!>.
> >
> > I came across it years ago--it's a null comment. Can't find the reference
> > at the moment though.
> >
> > -Chad
> >
> > On Thu, Oct 4, 2018, 2:25 PM Daniel Kinzler 
> > wrote:
> >
> >> Am 04.10.2018 um 18:58 schrieb Thiemo Kreuz:
> >> > The syntax "[[Schnee]]reichtum" is quite common in the
> >> > German community. There are not many other ways to achieve the same:
> >> >  or  can be used instead.[1] The latter is often the
> >> > better alternative, but an auto-replacement is not possible. For
> >> > example, "[[Bund]]estag" must become "[[Bund]]<nowiki/>estag".
> >>
> >> We could introduce new syntax for this, such as  or even .
> >>
> >> Or how about {{}} for "this is a syntactic element, but it does nothing"?
> >> But if
> >> that is mixed in with template expansion, it won't work if it expands to
> >> nothing, since template expansion happens before link parsing, right? For
> >> better
> >> or worse...
> >>
> >> --
> >> Daniel Kinzler
> >> Principal Software Engineer, MediaWiki Platform
> >> Wikimedia Foundation
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikitech-ambassadors] Translations on hold until further notice

2018-09-27 Thread bawolff
Updates from translatewiki will be put on pause. You can continue to update
translations at translatewiki, but they won't show up on Wikimedia wikis
until the current issues are sorted out.

In order to avoid things getting out of sync with translatewiki, we would
like to ask that you avoid translating things locally, unless you really
need to, until updates from translatewiki are turned back on.

--
Brian

On Thu, Sep 27, 2018 at 12:49 PM, יגאל חיטרון 
wrote:

> N3 Trizek just said that translatewiki will work for existing messages. I
> read here they will not. What is the right answer? Thank you.
> Igal
>
>
> 2018-09-27 15:47 GMT+03:00 יגאל חיטרון :
>
> > Sure, but *existing* messages are fixed all the time, whether you want it or
> > not. And you should choose - fix them locally, and then make it global
> > after translatewiki returns, or fix them in translatewiki, and it
> > will not work at all.
> > Igal
> >
> >
> > 2018-09-27 14:47 GMT+03:00 :
> >
> >> ‪On Thu, Sep 27, 2018 at 1:43 PM ‫יגאל חיטרון‬‎ 
> >> wrote:‬
> >>
> >> > Hi. If so, I think that the next issue of the Tech news should
> >> recommend to
> >> > fix the existing translations locally, instead of translatewiki.com.
> >> >
> >>
> >> Ermm, isn’t this _exactly_ what we want to avoid (and why the original
> >> message was sent in the first place)? Local translations will not be
> >> updated if neccessary, being stuck basically forever once set (and will
> >> not
> >> be propagated to other wikis, but that is the lesser problem; once
> >> TranslateWiki starts working again, that will be fixed, the local
> >> translations won’t).
> >>
> >> -- [[cs:User:Mormegil | Petr Kadlec]]
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> >
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-15 Thread bawolff
Thank you for your detailed reply. I'm going to respond inline:

On Thu, Aug 16, 2018 at 12:12 AM, Amir Ladsgroup  wrote:
> I write all answers in one place:
>
> Brian:
>> So we are going to magically assume that somehow this block is going to
> change mcbride's behaviour when it took a 100 message email thread before
> he even found out the reason he was blocked (which differs from the implied
> reason in the email, which was sent to an email account he usually doesn't
> check)?
> That's actually an argument against making cases public. The temporary
> bans and incremental steps usually work. We had cases where one warning was
> enough and we never got any more reports about the person, and we had cases
> where a temporary ban was enough (and those didn't cause a +100 email thread;
> no one knew about them except some admins, the reporter and the person who
> got banned) and it worked. I can't disclose much. The reason it took 100
> emails until the user realized they were banned was that they missed the
> email sent to them. It happens, but not in 100% of cases. When the user
> found the email, they forwarded it to a public mailing list and defended
> themselves in public. There are ways to appeal, as mentioned in the CoC; why
> didn't that happen? By forwarding such emails to the public, they just make the
> tip of the iceberg public, and everyone thinks "Someone got banned just for
> saying WTF" and with limited knowledge they can jump to conclusions or
> become angry. For the sake of protecting the reporters, we won't show you
> how much is behind each and every case. People should not judge cases based
> on the defenses of the banned person.
>
> More communication, and the harder job of explaining the
> rationale, has always been the cost of more transparency. On English
> Wikipedia, discussions about banning a person often turn into
> a public circus, making threads like this one look like a quiet village in
> the Alps by comparison. If you don't believe me, just search WP:ANI.

Indeed, I don't doubt it. Although I would note that enwikipedia is
orders of magnitude bigger, so it stands to reason that it has orders of
magnitude more drama. Ultimately, though, I feel that transparency is
needed to trust that the committee is acting justly and wisely (power
corrupts; power without oversight is pretty absolute, and you know
what they say about absolute power). I don't believe the committee
will be trusted without public oversight, and I don't think the
committee can function without trust.

>
>> However, MZMcbride has also claimed his comment was in exasperation after
> facing the same breach of the CoC you have cited, from varnent. Given that
> there is a narrative going around that the CoC is unfairly biased in favour
> of staff, would you mind sharing what deliberations took place that
> resulted in sanctions against only one of the participants in a dispute
> where both participants are alleged to have committed the same fault. To be
> clear, I'm not necessarily suggesting (nor am I necessarily suggesting the
> converse) that the CoC is wrong in this - only that full
> disclosure of the rationale seems like the only method to heal this rift
> that has opened up.
>
> When they see a violation of CoC, as outlined in the CoC, they need to send
> an email to the committee and explain the reasoning but what happened? They
> publicly accused the other party of violating CoC. This is not how it
> works. It causes more tension and ends up as really long threads. The
> committee can be a good mediator in such cases if used.

While I agree that the committee can't act in a situation unless
notified, I disagree that it can only act directly on the notification
received. In fact, I would say the committee has a duty to fully
investigate any conflict it involves itself in. Have you ever seen a
dispute in the history of anything that only involved one party? At
the bare minimum when processing a complaint, the CoC should at least
ask whether the alleged perpetrator has anything to say for him/herself, no
matter how clear-cut the case appears to be. I don't know how anyone
could claim justice is being done without even talking to the accused
party. Given that MZMcBride claimed to initially not know what's going
on, it would certainly appear that no attempt was made to investigate
his side of the situation. Additionally, this seems to be a pattern as
he is not the first person I have heard complain about sanctions being
taken against them without any notification or other communication.


> One big misconception is that lots of people think the other party reported
> them, but lots of reports we've had so far came from bystanders and not
> "their enemy", and they're not retaliation to silence anyone's voice. I really
> encourage this type of behavior. If you see something that is not right, even if
> it's not related to you, stand up and report.

To clarify, I never meant to suggest otherwise. If I did, I apologize.

>
> Regarding unfair 

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread bawolff
On Wednesday, August 8, 2018, Ori Livneh  wrote:
>
>
> On Wed, Aug 8, 2018 at 2:48 PM bawolff  wrote:
>>
>> MZMcbride (and any other individual contributor) is at a power
>> disadvantage here relative to how the foundation is an organized
>> group
>
> Have you been on the receiving end of an MZMcBride diatribe? I was, when
barely two months into my role as a software engineer at the Wikimedia
Foundation (and newly transplanted in the Bay Area), MZMcBride wrote a
Signpost op-ed centered around an inconsiderate remark I made on a bug that
I closed as WONTFIX. The responses to that included on-wiki comments
telling me to go fuck myself, calls for my immediate resignation, and
unbelievably vicious anonymous hate-mail. My mental state after that was
bordering on suicidal.
> I hope that you are struck by the parallels between that affair back in
2012 and the one we are presently discussing. The germ-cell of both cases
was a legitimate grievance about Foundation engineers being dismissive
toward a bug report. MZMcBride has a very good ear for grievances, and he
knows how to use his considerable social clout to draw attention to them,
and then use words as a kind of lightning-rod for stoking outrage and
focusing it on particular targets. I don't know why he does it and I won't
speculate, but I am convinced he knows exactly what he is doing. How could
he not? This has been going on for nearly a decade.
> When I saw MZMcBride's "what the fuck" I instantly knew what was coming.
After it happens to you, you never forget the sensation of instant regret
and absolute panic as the Eye of Sauron fixates on you. It is a
miserable experience and I understand completely why the CoC might feel
compelled to intervene.
>

I'm sorry you were on the receiving end of a Wikipedian "mob". It is not a
fun experience.

But I don't see how MZMcBride should be held accountable for this. All he did
was write an essay critical of several things the foundation was doing.
Much of it was unrelated to you and about issues that were many years in
the making. Some of his criticism still rings true today. The quote he used
was perhaps mildly removed from context, but it was not wholly pulled out
of context. The op-ed is on the whole much fairer to its subject than
the op-eds I read in my real newspaper about real politics.

If other people did inappropriate things, then they should have been
punished (back in 2012). But I hardly think we should start banning people
because they wrote something sort of seditious. On the contrary, I think
internal self-criticism of the movement is very important. It is
unfortunate when that criticism falls harshly on a specific contributor,
especially a new one, and it can be annoying to have to consider your
secondary audience when writing on a task... which are problems I don't have
solutions to.

--
brian
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: My Phabricator account has been disabled

2018-08-08 Thread bawolff
On Wed, Aug 8, 2018 at 8:29 PM, Amir Ladsgroup  wrote:
[...]
> 2) The duration of the block, which is one week, was determined and
> communicated in the email. You can check the email as it's public now.

Can you be more specific? I'm not sure I see where this is public.


> 3) Not being able to discuss cases clearly bothers me too, as I can't
> clarify points. But this secrecy is there for a reason. We have cases of
> sexual harassment at Wikimedia events; do you want us to communicate those
> too? And if not, where, and who is supposed to draw the line between public and
> non-public cases? I'm very much for more transparency, but if we don't iron
> things out before implementing them, it will end up as a disaster.
>


Sure, there are cases where things need to be private to protect
people. This particular case seems like the most clear-cut example of
something that does not need to be private. And even in more sensitive
cases, things such as "User X is not allowed to do Y for Z time" surely
should be public, even if some of the details can't be.

--
Brian

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread bawolff
If maximizing effectiveness were the only concern, we could just block
all the users.

--
Brian

On Wed, Aug 8, 2018 at 8:12 PM, Ryan Kaldari  wrote:
> Are you suggesting that ArbCom does a good job of maintaining a collegial, 
> harassment-free environment on English Wikipedia? Just wanted to double-check 
> ;)
>
>> On Aug 8, 2018, at 1:02 PM, Isarra Yos  wrote:
>>
>> On other projects, we have community-elected groups among whom we see 
>> oversight in the form of new members upon subsequent elections who can audit 
>> the backlogs, and who conduct their primary functions in the open and issue 
>> clear statements when a matter does indeed merit not discussing openly, 
>> using their discretion as to when to apply privacy and similar concerns 
>> specifically. Generally speaking, most users actually trust their discretion 
>> in those matters.
>>
>> Nothing about /this/ particular issue appears to merit any such concern, and 
>> because none of the above holds here, either, I can't say I necessarily 
>> trust this committee to make that call to begin with.
>>
>> -I
>>
>>> On 08/08/18 19:35, Ryan Kaldari wrote:
>>> With all the clamoring for transparency, has anyone considered the privacy
>>> implications for publicly documenting every complaint against a Phabricator
>>> user? That seems like it could have just as much of a chilling effect on
>>> participation, if not more, than the idea that you can be blocked for being
>>> rude.
>>>
>>>
 On Wed, Aug 8, 2018 at 12:05 PM Yair Rand  wrote:

 I very much agree that profanity should not be used around Wikimedia, but
 there's a large gap between "things we ideally wouldn't have", "things an
 employee of a Wikimedia institution should be fired for", and "things a
 volunteer contributor should be blocked for" (in that order). (The acronym
 "wtf" has been used 532 times on Phabricator according to search results
 (including some by the relevant CoCC members), and 10 times fully spelled
 out.)

 Just to remind everyone of some background, the CoC came into existence
 after having a policy tag edit-warred onto it after a non-consensus-backed
 discussion regarding a particular section was self-closed as consensus
 reached for the entire document, attempting to establish an unaccountable
 and secretive Committee that may ban users for any of a number of extremely
 vaguely worded violations including "attempting to circumvent a decision of
 the Committee", appoints its own members (none of which were
 community-selected), can veto any changes to the CoC, and recently claimed
 absolute authority over all development-oriented spaces on all Wikimedia
 projects (including VPT, gadget/script/module talk pages) on a "consensus"
 of a single user. It's quite clearly a completely illegitimate institution.

 But leaving all that aside, this was a terrible decision. I recommend an
 immediate unblock.

 -- Yair Rand



 2018-08-08 13:02 GMT-04:00 David Cuenca Tudela :

> In general I would prefer to keep vulgar language out of the projects, as
> it doesn't bring anything positive.
> Research shows that swearing causes stress [1], and there are many ways
 of
> showing dissatisfaction without using coarse language.
>
> For instance, I would appreciate if there would be more interest in using
> Nonviolent Communication, as it is more effective in getting the message
> across than with negativity.
> Introduction: https://www.youtube.com/watch?v=M-129JLTjkQ
>
> Regards,
> Micru
>
>
> [1] http://journals.plos.org/plosone/article?id=10.1371/
> journal.pone.0022341
>
>> On Wed, Aug 8, 2018 at 5:53 PM Bináris  wrote:
>>
>> That's what I call a very first world problem.
>> This happens when American culture and behavioral standards are extended
>> to an international community.
>> It is not really polite to write that F-thing (how many times has it
>> been written directly, abbreviated, or indirectly in this very
>> discussion?).
>> But to ban a member of the technical community from the working
>> environment is really harmful.
>> We do block people from editing Wikipedia, too, but we do it
>> publicly, clearly, comparably, and by the rules of the local community,
>> not by the hidden rules of an admin board. And not for one ugly word.
>> This secret banning undermines the community, and therefore it is
>> destructive.
>>
>> Additionally, as the code of conduct itself was discussed here, the CoC
>> file case was discussed here a few weeks ago, and this is the place where
>> most Phabricator users communicate, this is a good place to discuss this
>> case, too. Publicity is good.
>> ___
>> Wikitech-l mailing list
>> 

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread bawolff
On Wed, Aug 8, 2018 at 12:53 PM, MZMcBride  wrote:
> Amir Ladsgroup wrote:
>>I disabled the account and now I disabled it again. It's part of a CoC
>>ban. We sent the user an email using the "Email to user" functionality
>>from mediawiki.org the moment I enforced the ban.
>>
>>We'd rather not discuss details of cases publicly, but I feel this
>>clarification is very much needed.
>
> Ah, I found the e-mail:
>
>> Subject: Temporarily ban from phabricator
>>
>> Hello,
>>
>> We received reports about your comments in phabricator. While we
>>encourage criticism and productive comments to improve the software,
>>comments like "What the fuck" do not contribute to the discussion and
>>turns the discussion from respectful criticism to folks swearing at other
>>folks.
>>
>>
>> We asked you to stop making such comments that do not contribute to the
>>discussion. We have no choice to issue a temporarily ban from
>>phabricator. We hope you notice this type of behaviour is not welcome in
>>our technical spaces.
>>
>> Please read Code of conduct in depth:
>>https://www.mediawiki.org/wiki/Code_of_Conduct
>>
>> Best
>>
>> This email was sent by TechConductCommittee to MZMcBride by the "Email
>>this user" function at MediaWiki. If you reply to this email, your email
>>will be sent directly to the original sender, revealing your email
>>address to them.
>
> This is re: .
>
> Greg Varnum created a mess, inappropriately closed a valid bug, and
> removed its parent task because he didn't want to even acknowledge the
> bug. I expressed exasperation with his actions, particularly gaslighting
> volunteers (cf.
> 
> ), and Greg then removed himself as the task assignee and hasn't responded
> on either the task or the wikimedia-l mailing list since. And there's
> still German text prominently and confusingly at the top of
> . Amazing.
>
> MZMcBride
>
>
>

So MZMcBride is temporarily banned for an unspecified amount of time.
I have some concerns:
a) The fact that all of this is secret is a recipe for FUD and
misunderstandings. Accusations of partiality against the committee, and
people being unblocked because others think the block was an accident, are
natural consequences of things being secret. I think this is bound to
create a negative environment in the long term.
b) What is the point of blocking him temporarily and not telling him
how long he's banned for? That's just silly.
c) While I agree that writing wtf can be inappropriate, if the CoC is
going to weigh in to the fray it should fully enforce things on both
sides. MZMcBride (and any other individual contributor) is at a power
disadvantage here relative to the foundation, which is an organized
group, and while I can't condone the form in which he expressed his
concerns, he's not wrong that the comments on the bug are poorly
communicated. The bug report seems legit (I've heard many people
complain that the issue is confusing and looks like a mistake).
First the bug is closed as being a feature, not a bug (although it's
still unclear what exactly the aim is). Then the bug is re-closed
because the site is "soft-launched" (I guess that means beta) and the
issue will be fixed later, although that seems kind of contradictory
with the first close reason. This is all very confusing and rather
dismissive of legitimate concerns. I think that if MZMcBride is to be
censured (in the sense of being condemned), then the code of conduct
committee should also attempt to enforce clear communication on this
task, to be fair to the fact that some parties in this dispute are at a
power disadvantage.

--
Brian

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread bawolff
On Wed, Aug 8, 2018 at 1:22 PM, Dan Garry  wrote:
> On 8 August 2018 at 13:53, MZMcBride  wrote:
>>
>> Ah, I found the e-mail: […]
>>
>
> This mailing list is not an appropriate forum for airing your grievances
> with the way the Code of Conduct Committee has handled this matter.
>
> Dan
>
> --
> Dan Garry
> Lead Product Manager, Editing
> Wikimedia Foundation

I disagree strongly with this. Wikitech-l is the traditional place for
all discussions about MediaWiki as an open source project.

--
bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deactivating hyperlink by CSS

2018-05-10 Thread bawolff
You could force the link to be behind (in a z-index sense) another
transparent element.

(Not really deactivated because you can still reach it via keyboard,
but it would stop people clicking)
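
A rough sketch of that approach (class names are made up; in practice you
would scope this to the specific link):

    /* The wrapper establishes a positioning context; its ::after
       pseudo-element stretches over the whole area and soaks up clicks
       before they reach the link underneath. */
    .no-click { position: relative; }
    .no-click::after {
        content: "";
        position: absolute;
        top: 0; right: 0; bottom: 0; left: 0;
        z-index: 1; /* paints above the (non-positioned) link */
        background: transparent;
    }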
--
Brian

On Thu, May 10, 2018 at 12:15 PM, יגאל חיטרון  wrote:
> Hello. Is there a way to deactivate a hyperlink using CSS? I tried
> pointer-events: none, but wiki does not recognize it. Thank you.
> Igal (User:Ikhitron)
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Proposal: Add security researchers to CREDITS file & [[Special:Version/credits]]

2018-05-01 Thread bawolff
The reason I don't want them in the same category is that:
* I see them as a totally different type of contribution. I think a
security reporter has more in common with a translator than a code
contributor
* The existing credits section is maintained by a script based on the git
log (roughly sketched below). The security reporters list will probably
have to be hand-maintained
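
For context, that script-based half boils down to something like this (a
sketch of the general approach, not the actual maintenance script):

    # Deduplicated list of everyone who has ever authored a commit
    git log --pretty=format:'%aN' | sort -u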

I think the biggest good that came out of eliminating the "developers"
vs "patch contributors" split is that the definitions of the two groups were
unclear (in the post-SVN era; in SVN it was very clear), thus
potentially causing hurt feelings over who deserved to be in which one.
With security reporters, we don't have to worry about that.

Although it's possible there could be fighting over what counts as a valid
security report if we don't define it carefully. (An XSS is obviously a
security report, but there's lots of borderline stuff that gets
reported. Probably the metric should be: do we take action or not
based on the report.)

--
Brian

p.s. After posting my initial email, I found out there is a related
phab ticket at https://phabricator.wikimedia.org/T118131

On Tue, May 1, 2018 at 9:28 PM, Eddie Greiner-Petter
 wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> A while back (cba03a5777) we gave up dividing that file into
> "Developers" and "Patch contributors" - and imho that was a good
> thing. The only sections in the CREDITS file by now are "Contributors"
> and "Translators", where the latter just holds a link to translatewiki.
>
> I'd (slightly) prefer to just add those who reported security issues
> to the "Contributors" section (considering "reported a security issue"
> a contribution) instead of adding a new section - technically someone
> reporting a security issue with a patch attached would be both a
> "Vulnerability Reporter" and a "Contributor", which just seems
> confusing. Besides from bikeshedding about that, I totally agree with
> your proposal.
>
> - --
> Eddie
>
> On 01.05.2018 20:34, Brian Wolff wrote:
>> Hi everyone,
>>
>> Currently we only credit people who report security vulnerabilities
>> at https://www.mediawiki.org/wiki/Wikimedia_Security_Team/Thanks
>> (which basically nobody reads or knows exists) and sometimes in the
>> commit message and release announcements. Given such people are
>> instrumental in keeping MediaWiki secure, I think we should also
>> credit them in the CREDITS file. I propose adding another section
>> to the file - "Vulnerability Reporters", listing the names of
>> everyone who has reported a security vulnerability in either
>> MediaWiki or a bundled extension.
>>
>> Thoughts?
>>
>> -- Brian ___ Wikitech-l
>> mailing list Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
> -BEGIN PGP SIGNATURE-
>
> iQIzBAEBCAAdFiEE/zqKboUFrd4f9T4zA/bLnFtzmKEFAlro2/UACgkQA/bLnFtz
> mKHlUA//SUKpGwRUtxpkxm46T8wrwnBfSamwK7hRfv4bvAyzmyAk2YAFxh3GVvji
> qUuabrnARdQn4/HgfNXqe09rPUPXrESX+Blp5JCxKQuJzgrgBeqMYlnR4JbVsA0A
> ITvyTlrUKAmDJd7pjCnb+MKzd9qroTLU6PWwCh0ln0ihrx9syhzZAcNW3BB+D24B
> EYHx4i7VBWWFnFgzgdif7hjO4JJ6gZvGKZaUDNkZ4ZOyRdY/+OpxRx1jqhhMDauZ
> dHwk17yQYkeC9+z+GBicdtwwLs9AKbq0mz7P4DkCe6fUbtsyAlAWYB8Z8qSCvfwP
> p1CFo+7L5sdc3dEq8xLhHQNRBfzOg7WMDq9T1vfaR9kxHhrfA/PPu8EFcNAMiiLe
> hmHxZaKGRqB48eJGZMYUv9OAxB5fA+tUp/NdMhchkOtH1Zq1mOWv2JBzcfIm1uUY
> POsFL1lgghsU9GEyRMa7EPkiFIYzHYs7OuGJUybXfaL2fGxh+zaWHVWfBjmvMABL
> tL7MyY8aFUegkvod1vQIocAsBVCRx5TVibLs8WAkVfnKE7wr55msgknt/JZbiqqO
> poHv0Vluvd3A86L7P17zUX/p3vo50psBv/A+0yPq0xwaosrumU+yHKzBUF2hKl8r
> e6RcRA0ElzAwej6VRoErB+HkJXi+EDJdQADatB84hL9sTJi3TFg=
> =0KkP
> -END PGP SIGNATURE-
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Masaryk University mirrors of Wikipedia XML dumps

2018-03-13 Thread bawolff
On Wed, Mar 14, 2018 at 3:25 AM, defanor  wrote:
> Hello,
>
>  currently lists Masaryk
> University mirrors, which are inactive (mirroring there was
> intentionally stopped several years ago, it's not a temporary issue). It
> would be nice to update the list.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Thanks,

I submitted https://gerrit.wikimedia.org/r/#/c/419344/ for this.

--
Brian

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Recursive Common Table expressions @ Wikimedia [was Fwd: [Wikimedia-l] What's making you happy this week? (Week of 18 February 2018)]

2018-02-28 Thread bawolff
On Wed, Feb 28, 2018 at 5:07 PM, Jaime Crespo  wrote:
> On Wed, Feb 28, 2018 at 5:26 PM, Brad Jorsch (Anomie) > wrote:
>
>> On Wed, Feb 28, 2018 at 8:47 AM, Jaime Crespo 
>> wrote:
>>
>> > Very recently I have been experimenting with recursive Common Table
>> > Expressions [2], which are or will be available on the latest versions of
>> > MySQL and MariaDB.
>> >
>>
>> Do the other databases MediaWiki tries to support have that feature?
>>
>
> Actually, MySQL/MariaDB is the *last* database to conform to the sql:1999
> WITH standard: https://modern-sql.com/feature/with#compatibility Even
> sqlite supported a limited set of those!
>
> The good news is that, probably because it arrived last, it got a pretty
> feature-full implementation:
> https://twitter.com/MarkusWinand/status/852862475699707904
>
>
>>
>> > With a single query on can obtain all titles directly or indirectly in a
>> > category:
>> >
>> > WITH RECURSIVE cte (cl_from, cl_type) AS
>> > (
>> >   SELECT cl_from, cl_type FROM categorylinks
>> >   WHERE cl_to = 'Database_management_systems' -- starting category
>> >   UNION
>> >   SELECT categorylinks.cl_from, categorylinks.cl_type
>> >   FROM cte
>> >   JOIN page ON cl_from = page_id
>> >   JOIN categorylinks ON page_title = cl_to
>> >   WHERE cte.cl_type = 'subcat' -- subcat addition on each iteration
>> > )
>> > SELECT page_title FROM cte JOIN page ON cl_from = page_id
>> > WHERE page_namespace = 0
>> > ORDER BY page_title; -- printing only articles in the end, ordered by title
>> >
>>
>> Does that work efficiently on huge categories, or does it wind up fetching
>> millions of rows and filesorting?
>
>
> Needs more testing-- that query worked for me well enough on my laptop to
> expose it directly on a webrequest (<0.1 s), but I only imported the
> categorylinks and page tables, so I was working with memory. Obviously, the
> more complex the query, and the more results it returns, the less likely it
> is to be able to be exposed to, e.g. a public API. But at least there are
> configurable limits on recursivity and max execution time.
>
> Honestly, given it is a new feature, I don't expect MediaWiki --which at
> the moment has to support MySQL 5.5-- to embrace it any time soon. However, I
> wanted to ask if it was interesting enough to setup some test hosts for
> mediawiki to "play" with it --e.g. evaluate performance--, and maybe (?)
> some upgraded mariadb/mysql servers for WMF labsdb (for long-running
> analytics or gadgets that generates reports).
>

I certainly think labsdb users would welcome this feature. Traversing
category trees (or sometimes even the graph of page links) is
certainly a common thing that people want to do.
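
If we do set up such hosts, the configurable limits Jaime mentions could be
applied per session before running a query like the one quoted above. A
sketch, assuming a MariaDB version that has these variables (names per the
MariaDB documentation; defaults vary by server):

    -- Guard against runaway recursion (e.g. category cycles) and
    -- long-running statements.
    SET SESSION max_recursive_iterations = 10000;
    SET SESSION max_statement_time = 10; -- seconds; MariaDB-specific
    -- ...then run the WITH RECURSIVE query from earlier in the thread.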

--
Brian

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FLIF for Wikimedia

2017-12-11 Thread bawolff
To be clear, there are generally no objections to "1) accept FLIF (and possibly
serve PNG thumbs for browsers without js)" other than convincing Commons that
it would be a good idea to accept the format. The controversial
bit is converting existing files to FLIF. Accepting FLIF files for upload is
non-controversial if the communities want them.

--
Brian

On Mon, Dec 11, 2017 at 5:30 PM, Chico Venancio
 wrote:
> I concur with the extension idea.
> I'd add that having options for degrees of using FLIF would be a good idea as
> well. I.e. the decision could be to simply 1) accept FLIF (and possibly
> serve PNG thumbs for browsers without js), to 2) encourage FLIF use, or to
> 3)"force" FLIF by converting everything to FLIF and serving only FLIF
> versions to browsers.
>
> Even option 1 seems unlikely to gather support at this point, but it is a
> lot more likely than option 3.
>
> Chico Venancio
>
> 2017-12-11 14:22 GMT-03:00 Bartosz Dziewoński :
>
>> If you want to work on implementing support for FLIF, I would recommend
>> doing it as an extension (similar to e.g. https://www.mediawiki.org/wiki
>> /Extension:PdfHandler) rather than in MediaWiki core.
>>
>> --
>> Bartosz Dziewoński
>>
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] AdvancedSearch beta feature now on Mediawiki

2017-11-21 Thread bawolff
Just for reference, you can upload files to phab from mobile by going
to http://phabricator.wikimedia.org/file/upload (Yeah, it's the most
hidden thing ever)

--
bawolff

On Tue, Nov 21, 2017 at 11:47 PM, יגאל חיטרון <khit...@gmail.com> wrote:
> Hello, Birgit. Unfortunately, I can't open a phab ticket, because I'm on
> mobile, and there is no way to upload a file to phabricator from mobile.
> So, I'll answer you here. I'm on Lollipop, use internal browser, timeless
> skin. I can't make a screenshot from this device, so I did it on another
> one, almost the same, just another OS version. And the result looks
> completely different there, but still broken, see the file attached, all
> the section captures are almost hidden. Hope it helps,
> Igal.
>
>
> On Nov 22, 2017 01:12, "Birgit Müller" <birgit.muel...@wikimedia.de> wrote:
>
>> Hi Igal,
>>
>> thanks for letting us know! Could you maybe give a bit more details (e.g.
>> the skin you use, operating system) and make a screenshot so that we can
>> look into it?
>>
>> Best place to report that would be directly in Phabricator, project tag is
>> https://phabricator.wikimedia.org/tag/advanced-search/.
>>
>> Thans and cheers,
>> Birgit
>>
>>
>>
>> 2017-11-21 23:51 GMT+01:00 יגאל חיטרון <khit...@gmail.com>:
>>
>> > Hello. Great idea indeed. But it does not work. I just opted in, and I
>> can
>> > only see broken rectangles and partial captures. Looks like a huge css
>> > problem.
>> > User:IKhitron (Igal)
>> >
>> >
>> >
>> > On Nov 21, 2017 23:02, "Birgit Müller" <birgit.muel...@wikimedia.de>
>> > wrote:
>> >
>> > (sorry for cross-posting ...)
>> >
>> >
>> > Hello all,
>> >
>> > we're really happy to announce that the new AdvancedSearch interface got
>> > deployed as a beta feature to Mediawiki.org and test just 2 hours ago.
>> [1]
>> >
>> > AdvancedSearch enhances Special:Search through an advanced parameters
>> form
>> > and aims to make existing search options more visible and accessible for
>> > everyone. [2]
>> >
>> > The feature is a project by WMDE's Tech team and originates from the
>> > Technical Wishes project. Many other people and teams helped making this
>> > project happen, so stay tuned for the thank you section at the end of
>> this
>> > email :-) [3]
>> >
>> > *Why AdvancedSearch*
>> >
>> > The Search has great options to perform advanced queries, but often even
>> > experienced editors don't know about it - this is what we found out when
>> we
>> > were conducting a workshop series on advanced search in several cities in
>> > Germany in 2016. Together with contributors from German Wikipedia we've
>> > discussed their desired search queries, explained how the syntax works
>> and
>> > how keywords like "hastemplate", "filetype" or "intitle" can be used and
>> > combined to get the desired results.
>> >
>> > The idea for the AdvancedSearch feature results out of these workshops,
>> > where we not only discussed search but also designed first mocks for an
>> > advanced search interface. [4]
>> >
>> >
>> >
>> > *The first version, more deployments and other next steps*
>> > AdvancedSearch supports some of the special search options the WMF's
>> > search team has been implementing in the last years. The way the
>> > interface works, users don't
>> > have to know the syntax behind each search field, but they can learn
>> about
>> > it if they want to. The first version of the feature comes with a first
>> > selection of advanced search options, e.g. including support for
>> > "hastemplate" or "intitle". The WMF's search team has started to work on
>> a
>> > 'deepcat' functionality to make sub category search happen. We plan to
>> add
>> > support for this in the future, too.
>> >
>> > If all goes well, we plan to deploy the beta feature on German and Arabic
>> > Wikipedia by Wednesday, Nov 29. [5]
>> >
>> > In the next 2-3 months we'd love to invite everyone to test the new
>> feature
>> > on those wikis: Comments, thoughs, bug reports ... - any feedback is much
>> > appreciated!
>> >
>> > Please see https://www.mediawiki.org/wiki/Help:AdvancedSearch for how to
>> > use AdvancedSearch.
>> >
>> > Deployme

Re: [Wikitech-l] Security release: 1.29.2 / 1.28.3 / 1.27.4

2017-11-15 Thread bawolff
Hi, just a quick update - The tags are now in git.

You can now download from git using the 1.29.2, 1.28.3 or 1.27.4 tags

Alternatively you can use the REL1_29, REL1_28 or REL1_27 branches if
you like. The difference between the branch and the tag is that the tag
contains exactly what was released in the tarball, whereas the REL1_XX
branches will be continuously updated with any minor bugfixes that
happen between official releases.
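
For example (a sketch, assuming the usual Gerrit clone URL for MediaWiki
core):

    git clone https://gerrit.wikimedia.org/r/mediawiki/core.git mediawiki
    cd mediawiki
    git checkout 1.29.2   # exactly what was released in the tarball
    # or, to keep receiving minor bugfixes on the release branch:
    git checkout REL1_29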

Apologies for the delay on tagging, phpcs was complaining about some
code style issues in the release, and we spent a lot of time fighting
with Wikimedia continuous integration infrastructure to fix those
issues.

--
Brian

On Wed, Nov 15, 2017 at 4:43 PM, Greg Rundlett (freephile)
 wrote:
>>
>>
>> > There is no corresponding Git tags 1.29.2, 1.28.3, 1.27.4, could someone
>> > issue them?
>> >
>> > I guess they are respectively ee7f9fe, 5b85506, a806476.
>>
>
> Thanks Seb
>
>
> Greg Rundlett
> https://eQuality-Tech.com 
> https://freephile.org
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] need a URL encoding expert

2017-10-20 Thread bawolff
HTML4 recommended that people use ";" instead of "&" to separate URL
parameters, to avoid conflicts with entity references. However, afaik
most web servers don't support this (I think it's mostly some Java
things that do). See
https://www.w3.org/TR/1999/REC-html401-19991224/appendix/notes.html#h-B.2.2
Modern HTML5 abandoned this recommendation afaik.
--
Brian

On Fri, Oct 20, 2017 at 8:07 PM, Ryan Kaldari  wrote:
> Could someone knowledgeable about URL encoding take a look at this pull
> request? Thanks!
> https://github.com/wikimedia/DeadlinkChecker/pull/26/files
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: New newsletter: Tech Showcase

2017-10-13 Thread bawolff
On Fri, Oct 13, 2017 at 5:22 PM, Jérémie Roquet  wrote:
> Dear Quim,
>
> 2017-10-13 12:40 GMT+02:00 Quim Gil :
>> TLDR: Please give this a try, subscribe, and forward the link to other
>> mailing lists or projects: https://www.mediawiki.org/
>> wiki/Newsletter:Tech_Showcase.
>>
>> The simplest and easiest way to know
>> about interesting software being developed by staff or volunteers for our
>> readers and our contributors.
>>
>> No tech knowledge needed and no time commitment required either. You will
>> just receive one sentence notifications linking to the fresh software
>> showcased.
>
> Thank you!
>
> Do you think it would be possible somehow to receive the actual
> content, either by RSS (preferred) or by email (fine as well)?
>
> Here, I'm getting an email with only a link to a newsletter with only
> a link to the actual content. That's two levels of indirection with
> little information on what the notification is about, nor any way to
> easily keep the content at hand for reading later.
>
> We had a discussion about the ambassadors' announce format a few
> months ago (was it with Benoît or Johan? I think Benoît, but I'm not
> sure) and I already underlined how the mailing-list and RSS formats
> currently in use were great for people who have to deal with hundreds
> of notifications per week.
>
> I understand that this would mean more work for whoever writes the
> newsletter and that the mediawiki newsletter extension may not yet
> provide a way to send more content than just a title, so I'd be
> perfectly fine with a negative answer. I just want to draw attention
> to the the added cost to access the information.
>
> Thanks again and best regards,
>

These are definitely features that would be possible to implement in
the Newsletter extension. Of course someone would have to do them, and I
don't think very many people are working on the extension right now.

The RSS part already has a bug
https://phabricator.wikimedia.org/T173041 . I'd suggest filing a bug
for the better email notice part.

--
Brian

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Passing to {{ping}} a list of user stored in a template on Meta

2017-09-24 Thread bawolff
Why not just make a template containing {{ping|first
user|second user|third user|...}}

Your issue is almost certainly that the pipes aren't being tokenized
as argument separators when they come from a transcluded template.
(It's the same reason that {{!}} works in tables, except in reverse.)

Alternatively, {{ping|{{subst::Wiktionary/Tremendous Wiktionary User
Group/affiliates}}}} would probably work.
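
A minimal sketch of the first suggestion (the template name here is
hypothetical):

    <!-- Contents of, say, [[Template:Ping TWUG]]: a single {{ping}} call
         with the full member list hard-coded inside the template. -->
    {{ping|Amqui|Ariel1024|Aryamanarora|Benoît_Prieur}}

Writing {{Ping TWUG}} ~~~~ in a talk page edit then notifies everyone,
because the pipes are tokenized inside the template's own wikitext rather
than arriving through transclusion.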

--
Brian

On Sun, Sep 24, 2017 at 3:41 PM, mathieu stumpf guntz
 wrote:
> I'm trying to solve the problem exposed here:
> https://meta.wikimedia.org/wiki/User_talk:Psychoslave#Template:Participants
> (in French)
>
> In a nutshell, the goal is to be able to ping a group of users on meta,
> so ideally you just type something like {{ping|my_group}}.
> But as ping and the underlying "Module:Reply_to" are expecting one user per
> argument
> my current idea is to make something like {{ping|{{:/some_group/members
> so instead of
> {{ping|Amqui|Ariel1024|Aryamanarora|Benoît_Prieur|Daniel_Kinzler_(WMDE)|Delarouvraie|Epantaleo|Ernest-Mtl|GastelEtzwane|JackPotte|Jberkel|Jitrixis|Kimdime|LA2|LaMèreVeille|Lydia_Pintscher_(WMDE)|Lyokoï|M0tty|Malaysiaboy|Marcmiquel|Micru|Nattes_à_chat|Nemo_bis|Noé|Otourly|Pamputt|psychoslave|Rich_Farmbrough|Rodelar|Satdeep_Gill|Sebleouf|Shavtay|Stalinjeet|S_The_Singer|TAKASUGI_Shinji|TaronjaSatsuma|Thibaut120094|Thiemo_Mättig_(WMDE)|tpt|Trizek_(WMF)|VIGNERON|Vive_la_Rosière|Xabier_Cañas|Xenophôn}}
>
> I can just write
> {{ping|{{:Wiktionary/Tremendous Wiktionary User Group/affiliates for
> example
>
> but even if I put verbatim in a template:
> Amqui|Ariel1024|Aryamanarora|Benoît_Prieur|Daniel_Kinzler_(WMDE)|Delarouvraie|Epantaleo|Ernest-Mtl|GastelEtzwane|JackPotte|Jberkel|Jitrixis|Kimdime|LA2|LaMèreVeille|Lydia_Pintscher_(WMDE)|Lyokoï|M0tty|Malaysiaboy|Marcmiquel|Micru|Nattes_à_chat|Nemo_bis|Noé|Otourly|Pamputt|psychoslave|Rich_Farmbrough|Rodelar|Satdeep_Gill|Sebleouf|Shavtay|Stalinjeet|S_The_Singer|TAKASUGI_Shinji|TaronjaSatsuma|Thibaut120094|Thiemo_Mättig_(WMDE)|tpt|Trizek_(WMF)|VIGNERON|Vive_la_Rosière|Xabier_Cañas|Xenophôn
> in the page
>
> and call ping with this template as argumement, it will end up with a "Error
> in Template:Reply to: Input contains forbidden characters."
>
> This message is generated by the module Reply_to but I can't change it since
> it's protected.
>
> So I'm looking for a way to bypass whatever generates this error, or more
> broadly any idea to resolve the problem described above.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-19 Thread bawolff
On Tue, Sep 19, 2017 at 3:21 PM, C. Scott Ananian
 wrote:
> On Mon, Sep 18, 2017 at 9:51 PM, Chad  wrote:
>
>> I see zero reason for us to go through all the formalities, unless we want
>> to really. I have yet to see anyone (on list, or on IRC anywhere at all
>> today) where anyone suggested (2) was a good idea at all. It's a
>> horrifically bad idea.
>>
>
> Technically, I did outline the arguments for (2), earlier on this thread.
> It was a bit allegorical, though:
> https://www.mediawiki.org/wiki/Wikimedia_Developer_Summit/2018/Writing_Tips/Examples#Example_of_a_.22medium_level.22_position_statement
> This should be seriously considered, not just dismissed.
>
> I agree the short timeline seems to push us toward reverting to Zend.  But
> it is worth having a meaningful discussion about the long-term outlook.
> Which VM is likely to be better supported in 15 years' time?  Which VM
> would we rather adopt and maintain indefinitely ourselves, if needed --
> since in a 15 yr timeframe it's entirely possible that (a) Facebook could
> abandon Hack/HHVM, or (b) the PHP Zend team could implode.  Maintaining
> control over our core runtime is important; I think we should at least
> discuss long-term contingencies if either goes down. Obviously, our future
> was most stable when we (briefly!) had a choice between two strong
> runtimes... but that opportunity seems to be vanishing.  Practically
> speaking, it's not really a choice between "lock-in" and "no lock in" -- we
> have to choose to align our futures with either Zend Technologies Ltd or
> Facebook.  One of these is *much* better funded than the other.  It is
> likely that the project with the most funding will continue to have the
> better performance.
>
> There are other big users of HHVM -- do we know what other members of the
> larger community are doing?  We've heard that Phabricator intends to follow
> PHP 7.  Etsy also shifted to HHVM, do we know what their plans are?
>   --scott
>
> --
> (http://cscott.net)
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

I disagree. I don't think it's useful or possible to try to forecast
technical trends 15 years out. 15 years from now, it is just as likely
that Facebook will be as relevant as MySpace is today as it is that
Facebook will go full cyberpunk dystopia on us and rule the world. I
don't think we can realistically predict anything 15 years out.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Recommending Firefox to users using legacy browsers?

2017-08-31 Thread bawolff
On Thu, Aug 31, 2017 at 8:14 PM, Legoktm  wrote:
> Hello,
>
> This was something that came up during "The Big Open" at Wikimania, when
> Katherine Maher talked with Ryan Merkley (CEO of Creative Commons) and
> Mark Surman (ED of Mozilla Foundation). One of the themes mentioned was
> that our projects need to work together and support each other.
>
> In that vein, I'm interested in what people think about promoting
> Firefox to users who are using legacy browsers that we don't support at
> Grade A (or some other criteria). As part of the "drop IE8 on XP"
> project[1] we're already promoting Firefox as the alternative option. I
> was imagining it could be a small and unobtrusive bubble
> notification[2], similar to those that Google pushes Chrome on people with.
>
> If users use modern browsers, they're going to have better security
> support, and most likely a better experience browsing Wikimedia sites
> too. We'd be improving the web by reducing legacy browsers, and allowing
> us to move forward with newer technology sooner (ideally).
>
> And we'd be supporting a project that is ideologically aligned with us:
> Mozilla.
>
> Thoughts, opinions?
>
> [1] https://phabricator.wikimedia.org/T147199
> [2] https://www.mediawiki.org/wiki/Bubble_notifications
>
> Thanks,
> -- Legoktm
>

I'm concerned this would be seen as an inappropriate bias.

Suggesting Firefox for IE8 on XP makes sense because it is basically
the only option for that platform that is reasonably secure and not
super obscure. Promoting Firefox in general for legacy browsers seems
like a slippery slope to me.

Additionally, I think this is more a political than a technical
decision, and one that would require consultation with the general
Wikimedia community (e.g. Meta RFC).

--
Brian


Re: [Wikitech-l] Timeless skin now available on mw.org

2017-08-12 Thread bawolff
This is kind of odd, but probably not an issue with the skin.

I suspect you have some of the MediaWiki pages cached in browser cache
from before you used the skin, and as a result are using them via a
304 response. But the user_touched timestamp should prevent that.
Maybe CentralAuth doesn't update the local one or something; I need to
look into that further.

Maybe you got logged out and then CentralAuth globally logged you back in.

I don't know, kind of grasping here, but this is definitely a MediaWiki
issue, not a Timeless issue.

--
bawolff

On Sat, Aug 12, 2017 at 7:38 PM, יגאל חיטרון <khit...@gmail.com> wrote:
> Hi. I did not expect too much, because I did not like any non-Vector
> skins, but this is awesome! Thank you very much. One little bug - when
> working in mediawiki and navigating to another page, sometimes it opens
> again in Vector and you have to re-opt in to it in preferences.
> Igal (User:IKhitron)
>
> On Aug 12, 2017 17:51, "Alangi Derick" <alangider...@gmail.com> wrote:
>
>> Wow, this is wonderful.
>>
>> I like this skin and I already switched to it. A lot of stylish minded
>> developers (like me) will start moving to the Timeless skin soon. Good work
>> Team
>>
>> Nevertheless, will file in any bugs found :)
>>
>> *Kind regards*
>> *Alangi Derick N*
>>
>> *[image: https://twitter.com/AlangiDerick]
>> <https://twitter.com/AlangiDerick>
>> <https://www.facebook.com/derick.alangi>  *
>>
>> On Sat, Aug 12, 2017 at 3:46 PM, Daniel Kinzler <
>> daniel.kinz...@wikimedia.de
>> > wrote:
>>
>> > Wow, this looks great!
>> >
>> > On 11 Aug 2017 at 18:26, Isarra Yos wrote:
>> > > The Timeless skin has been deployed to test, test2, and mw.org. Anyone
>> > > interested, please go ahead an enable it and find everything wrong and
>> > swear at
>> > > me about it.
>> > >
>> > > Note that the current design is not the final design and I haven't
>> quite
>> > worked
>> > > out what the final design should be, either, so feel free to swear at
>> me
>> > about
>> > > that, too, if you have any ideas/whatever.
>> > >
>> > > * Bad documentation: https://www.mediawiki.org/wiki/Skin:Timeless
>> > > * Bugs: https://phabricator.wikimedia.org/tag/timeless/
>> > > * To enable it:
>> > > https://www.mediawiki.org/wiki/Special:Preferences#mw-
>> > prefsection-rendering
>> > >
>> > > If all goes well, we will move on to deploy it to the pilot wikis, as
>> > listed on
>> > > the deployment task, sometime next week: https://phabricator.wikimedia
>> .
>> > org/T154371
>> > >
>> > > A huge thank you to all the insane people who have worked to get us to
>> > this
>> > > point, when the entire notion of a new skin seemed something of an
>> > impossibility
>> > > most of the way through.
>> > >
>> > > -I
>> > >
>> > >
>> >
>> >
>> > --
>> > Daniel Kinzler
>> > Principal Platform Engineer
>> >
>> > Wikimedia Deutschland
>> > Gesellschaft zur Förderung Freien Wissens e.V.
>> >


Re: [Wikitech-l] Historical use of latin1 fields in MySQL

2017-05-02 Thread bawolff
On Tue, May 2, 2017 at 8:05 PM, Jaime Crespo  wrote:
> On Tue, May 2, 2017 at 9:24 PM, Brian Wolff  wrote:
>
>> .
>> >
>> > On the latest discussions, there are proposals to increase the minimum
>> > mediawiki requirements to MySQL/MariaDB 5.5 and allow binary or utf8mb4
>> > (not utf8, 3 byte utf8), https://phabricator.wikimedia.org/T161232.
>> Utf8mb4
>> > should be enough for most uses (utf8 will not allow for emojis, for
>> > example), although I am not up to date with the latest unicode standard
>> > changes and MySQL features supporting them.
>> >
>>
>> I don't know about MySQL, but in Unicode emojis are like any other astral
>> character, and UTF-8 can encode them in 4 bytes*.
>>
>
> I am sorry I wasn't clear before: MySQL's utf8 IS NOT the international
> standard generally known as UTF-8; it is a bastardization of 3-byte-max
> UTF-8. MySQL's utf8mb4 is UTF-8:
>
> Proof:
>
> ```
> mysql> use test
> Database changed
> mysql> CREATE TABLE test (a char(1) CHARSET utf8, b char(1) CHARSET
> utf8mb4, c binary(4));
> Query OK, 0 rows affected (0.02 sec)
>
> mysql> SET NAMES utf8mb4;
> Query OK, 0 rows affected (0.00 sec)
>
> mysql> insert into test VALUES ('\U+1F4A9', '\U+1F4A9', '\U+1F4A9');
> Query OK, 1 row affected, 1 warning (0.00 sec)
>
> mysql> SHOW WARNINGS;
> +---------+------+---------------------------------------------------------------------+
> | Level   | Code | Message                                                             |
> +---------+------+---------------------------------------------------------------------+
> | Warning | 1366 | Incorrect string value: '\xF0\x9F\x92\xA9' for column 'a' at row 1  |
> +---------+------+---------------------------------------------------------------------+
> 1 row in set (0.01 sec)
>
> mysql> SELECT * FROM test;
> +------+------+------+
> | a    | b    | c    |
> +------+------+------+
> | ?    | 💩   | 💩   | -- you will need an emoji-compatible font here
> +------+------+------+
> 1 row in set (0.00 sec)
>
> mysql> SELECT hex(a), hex(b), hex(c) FROM test;
> +--------+----------+----------+
> | hex(a) | hex(b)   | hex(c)   |
> +--------+----------+----------+
> | 3F     | F09F92A9 | F09F92A9 |
> +--------+----------+----------+
> 1 row in set (0.00 sec)
> ```
>
> To avoid truncations:
>
> ```
> mysql> set sql_mode='TRADITIONAL'; --
> https://phabricator.wikimedia.org/T108255
> Query OK, 0 rows affected (0.00 sec)
>
> mysql> insert into test VALUES ('\U+1F4A9', '\U+1F4A9', '\U+1F4A9');
> ERROR 1366 (22007): Incorrect string value: '\xF0\x9F\x92\xA9' for column
> 'a' at row 1
> ```
>
> More info at:
> https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8.html vs.
> https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8mb4.html
>
>
> --
> Jaime Crespo
> 

Oh, my bad, I had misread your previous email. I thought you were
talking about emojis in utf8mb4.

--
Brian


Re: [Wikitech-l] Wikipedia P2P

2017-03-27 Thread bawolff
Hi,

There's been some previous discussion about this and similar ideas on the list:

* https://lists.wikimedia.org/pipermail/wikitech-l/2015-November/084143.html
* https://lists.wikimedia.org/pipermail/wikitech-l/2016-November/087079.html

My personal view is that such proposals usually aren't worth it (I
haven't looked at yours specifically so this might not apply to you)
because:
* Most users expect very quick cache invalidation times (e.g. on the
order of seconds)
* Uncached views are generally cheap (from the server's perspective), so
the benefits aren't that large.

--
bawolff
On Mon, Mar 27, 2017 at 2:53 PM, Carlos Guerrero
<guerrerocar...@gmail.com> wrote:
> Hello,
>
> I was directed to this list by Charles M. Roslof (Wikimedia Foundation Legal
> Counsel) he asked me to remove all wikipedia branding from "
> https://www.WikipediaP2P.org/;, so I did, and it is now renamed  "
> https://guerrerocarlos.github.io/WikiP2P.org/;
>
> He told me this would be the right place to propose what the extension does
> and see if it can be useful to wikipedia in any way.
>
> I believe that the "WikiP2P" extension has a lot of potential to help
> wikipedia be more bandwidth efficient for people with limited access to
> Internet.
>
> So I invite you to check any of those links it and tell me if this is
> really the right list.
>
> Thanks in advance for your time and dedication.
>
> Best Regards
> - Carlos


Re: [Wikitech-l] AMD petition

2017-03-13 Thread bawolff
With all due respect, and as much as I support things like this, this
is offtopic for wikitech-l.

--
Brian

On Mon, Mar 13, 2017 at 10:03 PM, James Salsman  wrote:
> Please join me in asking AMD to open-source the PSP (backdoor) in
> their chips -- a chance to regain secure x86 hardware.
>
> https://www.change.org/p/advanced-micro-devices-amd-release-the-source-code-for-the-secure-processor-psp
>


Re: [Wikitech-l] ArchCom Minutes, News, and Outlook

2017-03-12 Thread bawolff
On Fri, Mar 10, 2017 at 8:06 PM, Legoktm <legoktm.wikipe...@gmail.com> wrote:
> Hi,
>
> On 03/09/2017 08:17 AM, Daniel Kinzler wrote:
>> * NOTE: we plan to experiment with having a public HANGOUT meeting, instead 
>> of
>> using IRC.
>
> Can I ask why? At least for me, Google Hangouts simply isn't an option
> to participate when I'm in a crowded library/classroom.
>
> -- Legoktm
>

+1 to this being inconvenient. I don't always attend ArchCom
meetings, but I usually do if I happen to be online at the time. If
it's a hangout, it is extremely unlikely I would attend unless I were
specifically proposing an RFC.

--
Bawolff


Re: [Wikitech-l] please critique my grant application

2017-03-12 Thread bawolff
A short clear self-contained abstract at the top of the grant request
would be a good start.

--
Bawolff

On Mon, Mar 13, 2017 at 3:54 AM, James Salsman <jsals...@gmail.com> wrote:
> Please critique and endorse my grant application, especially after Doc
> James replaces his name as the applicant so I can be the adviser and
> my Google Summer of Code co-mentor and student can be co-grantees:
> https://meta.wikimedia.org/wiki/Grants:Project/Intelligibility_Transcriptions
>
> Thank you!
>


Re: [Wikitech-l] MediaWiki core has reached its 400th contributor

2017-02-07 Thread bawolff
Actually, that's probably double-counting SVN contributors, since SVN
contributors appear in the git history under a different email than they
currently use (if they are still around).

Nonetheless, that's a nice milestone.

--
Brian

On Tue, Feb 7, 2017 at 7:56 AM, Amir Ladsgroup  wrote:
> Hey,
> Today I was checking mediawiki/core in github:
> https://github.com/wikimedia/mediawiki
> And number of contributors is now 400. It might be a little more if we
> count SVN era.
> To me, it's an important milestone. More important than number of commits
> or releases. It's about people not numbers or lines of code.
>
> Best


Re: [Wikitech-l] How to clear backlinks cache?

2017-01-24 Thread bawolff
This probably indicates that the links aren't being appropriately
added via $parserOutput->addLink().

Check that Special:Whatlinkshere for the red links is correct.
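
For illustration, here is a minimal sketch (the handler name and page
title are made up) of registering such a link from a parser hook so
that it lands in the pagelinks table:

```
public static function onParserAfterTidy( Parser $parser, &$text ) {
	$title = Title::newFromText( 'Tanakh:Genesis_1:8' );
	if ( $title ) {
		// Recording the link on the ParserOutput means LinksUpdate
		// writes it to pagelinks on save, which is what keeps
		// Special:WhatLinksHere and red/blue link status in sync.
		$parser->getOutput()->addLink( $title );
	}
	return true;
}
```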

--
Brian

On Tue, Jan 24, 2017 at 7:40 PM, Victor Porton  wrote:
> The red/blue status of links with "arrows" produced by my
> NamespacePopups extension is not updated on my site after I create a
> new page to which the arrow link points (and so it should change from
> red to blue).
>
> Example of my page with arrows:
> https://withoutvowels.org/wiki/Tanakh:Genesis_1:7
>
> I think this is because of the backlinks cache.
>
> I have 23253 pages in the namespace Tanakh:. Cache of links from every
> of them should be updated. (It seems that currently the cache contains
> only old regular links not my new "arrow" links.)
>
> Please help me to update all these pages. The thing I really need is to
> re-process every of these pages with ParserAfterTidy hook. (My hook
> adds the arrow keys to backlinks cache.)
>
> My current workaround is to run maintenance/update.php periodically.
> This updates the red links to blue as appropriate, but this does not
> help to make blue links after future edits.
>
> Note that I run maintenance/runJobs.php every minute. The software
> works correctly with these pages which I recently edited, but not with
> constant non-editable pages in Tanakh: namespace.
>
> Note that it all works well if I edit the page with my links. But I
> cannot edit all pages in Tanakh: namespace both because they are a
> multitude and because they are non-editable.
>


Re: [Wikitech-l] Help me to fix a bug in my extension

2017-01-06 Thread bawolff
Hi,

Sorry, I said the wrong hook. ParserAfterParse comes too early in the
parse process. Try using ParserAfterTidy instead.

Also, in your onHtmlPageLinkRendererEnd function, instead of doing
things like $url = preg_replace( '/\$1/', $page, $wgArticlePath );,
you should probably do something like Title::newFromText( $page
)->getLocalUrl();

Also be careful with lines like:
  Title::newFromLinkTarget( Title::newFromText( $page ) )
since Title::newFromText() may return null for invalid input (e.g. if
someone put [[#foo]] on a page).
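
Something along these lines should be safer (sketch only; $page as in
your patch):

```
$title = Title::newFromText( $page );
if ( !$title ) {
	// Invalid link targets like "[[#foo]]" -- skip rather than fatal
	return true;
}
$url = $title->getLocalURL();
```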

--
Brian

On Fri, Jan 6, 2017 at 6:46 PM, Victor Porton <por...@narod.ru> wrote:
> ParserAfterParse does not work either:
> https://gerrit.wikimedia.org/r/#/c/330967/
>
> It is run four times when saving an edited page, but every four times
> the list of links (which I want to amend) is empty:
>
> 2017-01-06 17:43:00 victor.local my_wiki: array(0) {
> }
> 2017-01-06 17:43:00 victor.local my_wiki: array(0) {
> }
> 2017-01-06 17:43:01 victor.local my_wiki: array(0) {
> }
> 2017-01-06 17:43:01 victor.local my_wiki: array(0) {
> }
>
> I need the list of existing links to amend it. How?
>
> On Fri, 2017-01-06 at 04:09 +, bawolff wrote:
>> Hi,
>>
>> You shouldn't modify the contents of the ParserOutput object from any
>> OutputPage related hook, as MediaWiki's cache system won't take the
>> new version into account correctly. Instead you should use a hook
>> called from the parser. For your use case I would suggest the
>> ParserAfterParse.
>>
>> So try changing that hook to use ParserAfterParse instead of
>> OutputPageParserOutput  (The first argument to ParserAfterParse is a
>> Parser object, call $parser->getOutput() to get the appropriate
>> ParserOutput object, and then do the exact same thing as before,
>> except delete the stuff about LinksUpdate, since MW will handle
>> LinksUpdate itself).
>>
>> Hope that helps
>> --
>> Brian
>>
>> On Thu, Jan 5, 2017 at 11:48 PM, Victor Porton <por...@narod.ru>
>> wrote:
>> > My extension does add additional links (which it is created to
>> > render
>> > in addition to the normal [[...]] links) to the pagelinks table for
>> > the
>> > edited page every even edit and erroneously removes them back every
>> > odd
>> > edit.
>> >
>> > Please help to debug this silly behavior:
>> >
>> > https://gerrit.wikimedia.org/r/#/c/330816/1 is the patch which
>> > half-
>> > works.
>> >
>> > My extension:
>> > https://www.mediawiki.org/wiki/Extension:NamespacePopups
>> >
>> > It uses my another extension:
>> > https://www.mediawiki.org/wiki/Extension:PagePopups
>> >
>> > This is very important for me.
>> >
>> > Well, maybe some day we'll use these extensions in Wikipedia, who
>> > knows.
>> >


Re: [Wikitech-l] Help me to fix a bug in my extension

2017-01-05 Thread bawolff
Hi,

You shouldn't modify the contents of the ParserOutput object from any
OutputPage related hook, as MediaWiki's cache system won't take the
new version into account correctly. Instead you should use a hook
called from the parser. For your use case I would suggest the
ParserAfterParse.

So try changing that hook to use ParserAfterParse instead of
OutputPageParserOutput  (The first argument to ParserAfterParse is a
Parser object, call $parser->getOutput() to get the appropriate
ParserOutput object, and then do the exact same thing as before,
except delete the stuff about LinksUpdate, since MW will handle
LinksUpdate itself).
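
As a rough sketch (the class and wiring names are placeholders), the
change amounts to:

```
// Hook registration: 'ParserAfterParse' => 'NamespacePopupsHooks::onParserAfterParse'
public static function onParserAfterParse( Parser $parser, &$text, $stripState ) {
	$parserOutput = $parser->getOutput();
	// ... same link-adding logic as before, but on the Parser's own
	// ParserOutput, so the parser cache and LinksUpdate both see it.
	return true;
}
```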

Hope that helps
--
Brian

On Thu, Jan 5, 2017 at 11:48 PM, Victor Porton  wrote:
> My extension does add additional links (which it is created to render
> in addition to the normal [[...]] links) to the pagelinks table for the
> edited page every even edit and erroneously removes them back every odd
> edit.
>
> Please help to debug this silly behavior:
>
> https://gerrit.wikimedia.org/r/#/c/330816/1 is the patch which half-
> works.
>
> My extension:
> https://www.mediawiki.org/wiki/Extension:NamespacePopups
>
> It uses my another extension:
> https://www.mediawiki.org/wiki/Extension:PagePopups
>
> This is very important for me.
>
> Well, maybe some day we'll use these extensions in Wikipedia, who
> knows.
>


Re: [Wikitech-l] Discussion Platform

2016-12-10 Thread bawolff
On Sat, Dec 10, 2016 at 7:44 AM, MAYANK JINDAL  wrote:
> Hi,
> The problem with IRC is that it doesn't let you see previous messages.
> I mean it only shows messages as long as you are logged in.
>

This isn't really a new problem. Some people use bouncers [1] to solve
this. Other people just don't care about messages when they are not
online (if someone really wants to talk to you asynchronously, they can
wait until you are back online, or leave a message on your talk page,
or send you an email, or use MemoServ, etc.). On most Wikimedia tech
channels you can also find old messages at
http://bots.wmflabs.org/~wm-bot/logs/

[1] https://en.wikipedia.org/wiki/ZNC

--
Brian


Re: [Wikitech-l] Introduction for Gsoc 2017

2016-12-10 Thread bawolff
>>
>
> In regards to 3:
>> 3.  What are the other things that happen apart from GSoC(In case if I wish
>> to contribute even after Gsoc. How is the  culture of this
>> community?
>
> You are welcome to contribute at any time. You do not need to be part
> of a program to do so. The other major outreach programs are GCI
> (happening right now), which is open to high school students, and
> Outreachy, which is similar to GSoC but only open to women.
>
> As far as "culture" goes, that's a rather broad question. Can you be
> more specific?
>
> In regards to 4,
>> 4. I am not facing a lot of difficulties in IRC. Sometimes I don't get the
>> reply, sometimes my nickname is unavailable if I get offline for some time
>> and it is not very friendly. Could somebody help me in IRC. I am using mac.
>> My nickname is *mayank_dev*
>
> Whether or not somebody replies to you can depend on a lot of factors,
> such as how specific your question is, if anyone knowledgeable is
> around (this can vary significantly with timezone), etc. If your
> nickname is unavailable, I would suggest just choosing a different one.
> Possibly it is unavailable because you are still logged in somewhere
> (most likely), or maybe someone else also uses that nickname. If you
> really want to always have the same nickname, see the freenode
> NickServ docs about how to register a nickname.
>
> --
> Brian

It's been pointed out to me that my description of Outreachy was
incorrect. It is open to anyone who is in a group that is
"underrepresented", not just women. Additionally, unlike GSoC, you do
not need to be a student to participate. For more information on this
program see https://www.mediawiki.org/wiki/Outreachy

--
Brian


Re: [Wikitech-l] Introduction for Gsoc 2017

2016-12-10 Thread bawolff
On Sat, Dec 10, 2016 at 6:50 AM, MAYANK JINDAL  wrote:
> Hi ,
> Hope you are doing well.
> I want to discuss my preparation for Gsoc-2017 in Wikimedia. I am very new
> to this organization and I am facing a lot of difficulties in getting
> Wikimedia workflow.
> 1. I have done open source contributions in past and what I have seen is -
> They have their repositories on GitHub and issues related to that repo. For
> preparing to contribute, we just had to fork their repo and send a pull
> request after resolving the issue. But in Wikipedia, there is a lot of
> hassle(phabricator, gerrit and all).
> 2. For the preparation of GSoC, we need to choose an idea from the idea
> page. Can you please help me in choosing the project?
>I am doing android app developement since 3 years. So I think android
> related tasks would be easy for me to get started.
>Could Someone help me for getting started in android?
> 3.  What are the other things that happen apart from GSoC(In case if I wish
> to contribute even after Gsoc. How is the  culture of this
> community?
> 4. I am now facing a lot of difficulties in IRC. Sometimes I don't get the
> reply, sometimes my nickname is unavailable if I get offline for some time
> and it is not very friendly. Could somebody help me in IRC. I am using mac.
> My nickname is *mayank_dev*
>
> I apologize for my long email. But I am facing a lot of difficulties in
> the community.
>
> Thanks.
>

In regards to 3:
> 3.  What are the other things that happen apart from GSoC(In case if I wish
> to contribute even after Gsoc. How is the  culture of this
> community?

You are welcome to contribute at any time. You do not need to be part
of a program to do so. The other major outreach programs are GCI
(happening right now), which is open to high school students, and
Outreachy, which is similar to GSoC but only open to women.

As far as "culture" goes, that's a rather broad question. Can you be
more specific?

In regards to 4,
> 4. I am now facing a lot of difficulties in IRC. Sometimes I don't get the
> reply, sometimes my nickname is unavailable if I get offline for some time
> and it is not very friendly. Could somebody help me in IRC. I am using mac.
> My nickname is *mayank_dev*

Whether or not somebody replies to you can depend on a lot of factors,
such as how specific your question is, if anyone knowledgeable is
around (this can vary significantly with timezone), etc. If your
nickname is unavailable, I would suggest just choosing a different one.
Possibly it is unavailable because you are still logged in somewhere
(most likely), or maybe someone else also uses that nickname. If you
really want to always have the same nickname, see the freenode
NickServ docs about how to register a nickname.

--
Brian


Re: [Wikitech-l] Bash API edit code examples needed, please

2016-11-21 Thread bawolff
On Tue, Nov 22, 2016 at 3:41 AM, Danny B. <wikipedia.dann...@email.cz> wrote:
> Hi,
>
> a friend of mine is looking for examples of Bash code to edit MediaWiki via
> API (as far as I understood, login, logout, get page text, put text to the
> page (preferrably with custom tag); nothing more (unless needed)) for his
> project of sharing some open data.
>
> He has some other Bash scripts which generate some text which he needs to
> save on wiki. Such text can be provided to the API edit part either via
> pipe, as a variable content or saved in file (that's the preferred order of
> possibilities).
>
> If you have or know about any examples, please let me know (feel free to
> reply off-list with attachments), so I can forward it to him.
>
>
> Thanks for any help in advance.
>
>
> Kind regards
>
>
> Danny B.
>

Ok. So I'm not sure if I should post this or not, but... once upon a
time, back when I didn't really know how to program, I wrote a bot in
bash for updating an on-wiki template listing which articles were
currently trending on enwikinews. The code is absolutely horrible and
not something you should follow, but I suppose it is an example.
There's source code at https://en.wikinews.org/wiki/User:Bawolff_bot
(the API might have changed slightly since it was written).
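
For something more current, here is a rough, untested sketch of the
login/fetch/edit cycle with curl (assumes jq is installed and a bot
password from Special:BotPasswords; substitute your own wiki URL and
credentials):

```
#!/bin/bash
API="https://example.org/w/api.php"
JAR=$(mktemp)

# 1. Fetch a login token, then log in with a bot password.
LOGINTOKEN=$(curl -s -c "$JAR" \
  "$API?action=query&meta=tokens&type=login&format=json" \
  | jq -r '.query.tokens.logintoken')
curl -s -b "$JAR" -c "$JAR" "$API" \
  --data-urlencode "lgtoken=$LOGINTOKEN" \
  --data "action=login&lgname=MyBot@mytool&lgpassword=secret&format=json" \
  > /dev/null

# 2. Get the current wikitext of a page.
curl -s -b "$JAR" \
  "$API?action=parse&page=Sandbox&prop=wikitext&formatversion=2&format=json" \
  | jq -r '.parse.wikitext'

# 3. Fetch a CSRF token and save an edit. The text could also come
#    from a file with: --data-urlencode "text@newtext.txt"
CSRFTOKEN=$(curl -s -b "$JAR" \
  "$API?action=query&meta=tokens&format=json" \
  | jq -r '.query.tokens.csrftoken')
curl -s -b "$JAR" "$API" \
  --data-urlencode "token=$CSRFTOKEN" \
  --data-urlencode "text=Hello from bash" \
  --data "action=edit&title=Sandbox&summary=bash%20test&format=json"

rm -f "$JAR"
```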

--
Bawolff


Re: [Wikitech-l] Call graph creator applied to MediaWiki and extensions

2016-11-11 Thread bawolff
Hi,

I think we used to have call graphs on svn.wikimedia.org (what is
modern-day doc.wikimedia.org), but they got disabled ~2012 (see commit
1ce4aab13ce). However, if you're interested in this sort of thing, you
could also try generating the doxygen call graphs yourself by setting
CALL_GRAPH = YES (and maybe CALLER_GRAPH = YES) in
maintenance/Doxyfile and running make doc in maintenance (provided
you have doxygen and graphviz installed).
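
Concretely, something like this should do it (sketch; doxygen lets
later assignments override earlier ones, so appending works even if
the keys are already set):

```
printf 'CALL_GRAPH = YES\nCALLER_GRAPH = YES\n' >> maintenance/Doxyfile
cd maintenance && make doc   # requires doxygen and graphviz
```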

--
Bawolff

On Wed, Nov 9, 2016 at 11:43 AM, Seb35 <seb35wikipe...@gmail.com> wrote:
> Hello,
>
> I wanted to share a recent work I’ve done to better visualise the
> dependencies between classes, with the call graph [1].
>
> For the extension I am currently developping (MediaWikiFarm [2]), I wanted
> to obtain the call graph. First I did it by hand, but then I found a program
> doing the job, phpCallGraph [3], but unfortunately it was unmaintained since
> 2009 (PHP 5.2 era). I did maintenance and it partly works [4], I still have
> issues with the autoloader.
>
> Here are some preliminary results I got: (these are packages without much
> external dependencies due to autoloader issues)
> * Extension:MediaWikiFarm
>
> https://www.mediawiki.org/wiki/File:Call_graph_of_extension_MediaWikiFarm.svg
> * Extension:Math
> https://www.mediawiki.org/wiki/File:Call_graph_of_extension_Math.svg
> * Extension:CentralAuth
>
> https://www.mediawiki.org/wiki/File:Call_graph_of_extension_CentralAuth.svg
> * directory 'includes' of MediaWiki without subdirectories (too big)
>
> https://www.mediawiki.org/wiki/File:Call_graph_of_MediaWiki%E2%80%99s_directory_includes.svg
>
> I plan to improve this software (see README on GitHub), please say if you if
> you are interested by this type of graph and/or interested in contributing.
>
> Thanks,
> Seb35
>
> [1] https://en.wikipedia.org/wiki/Call_graph (the overapproximation of the
> static call graph to be precise)
> [2] https://www.mediawiki.org/wiki/Extension:MediaWikiFarm
> [3] http://phpcallgraph.sourceforge.net
> [4] https://github.com/Seb35/phpcallgraph
>


Re: [Wikitech-l] Engaging student devs with another outreach event like GSoC, GCI

2016-11-07 Thread bawolff
On Thu, Nov 3, 2016 at 2:29 PM, Yaron Koren <ya...@wikiworks.com> wrote:
> Hi,
>
> I hit across this idea in the recent GSoC Mentors summit, and in the
>> discussion with Srishti and Sumit on the reducing usability and scope of
>> GSoC/Outreachy projects[1] among the years.
>
>
>> *The problem*
>> Students show up one or two weeks before GSoC or Outreachy, and propose a
>> solution to existing ideas, and often end up completing it and leaving the
>> project. Due to this, there is a decline in student-proposed ideas as well,
>> given 1-2 weeks is not enough to understand Wikimedia from any direction.
>
>
> I didn't really understand this. You seem to be talking about some or all
> of the following issues:
>
> 1) Fewer students doing projects for the Wikimedia Foundation as part of
> the Google Summer of Code and, I guess, Outreachy, than in previous
> years - 2013
> being the high point.
>
> 2) Students doing projects that are less useful than in previous years.
>
> 3) Students not staying with the Wikimedia/MediaWiki after the conclusion
> of their project.
>
> 4) Students doing projects proposed by existing MediaWiki developers,
> rather than projects they proposed themselves.
>
> I see these as four unrelated issues, and actually I see only two of them
> as real issues: #2 I don't think is true (though I'm not sure if that's
> what you meant by "usability"), while #4 I don't see as an issue at all.
> Personally, I think only projects proposed by potential mentors should be
> considered at all, and that the documentation should state that clearly.
> I'm not aware of any GSoC projects where the student came up with the idea
> on their own and then executed on it successfully - with the exception of
> projects where the student is an established MediaWiki developer who
> happens to currently be in college, but that's obviously a special case.
> It's just not reasonable to expect that someone from outside the
> WMF/MediaWiki community would be able to come up with a project that (a)
> makes sense, (b) fits within the current development roadmap, and (c) is of
> the right scope for a GSoC/Outreachy project.
>
> More generally, I don't think there's anything less rewarding about doing a
> project that someone else came up with. In software development, as in most
> things, the difficult part - and the most rewarding part - is the
> execution, not the original idea. (There are various inspirational quotes
> to this effect.)
>
> That leaves #1 and #3 - fewer students participating, fewer students
> staying on afterwards. I think #1 is just a function of fewer potential
> mentors getting involved. Retaining students, on the other hand, is a real
> problem. I can think of various ways to try to improve this, though
> creating a new outreach/funding program seems extreme - it would take a lot
> of work, and you would presumably run into the same problem of a limited
> number of mentors. If there's money to pay for these kinds of things, why
> not just put more money into, say, hiring more developers from out of the
> GSoC pool? It's one idea.
>
> -Yaron
>

I actually disagree somewhat - I think it can be very rewarding to fix
a problem that you yourself have, as opposed to fixing somebody else's
problem. This is a traditional ideology about open source - that it is
all about scratching your own itch.

Admittedly, most GSoC students coming up with their own projects
aren't actually scratching their own itch but desperately trying to
come up with an idea. However, if someone happens to be a preexisting
user of MediaWiki and finds something super annoying, I
think that can make for a very good project.

As for #2 - I think in recent years there has been more effort to make
sure projects are scoped appropriately. This makes it more likely for
projects to be finished, at the expense of making the projects perhaps
less "useful" than in older years. I think choosing the lower risk
lower reward path is entirely appropriate for a program like gsoc, so
I don't think this is a bad thing.

As for users sticking around - I think the communication around GSoC
has shifted from "Here's some money so you can work on MediaWiki
without starving to death" to "Here's a little money and a job so you
can put something cool on your resume". If students are being
attracted to the program principally to have something on their resume
or for the money (to be clear, I'm not saying there is anything wrong
with that), it's not surprising that they leave afterwards when the
money goes away. If we want to attract people in the long term, we
should probably come up with a better carrot.

--
bawolff


Re: [Wikitech-l] What happened to RobLa?

2016-11-03 Thread bawolff
RobLa obviously deserves his privacy if he wants that. However, I think
it's rather natural for people to wonder "what happened" when an
employee who is as much in the public spotlight as RobLa is suddenly
and unexpectedly leaves.

Obviously, just because it is "natural" doesn't mean we deserve
answers. I think Wes's short email was sufficient in terms of
answering the "what happened" question. However, simply putting a global
lock on someone's account (which was the state of things before this
thread) is not good communication and is bound to raise eyebrows.

There is also the more pragmatic question of how RobLa's
responsibilities will be transferred. RobLa did a lot for MediaWiki,
and transitions are scary. Transitions with no information are even
scarier. How will RobLa's initiatives be carried forward? Is the
architecture committee mature enough to continue on purely its own
steam without RobLa pushing it forward? How does this affect the
"vision" for MediaWiki (emphasis on MediaWiki, not Wikimedia)? Will
development of Wikimedia's technology further devolve into independent
fiefdoms without RobLa balancing the architecture needs of different
groups? All these concerns are on my mind, and I think on the minds of
others. It would be nice to get some reassurance about what is going
to happen next.

--
Brian
[To be 100% clear, posting with my volunteer hat firmly on]

On Thu, Nov 3, 2016 at 4:17 PM, Jonathan Morgan  wrote:
> +1 Petr. This thread makes me sad and uncomfortable.
>
> - J
>
> On Thu, Nov 3, 2016 at 9:03 AM, Toby Negrin  wrote:
>
>> Thank you Petr. Well said!
>> -Toby
>>
>> On Thu, Nov 3, 2016 at 2:39 AM, Petr Bena  wrote:
>>
>> > Hello,
>> >
>> > I don't like to sound like a bad guy, but I really find it quite
>> > inappropriate to discuss this sort of stuff on a public mailing list.
>> > This really is a personal matter. If he didn't post any dramatical
>> > announcement perhaps he wanted to "leave quietly"?
>> >
>> > You know, people sometimes do this sort of stuff for very personal
>> > reasons, they just don't want to discuss with public, and who knows,
>> > maybe he would return back one day? He wouldn't be first to do that :)
>> > Or maybe he is going to stick around as a volunteer? Let's be
>> > optimistic!
>> >
>> > If you really want to know the details, you can always ask him
>> > directly. Fortunately he is not dead, so no need to mourn him. As an
>> > employee of WMF, he was always very friendly and extremely useful, so
>> > indeed I /do/ hope he will stay with us, at least as a volunteer.
>> >
>> > On Thu, Nov 3, 2016 at 10:16 AM, Tilman Bayer 
>> > wrote:
>> > > On Tue, Nov 1, 2016 at 3:54 AM, Federico Leva (Nemo) <
>> nemow...@gmail.com
>> > >
>> > > wrote:
>> > >
>> > >> As a reminder on goodbyes, WMF HR used to announce everyone's last day
>> > at
>> > >> https://identi.ca/wikimediaatwork (can't take more than a couple
>> > >> minutes); then the announcements became monthly, then quarterly; then
>> we
>> > >> lost this courtesy as well https://meta.wikimedia.org/wik
>> > >> i/Talk:Wikimedia_Foundation_reports#Staff .
>> > >
>> > >
>> > > As already noted higher up on that talk page, that's a misconception.
>> > > Timely updates now happen on the WMF wiki instead, where RobLa's
>> > departure
>> > > had already been recorded by HR before you sent this email:
>> > > https://wikimediafoundation.org/w/index.php?title=Template:Staff_and_
>> > > contractors=history
>> > >
>> > >
>> > >>
>> > >
>> > >
>> > > --
>> > > Tilman Bayer
>> > > Senior Analyst
>> > > Wikimedia Foundation
>> > > IRC (Freenode): HaeB
>>
>
>
>
> --
> Jonathan T. Morgan
> Senior Design Researcher
> Wikimedia Foundation
> User:Jmorgan (WMF) 


Re: [Wikitech-l] Code of Conduct

2016-09-29 Thread bawolff
On Thu, Sep 29, 2016 at 3:23 PM, Steinsplitter Wiki
<steinsplitter-w...@live.com> wrote:
>
>
>>>To reach more people like you, what would be the best place to post
> messages so you'd see them?
>
>
>
>
> Posting it at the village pumps of the local project (similar to the tech
> news notifications), for example :-)
> Or using limited CN banners (similar to the community survey banners).
>

Honestly, haven't we had enough already with the code of conduct? The
number of announcements related to it has been staggering.

--
bawolff


Re: [Wikitech-l] Opening up MediaWiki dev summit in January?

2016-09-01 Thread bawolff
>
> * In a big session on services-oriented architectures, a lot of time was
> spent theorizing about what small wikis who do their hosting on
> shared-hosting services do, and whether various solutions we were proposing
> would make it easier or harder for these non-WMF users of mediawiki.  *But
> none of these users were at the summit.*  So no decisions could ultimately
> be made, as the necessary affected parties were not present.
>

I don't think that would change, regardless of what we do. Even if we
have more users at the summit, it's not going to be a representative
sample of every type of user we have. I don't think it's reasonable to
assume that just because a use case isn't represented at the summit
it doesn't exist. Anyone taking the time out of their day to
travel all the way to a MediaWiki event is probably a power user, and
thus this will bias the representation of users at the summit.


Re: [Wikitech-l] Opening up MediaWiki dev summit in January?

2016-09-01 Thread bawolff
>
> Yes, I think we *should* provide a focus for the event, and that the focus
> should be on users, use cases, and what we as developers need to do to
> achieve those things.
>
> In my opinion we haven't had a strong focus to the event in the past, and
> it's limited what we accomplish there to largely making a set of technical
> presentations and having a few discussions that either don't produce a
> decision or don't have much affect on what actually happens after the
> summit.
>
> (I'd be very interested also in some feedback on things that *have* worked
> well at MWDS in the past, as I'd love to encourage anything that has been
> productive! But I think we've not been successful in an architectural focus
> so far.)
>
> -- brion

[Sorry about last email. accidentally hit send]

Yes, we certainly do have issues with follow-through on summit decisions.

For me personally, I've found the dev summits mostly useful as a
community-building exercise (for the MediaWiki developer community).
As a remotee (or, at various points in time, a volunteer), it's
rare that I actually see everyone in real life. The dev summit provides a
venue to actually interact with everyone. While it may not actually be
the best at resolving architectural issues, I feel like it helps me
understand where everyone is coming from.

In particular, I find that the dev summit is more effective for this
purpose than hackathons, as the unstructured nature of hackathons tends
to get people clumping in groups that already know each other. The dev
summit, on the other hand, better provides for cross-pollination of
ideas, in my experience. (Don't get me wrong, I love hackathons too,
just for different reasons.)

However, use cases and users are why we're here, so I'm certainly not
opposed to that focus. I just hope we continue to retain this as an
event that's more talky and less hacky, as I feel that's where a lot
of the uniqueness of the event came from.

One aspect of the first MediaWiki architecture summit that I really
liked, but which has mostly been lost, was inviting non-Wikimedia
MediaWiki users. They're a group with use cases that we don't often hear
about, and they provide a unique perspective. Although I suppose it's not
surprising that their involvement has kind of been lost, I would love
to see them come back, although I'm not exactly holding my breath for
that.

--
Brian


Re: [Wikitech-l] [ConfirmEdit][ReCaptcha] Call-to-Action: ReCaptcha module will be removed in the near future

2016-08-04 Thread bawolff
>
> Because this is a huge problem for existing third-party wikis (and because
> we don't have any usage statistics), I'm not sure, which plan we should
> choose. That's why I sent this e-mail out, to get (hopefully) some responses
> and opinions.
>
>

We do have stats. There are 308 wikis using ReCaptchaNoCaptcha (
https://wikiapiary.com/wiki/Extension:ReCaptchaNoCaptcha ). There are
about 165 wikis using plain ReCaptcha
(https://wikiapiary.com/wiki/Extension:ReCaptcha - note: exclude the
ones using version 0.1; that's a different extension with the same
name).

Obviously this misses some wikis, but it should provide a ballpark
figure. I imagine most of the people still on ReCaptcha simply haven't
bothered to update or are unaware that they can move to the new one.

--
Brian


Re: [Wikitech-l] RESTBase JSONP output

2016-08-02 Thread bawolff
On Tue, Aug 2, 2016 at 7:32 PM, Toni Hermoso Pulido  wrote:
> Hello,
>
> I'm trying to include some output from
> https://wikimedia.org/api/rest_v1/?doc into a Wikipedia
> (xy.wikipedia.org) via a Javascript AJAX call.
>
> Is it possible to have a JSONP output? I have not found any
> documentation so far. Otherwise, would there be any other way (avoiding
> a backend part, of course)?
>
> Thanks,
>
> --
> Toni Hermoso Pulido
> http://www.cau.cat
> http://www.similis.cc
>

JSONP in general tends to be flaky. I strongly suggest you use CORS to
access APIs, even those that also offer JSONP.
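
For example, RESTBase already sends Access-Control-Allow-Origin: * on
its responses, so a plain cross-origin request works without any JSONP
shim (the endpoint path here is just an illustration):

```
fetch( 'https://wikimedia.org/api/rest_v1/metrics/pageviews/top/en.wikipedia/all-access/2016/07/01' )
	.then( function ( resp ) { return resp.json(); } )
	// items[ 0 ].articles is the ranked article list for that day
	.then( function ( data ) { console.log( data.items[ 0 ].articles.slice( 0, 5 ) ); } );
```

For the action API on the wikis themselves, adding origin=* to the
query string opts anonymous requests in to CORS.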

--
Brian


Re: [Wikitech-l] Reload SQLite from MySQL dump?

2016-08-02 Thread bawolff
On Tue, Aug 2, 2016 at 5:57 PM, Jefsey  wrote:
> At 02:44 02/08/2016, John wrote:
>>
>> For mass imports, use importDump.php - see
>> <http://www.mediawiki.org/wiki/Manual:Importing_XML_dumps>
>> for details.
>
>
> Dear John,
>
> Thank you for your response. But I am dumb myself and I am still lost.
>
> I am not sure I made my need clear enough. I had wikis under MySQL I dumped
> in XML. So I have n XML files to create n separate SQLite wikis. One file
> one SQLite wiki (n is around 35).
>
> When I try to read with an XML viewer these files, they tell me that some
> character in line 295 or so is wrong and they cannot read it, so I do not
> really know how they are structured.
>
> Therefore, my question is about how to create the n SQLwikis I need and
> import in each the pages which are in XML dumped from MySQL.
>
> Also, further on, I understand that the manual is reporting about MySQL
> dumps, not about SQLite dumps? Does that mean that I should consider their
> backup with SQLite tools rathere than mediwiki tools? Hence my question
> about a mediawiki SQLite section/mailing list?
>
> Sorry I am pretty new to this and need to take care of a lot of small
> Libre/Citizen/locally oriented wikis, with more to come. So I try to figure
> out the best way to manage all this 
>
> Thank you !
>
> jfc
>
>

Note that there is an XML dump format used by MediaWiki, but there is
also an XML dump format used by mysqldump. It's unclear from your email
which of those you have. The previous people in the thread are
assuming you're talking about a MediaWiki XML dump file; if you are
talking about a mysqldump XML file, things are probably going to
be much harder to convert to SQLite.
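
Assuming they are MediaWiki XML dumps, the per-wiki recipe would look
roughly like this (untested sketch; names and paths are placeholders),
run from each wiki's own MediaWiki directory:

```
# Create a fresh SQLite-backed wiki, then import one dump into it.
php maintenance/install.php --dbtype sqlite --dbpath "$PWD/data" \
    --pass somepassword "Wiki01" "Admin"
php maintenance/importDump.php wiki01.xml
php maintenance/rebuildrecentchanges.php
```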

--
brian


Re: [Wikitech-l] wgDBServer ipv6

2016-07-10 Thread bawolff
On Thu, Jul 7, 2016 at 8:35 PM, Trung Dinh <t...@fb.com> wrote:
> Hi everyone,
>
> I am trying to configure a mysql server for our wikimedia service. We are 
> using ipv6 of the mysql server.
> I am not sure how to configure $wgDBServer with an ipv6. I am able to 
> configure the server with ipv6.
> I tried to set:
>
>   1.  $wgDBServer = [IPAddress]:Portname
>   2.  $wgDBServer = IPAddress
>
> But none of the above works for me.
>
> Thanks very much in advance.

First of all, keep in mind that $wgDBserver is case-sensitive. It needs
to be a lowercase "s".

Google turns up https://bugs.php.net/bug.php?id=67563 - with that
in mind, I would recommend using the DNS name of the MySQL server
instead of its IP address (or, if it doesn't have one,
create an entry in /etc/hosts for it). Failing that, try switching PHP
to the mysqli driver if you're currently using the mysql driver.
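
For the recommended route, something like this (sketch; the hostname
and address are examples):

```
# In /etc/hosts (if the server has no AAAA record):
#   2001:db8::5   db.example.org

// In LocalSettings.php -- note the lowercase "s" in $wgDBserver:
$wgDBserver = 'db.example.org';
$wgDBname = 'wikidb';
$wgDBuser = 'wikiuser';
$wgDBpassword = 'secret';
```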

--
bawolff


Re: [Wikitech-l] git.wikimedia.org (Gitblit) going away on June 29th, redirected to Phabricator

2016-06-29 Thread bawolff
On Tue, Jun 21, 2016 at 8:26 PM, MZMcBride <z...@mzmcbride.com> wrote:
> Platonides wrote:
>>I hear this with dismay. When I wanted to view the repository online in
>>the past, I always ended up heading to git.wikimedia.org, since I was
>>unable to *find* the repository at phabricator.
>>At most, phabricator showed a somewhat related diffusion commit.
>
> Phabricator site search includes repositories. I agree that parts of
> Phabricator, including the repository index at
> <https://phabricator.wikimedia.org/diffusion/>, can be annoying to find
> currently.

Phabricator site search is pretty useless all in all. 75% of the time
I can't even find the bugs I'm looking for, and it's more effective to
go to the project's workboard and hit Ctrl+F in my browser.

As for git, I almost never want to actually look at it online. But
when I do, I'll probably just end up going to the GitHub mirror...
--
bawolff


Re: [Wikitech-l] Logging client JS exceptions to the server

2016-06-27 Thread bawolff
On Mon, Jun 27, 2016 at 6:22 AM, Joaquin Oltra Hernandez
<jhernan...@wikimedia.org> wrote:
> Hi,
>
> Are there any on-going efforts to log JavaScript errors to the server logs?
>
> Every once in a while we see very hard to reproduce errors in the console
> that we're not sure how often are actually happening, and that makes us
> think that there may be other errors happening that we're not coming across
> (or other people savvy enough to open the console and post a bug).
>
> There are open source projects like sentry we could inspect and adapt a
> client library for our purposes and log to EventLogging or somewhere else (
> https://docs.getsentry.com/hosted/clients/javascript/).

See https://phabricator.wikimedia.org/T106915

--
bawolff


Re: [Wikitech-l] CII best practices (for security etc.)

2016-06-11 Thread bawolff
On Sat, Jun 11, 2016 at 9:12 AM, Federico Leva (Nemo)
<nemow...@gmail.com> wrote:
> Out of curiosity I've started filling
> https://bestpractices.coreinfrastructure.org/projects/201 for MediaWiki
> (core) in this Linux Foundation project on best practices for FLOSS.
>
> I got the feeling that the criteria still need to be exposed to a bigger
> variety of projects: they don't seem focused, for instance, for web
> applications, multilingual userbases and openness/transparency/community
> health.
>
> I was able to confirm 79 % of the criteria; I've had some troubles with:
> issue reporting SLA, test coverage evolution, static/dynamic analysis and
> cryptography. I suggest that the current status be documented on the
> relevant mediawiki.org pages, as other people may have the same questions.
> Hopefully it's then easy enough to update the "badges" (and if not, I see a
> big delete button).
>
> Nemo
>

For reference, I started
https://www.mediawiki.org/wiki/Core_Infrastructure_Initiative_Best_Practices_badge
to document items on their list we currently don't do.

--
bawolff


Re: [Wikitech-l] Security patch

2016-04-26 Thread bawolff
I've filed T133735 as a bug to formalize procedures for security
releases of WMF-maintained extensions that are not bundled with
MediaWiki.

On Tue, Apr 26, 2016 at 3:17 PM, bawolff <bawolff...@gmail.com> wrote:
> On Tue, Apr 26, 2016 at 3:08 PM, Ryan Lane <rlan...@gmail.com> wrote:
>> On Tue, Apr 26, 2016 at 12:01 PM, Alex Monk <kren...@gmail.com> wrote:
>>
>>> It's not an extension that gets bundled with MediaWiki releases.
>>>
>>>
>> That doesn't mean third parties aren't using it. When I say a release of
>> the extension, I mean give it a version number, increase the version
>> number, tag it in git, then tell people "ensure you are using version x or
>> greater of MobileFrontend".
>>
>> This is a pretty normal process that Wikimedia does well for other things.
>> I have a feeling this isn't going through a normal process...
>>
>
> I'm pretty sure that doing git tags in extensions for new versions is
> not normal procedure.
>
> I can't recall any extension ever doing that (unless you mean the
> REL1_26-style tags).
>
> Which is not to say that I necessarily disagree with doing that
> procedure; I just think it's unfair to call it the normal procedure,
> given that I don't think it has ever been used for extensions.
>
> Regardless of what procedures are decided as good practice for
> extensions, formalizing the procedures for security releases of
> non-bundled extensions that are maintained by the WMF would probably
> be a good idea.
>
> --
> -bawolff


Re: [Wikitech-l] Security patch

2016-04-26 Thread bawolff
On Tue, Apr 26, 2016 at 3:08 PM, Ryan Lane <rlan...@gmail.com> wrote:
> On Tue, Apr 26, 2016 at 12:01 PM, Alex Monk <kren...@gmail.com> wrote:
>
>> It's not an extension that gets bundled with MediaWiki releases.
>>
>>
> That doesn't mean third parties aren't using it. When I say a release of
> the extension, I mean give it a version number, increase the version
> number, tag it in git, then tell people "ensure you are using version x or
> greater of MobileFrontend".
>
> This is a pretty normal process that Wikimedia does well for other things.
> I have a feeling this isn't going through a normal process...
>

I'm pretty sure that doing git tags in extensions for new versions is
not normal procedure.

I can't recall any extension ever doing that (unless you mean the
REL1_26-style tags).

Which is not to say that I necessarily disagree with doing that
procedure; I just think it's unfair to call it the normal procedure,
given that I don't think it has ever been used for extensions.

Regardless of what procedures are decided as good practice for
extensions, formalizing the procedures for security releases of
non-bundled extensions that are maintained by the WMF would probably
be a good idea.

--
-bawolff


Re: [Wikitech-l] Security patch

2016-04-26 Thread bawolff
On Tue, Apr 26, 2016 at 2:44 PM, Jon Robson <jrob...@wikimedia.org> wrote:
> A security vulnerability has been discovered in MediaWiki setups which
> use MobileFrontend.
>
> Revisions whose visibility had been altered were showing up in parts
> of the mobile UI.
>
> All projects in the Wikimedia cluster have been since patched but if
> you use this extension please be sure to apply the fix.
>
> Patch file and issue are documented on 
> https://phabricator.wikimedia.org/T133700
>
> Note there is some follow-up work to do which is tracked in:
> https://phabricator.wikimedia.org/T133722
>

For these sorts of things, could we include the extension name in the
subject line? Otherwise some people might think it's a general
MediaWiki security issue.

Thanks,
--
-bawolff


Re: [Wikitech-l] Best practices for read/write vs read-only requests, and our multi-DC future

2016-04-21 Thread bawolff
On Thu, Apr 21, 2016 at 1:45 AM, Brion Vibber <bvib...@wikimedia.org> wrote:
> Over in TimedMediaHandler extension, we've had a number of cases where old
> code did things that were convenient in terms of squishing read-write
> operations into data getters, that got removed due to problems with long
> running transactions or needing to refactor things to support
> future-facing multi-DC work where we want requests to be able to more
> reliably distinguish between read-only and read-write. And we sometimes
> want to put some of those clever hacks back and add more. ;)
>
> For instance in https://gerrit.wikimedia.org/r/284368 we'd like to remove
> transcode derivative files of types/resolutions that have been disabled
> automatically when we come across them. But I'm a bit unsure it's safe to
> do so.
>
> Note that we could fire off a job queue background task to do the actual
> removal... But is it also safe to do that on a read-only request?
> https://www.mediawiki.org/wiki/Requests_for_comment/Master_%26_slave_datacenter_strategy_for_MediaWiki
> seems to indicate job queueing will be safe, but would like to confirm
> that. :)
>
> Similarly in https://gerrit.wikimedia.org/r/#/c/284269/ we may wish to
> trigger missing transcodes to run on demand, similarly. The actual re
> encoding happens in a background job, but we have to fire it off, and we
> have to record that we fired it off so we don't duplicate it...
>
> (This would require a second queue to do the high-priority state table
> update and queue the actual transcoding job; we can't put them in one queue
> because a backup of transcode jobs would prevent the high priority job from
> running in a timely fashion.)
>
> A best practices document on future-proofing for multi DC would be pretty
> awesome! Maybe factor out some of the stuff from the RfC into a nice dev
> doc page...
>
> -- brion
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

When doing something like that from a read request, there's also a
problem with popular pages: there may be many views (perhaps
thousands, if the queue is a little backed up) before the job is
processed. If each view triggers the job, and views only stop
enqueuing it once the job has actually run, a large number of useless
duplicate jobs may be enqueued before the first one is finally
executed.
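
One way to guard against that is to wrap the enqueue in a short-lived
cache flag, so only the first view in a given window actually pushes
the job. A minimal sketch, assuming a memcached-style cache (the key
and the job parameters here are illustrative, not actual TMH code):

    $cache = wfGetMainCache(); // a BagOStuff instance
    $key = wfMemcKey( 'tmh', 'transcode-queued', $title->getArticleID() );
    // BagOStuff::add() does nothing and returns false if the key
    // already exists, so concurrent views race for a single winner
    // per TTL window instead of each enqueuing a duplicate job.
    if ( $cache->add( $key, 1, 3600 ) ) {
        JobQueueGroup::singleton()->push(
            // $transcodeKey: which derivative to generate (illustrative)
            new WebVideoTranscodeJob( $title, array( 'transcodeKey' => $transcodeKey ) )
        );
    }

The flag can simply be left to expire, or be cleared explicitly when
the job completes.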

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unique Devices data available on API

2016-04-19 Thread bawolff
> it does not include any
> cookie by which your browser history can be tracked [3].

Umm, it does involve a cookie, one which tracks whether you have
previously browsed the site. While I applaud the analytics team for
using such a privacy-friendly method, I'm not sure it's an entirely
truthful statement to say that it does not involve a cookie by which
your browser history can be tracked. The date you last visited sounds
like a part of your browser history to me.

To be clear, this is not meant to be a criticism. I think the approach
that is being taken is really great.
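
For what it's worth, a toy sketch of the idea (this is *not* the
actual implementation - as I understand it the real counting happens
in the caching layer - and incrementDailyUniques() below is a
hypothetical stand-in):

    // If the last-access cookie is absent or stale, this is the first
    // request from this device today: count it, then refresh the cookie.
    $today = gmdate( 'd-M-Y' );
    $last = isset( $_COOKIE['WMF-Last-Access'] )
        ? $_COOKIE['WMF-Last-Access'] : null;
    if ( $last !== $today ) {
        incrementDailyUniques(); // hypothetical counter
        setcookie( 'WMF-Last-Access', $today, time() + 32 * 24 * 3600 );
    }

The only client-side state is the date of the previous visit - which
is exactly why the approach is so privacy friendly, and also why it
is, strictly speaking, a piece of browsing history.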

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Kunal (User:Legoktm) moving to Parsing Team

2016-04-08 Thread bawolff
On Fri, Apr 8, 2016 at 6:13 AM, Ricordisamoa
<ricordisa...@openmailbox.org> wrote:
> I wish there were a Legoktm Team within the Legoktm Department

Hopefully come May 11, we will have a strong voice on the board to
help push forward the Legoktm team within the Legoktm department of
the Legoktm Foundation :D

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployment of GeoGebra MediaWiki Extension

2016-03-02 Thread bawolff
> Hope this is ok. If not, I will ask the GeoGebra team for help.

See https://www.mediawiki.org/wiki/Manual:Coding_conventions for
details. A lot of this is superficial stuff, but having the code look
like the rest of MediaWiki really makes it easier for us to figure
out what's going on at a glance.

>
>> *Javascript was loaded via resource loader (external js libraries
>> don't have to follow MW coding conventions, but it would help if any
>> js specific only to the extension did).
> The extension uses $out->addScript(). Does this follow MW coding conventions?

Since MediaWiki 1.17, we prefer scripts to be loaded using the
ResourceLoader mechanism instead of $out->addScript(). See
https://www.mediawiki.org/wiki/ResourceLoader
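
Roughly, that means registering a module and then loading it by name.
A minimal sketch (the module and file names here are made up for
illustration):

    $wgResourceModules['ext.geoGebra'] = array(
        'scripts' => array( 'resources/ext.geoGebra.js' ),
        'styles' => array( 'resources/ext.geoGebra.css' ),
        'localBasePath' => __DIR__,
        'remoteExtPath' => 'GeoGebra',
    );

    // Then, wherever the drawing is rendered:
    $out->addModules( 'ext.geoGebra' );

ResourceLoader then takes care of minification, bundling and caching
for you.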

>> All loaded javascript should
>> be included in the extension (Loading js hosted on other sites is not
>> allowed on Wikipedia in order to protect user privacy)
> That is very hard. I have no access to the GeoGebra JS, although it
> is basically open source. This contradicts one of the concepts of
> GeoGebra - load its JavaScript from geogebra.org to always get the
> newest version.

This is definitely a hard requirement. We need all code loaded by
Wikipedia and friends to be in the repo, to ensure that other sites
don't collect inappropriate analytics (violating our privacy policy)
or deliver malicious content. While I'm sure that GeoGebra would not do
either of those things, the general principle is that we want to be
in control of all code, so that we know for sure we would never
have to worry about that sort of thing.

>> *Given that (Assuming I'm understanding correctly) this is basically
>> for viewing a fixed file (As opposed to something that people edit in
>> their browser), it would probably be better to implement this as a
>> MediaHandler extension, as opposed to a tag extension.
> I will RTFM about MediaHandler extensions...

The manual for MediaHandler extensions is kind of non-existent
(sorry). But basically they are registered via the $wgMediaHandlers
variable and subclass MediaHandler. The PagedTiffHandler, PDFHandler and
TimedMediaHandler extensions are all examples (TimedMediaHandler isn't
the best example as the code is kind of messy, but it's one of the few
that involves loading custom JS).
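
To give a rough idea of the shape (the MIME type, class name and
parameter handling below are illustrative guesses, not working
GeoGebra code):

    $wgMediaHandlers['application/vnd.geogebra.file'] = 'GeoGebraHandler';

    class GeoGebraHandler extends MediaHandler {
        public function getParamMap() {
            // Lets [[File:Foo.ggb|964px]]-style width parameters work.
            return array( 'img_width' => 'width' );
        }

        public function validateParam( $name, $value ) {
            return $name === 'width' && intval( $value ) > 0;
        }

        public function makeParamString( $params ) {
            return isset( $params['width'] )
                ? intval( $params['width'] ) . 'px' : '';
        }

        public function parseParamString( $str ) {
            return preg_match( '/^(\d+)px$/', $str, $m )
                ? array( 'width' => intval( $m[1] ) ) : false;
        }

        public function getImageSize( $image, $path ) {
            // .ggb files have no intrinsic pixel size; report a default.
            return array( 800, 600 );
        }

        public function doTransform( $image, $dstPath, $dstUrl, $params, $flags = 0 ) {
            // A real handler returns a MediaTransformOutput subclass
            // whose toHtml() emits a container element plus a
            // ResourceLoader module that boots the viewer; returning
            // false here just signals a failed transform.
            return false;
        }
    }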

>> I doubt that
>> embedding base64 encoded files directly into pages is going to be ok.
> Use of base64 encoded files is only for legacy reasons.
> You are able to embed a GeoGebra drawing from http://tube.geogebra.org/
> by passing the file ID as a parameter. Example (German):
> http://wikis.zum.de/wikihilfe/GeoGebra/GeoGebraTube_einbinden
> Syntax is simple and user friendly:
> 

Probably we'd want people to be able to upload GeoGebra files to the
wiki, and then include them on pages with code like
[[File:Foo.ggb|964px|version=4.2]]. I'm unsure how people would feel
about having a separate repository of GeoGebra files.


>I am not sure what you mean by "some esoteric visualisation template".
>Which wiki do you mean by "a Wikimedia wiki"? Wikipedia?

Wikimedia wiki = any wiki operated by the Wikimedia Foundation. The
most relevant to your case are the Wikipedias or the Wikibooks (we
generally consider each language to be a separate wiki, so English
Wikipedia is a separate wiki from German Wikipedia, etc.). The
full list is at https://meta.wikimedia.org/wiki/Special:SiteMatrix

>Which test wiki should I use? I have installed two test wikis to test
>the extension with different versions of MediaWiki software, but
>there are only "proof of concept" examples. Is it possible to get a
>kind of clone of Wikipedia with my extension installed, using
>"Tool Labs" or "Wikimedia Labs", to play around and replace some
>static geometric drawings by dynamic GeoGebra drawings?

Wikimedia Labs is basically a hosting environment for things related
to Wikimedia. Wikipedia is really big, so you can't really set up a
full clone there (or at least, it's just as difficult there as anywhere
else). You can set up a wiki though, and import a couple of pages from
Wikipedia so you have some examples for demos.

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployment of GeoGebra MediaWiki Extension

2016-02-27 Thread bawolff
On Sat, Feb 27, 2016 at 4:28 PM, Rudolf Großmann <r...@gmx.de> wrote:
> Brian Wolff  gmail.com> writes:
>>
>> Hi,
>>
>> Can you clarify what you mean by deployed with mediawiki? Do you mean
>> deployed on a wikimedia project (e.g. Wikipedia)? If so, which ones? Or all
>> of them? Or do you mean have it be included in the default mediawiki
>> tarball release?
>>
>> Thanks,
>> Bawolff
>
> Hi,
>
> The GeoGebra team and I would be glad to be able to enhance Wikipedia with
> GeoGebra drawings. (I avoid the word "applet", because Java isn't used any
> more.) I thought a way to achieve this is to get the GeoGebra extension
> deployed with the MediaWiki software - to be more precise, the
> software you get at https://www.mediawiki.org/wiki/Download - what you call
> "default mediawiki tarball release", I think.
>
> Please let me know if there is a better way. You mentioned the
> "wikimedia project Wikipedia". Will it be easier just to get the extension
> installed at Wikipedia (e.g. like it is at wikis.zum.de) and not bothering
> with MediaWiki?
>
> Rudi


The extensions that are included with
https://www.mediawiki.org/wiki/Download have no bearing on which
extensions are used on Wikipedia. We don't have a fixed process for
deciding which extensions should be included with the MediaWiki
tarball; so far we've basically included just the extensions that are
both super-popular and super-general. I'm not sure GeoGebra would
be a good choice for the MediaWiki tarball, since it's somewhat of a
specialized use case that a great many wikis do not have (but that's
just my opinion).

For Wikipedia in general:
Getting an extension deployed on Wikipedia can be a challenging, and
often very frustrating, process. Usually it has to pass a code review
check and a security check (among other things). Even getting people
to do those reviews can be challenging: there's only a small number of
people who can do deployment reviews, they are very busy people, and
deployment reviews are generally the sort of thing that takes a
reviewer a long time to do - especially in this case, which involves
what is probably a very large JavaScript library.

In order to get it deployed on Wikipedia, it would probably help if
(this is just a stage-1 list of things; there would be other things to
do after this, and I haven't even looked at any of the JS code):
*The PHP code followed MediaWiki core coding conventions.
*JavaScript was loaded via ResourceLoader (external JS libraries
don't have to follow MW coding conventions, but it would help if any
JS specific to the extension did). All loaded JavaScript should
be included in the extension (loading JS hosted on other sites is not
allowed on Wikipedia, in order to protect user privacy).
*Given that (assuming I'm understanding correctly) this is basically
for viewing a fixed file (as opposed to something that people edit in
their browser), it would probably be better to implement this as a
MediaHandler extension, as opposed to a tag extension. I doubt that
embedding base64-encoded files directly into pages is going to be OK.

Ultimately, you'd also have to get Wikipedia editors to agree that the
extension would be useful (getting Wikipedia editors to agree on
anything is harder than it looks).

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mass migration to new syntax - PRO or CON?

2016-02-12 Thread bawolff
> Last thoughts on the thread, I got bigger fish to fry than array syntax
> sugar :D
>
> -Chad

+1 to that.

--
bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
