Re: [Wikitech-l] Wikipedia Issue

2013-11-14 Thread Nathan
I get an error, and beneath it there is a message that says "Your cache
administrator is: nobody". Looks like there is a bug on it that's 4 or 5
years old ;)


On Thu, Nov 14, 2013 at 2:10 PM, Nathan Larson
nathanlarson3...@gmail.com wrote:

 On Thu, Nov 14, 2013 at 2:05 PM, Derric Atzrott 
 datzr...@alizeepathology.com wrote:

  The issue seems to be confined to Wikipedia.  I am not seeing it affect
  Mediawiki.org
 
  Additionally the issue seems to be affecting more than just the English
  language Wikipedia.  The lojban language Wikipedia is also experiencing
  difficulties.
 
  Thank you,
  Derric Atzrott


 I'm getting it at MediaWiki.org intermittently too. Just lost an edit, in
 fact (my cache didn't save it for some reason; maybe it expired).

Re: [Wikitech-l] Tor exemption process (was: Re: Jake requests enabling access and edit access to Wikipedia via TOR)

2014-01-17 Thread Nathan
On Fri, Jan 17, 2014 at 1:21 PM, Erik Moeller e...@wikimedia.org wrote:

 On Mon, Jan 13, 2014 at 9:10 AM, Chris Steipp cste...@wikimedia.org
 wrote:
  To satisfy Applebaum's request, there needs to be a mechanism whereby
  someone can edit even if *all of their communications with Wikipedia,
  including the initial contact* are coming over Tor or equivalent.
  Blinded, costly-to-create handles (minted by Wikipedia itself) are one
  possible way to achieve that; if there are concrete reasons why that
  will not work for Wikipedia, the people designing these schemes would
  like to know about them.

  This should be possible, according to https://meta.wikimedia.org/wiki/NOP,
  which Nemo also posted. The user sends an email to the stewards (using tor
  to access the email service of their choice). An account is created, and the
  user can edit Wikimedia wikis. Or is there still a step that is missing?

 I tested the existing process by creating a new riseup.net email
 account via Tor, then requesting account creation and a global
 exemption via stewa...@wikimedia.org. My account creation request was
 granted, but for exemption purposes, I was requested to go through the
 process for any specific wiki I want to edit. In fact, the account was
 created on Meta, but not exempted there.

 The reason I gave is as follows:

 My reason for editing through Tor is that I would like to write about
 sensitive issues (e.g. government surveillance practices) and prefer not
 to be identified when doing so. I have some prior editing experience, but
 would rather not disclose further information about it to avoid any
 correlation of identities.

 This seems like a valid reason for a global exemption to me, so I'm
 not sure the current global policy is sufficient.


I use an anonymous encrypted VPN when accessing the Internet from home, and
found myself unable to edit on en.wp. I requested IPBE
(https://en.wikipedia.org/w/index.php?title=User_talk:Nathan&diff=572887054&oldid=568388651)
and was initially denied because I was able to turn off the VPN to edit.
Exemption was only granted because an administrator familiar with me was
watching my talkpage and stepped in.

Re: [Wikitech-l] Free fonts and Windows users

2014-04-09 Thread Nathan
On Wed, Apr 9, 2014 at 1:05 AM, Risker risker...@gmail.com wrote:
snip


 There is a bit of amnesia about the fact that almost all editors are also
 readers and regular users of the projects we create, and those editors have
 been encouraged since Day One to inform developers of any technical
 problems with the site; they're the canaries in the coal mine.  And for
 anyone who's been editing more than 3-4 years (an ever-increasing
 percentage of the editing community), software issues were things they
 had to pretty much solve themselves within the volunteer community. For
 years, most developers were volunteers too, and had close links to the
 editing community.  At least a good portion of you remember those days -
 because you used to be those volunteer developers that community members
 would hit up to fix things, and you used to watch the village pumps to
 figure out what else needed to be fixed. Until very recently, it was almost
 unheard-of for community members to be told that problematic things were
 going to stay that way because of a decision made by a small number of
 people in an office somewhere.  When most developers were clearly
 participating as community members, they behaved as though they were, at
 least to a significant extent, accountable to both the editing and
 Mediawiki communities; I'm not sure that sense of accountability exists
 anymore.  Now, I don't think anyone begrudges the many talented developers
 who started off within the community having taken the opportunity to move
 on to paid positions at the WMF, and I think on the whole the big-picture
 community is overall very happy with the majority of changes that have come
 with the increased professionalization of the site engineering and
 operations.  But this is a major change in culture, and the gulf between
 volunteers (either developer or editor) and WMF staff is widening
 noticeably as time progresses.  I could tell who was a volunteer and who
 was staff from the way their posts were responded to in this thread; I
 doubt that would have been the case even two years ago.

 snip




Which gulf is growing more quickly - between the WMF staff and volunteers,
or between the veteran editing population and the typical reader? I won't
argue that there is some distance, and a degree of conflict, between the
goals and priorities of the staff and many veteran volunteers. But this
distance hasn't occurred organically. It's been introduced consciously, and
mentioned on a number of occasions, by WMF leaders like Sue and Erik. They
have often made the point that the WMF has to consider first the hundreds
of millions of readers who compose our intended audience.

No one has said, that I've seen, that complaints and problems from the
editing community should be ignored. But I think Steve and Jon are
struggling with how to react to complaints without knowing how
representative they are of the reader experience. If complaints are
presented by a tiny percentage of the editing community, how many should
they assume are encountering the problem but not reporting it? If
readership numbers don't change, and people who log in to complain are
by definition lumped in with "editors", how do we assess the impact on
readers and whether or not many readers are having difficulty?

These aren't easy questions to answer. Many people complaining about this
change, and others, seem to make different automatic assumptions about
relevance and impact than the WMF staff, or they don't consider it at all.
To be fair to the engineering staff, we should keep in mind that they have
to relate complaints from developers and editors to the experience of
hundreds of millions of readers and make decisions principally (but not
wholly) based on the latter, not the former.

Re: [Wikitech-l] Another Wikipedia design concept

2014-04-15 Thread Nathan
In the comment thread at the bottom, someone gave him a heads-up about the
fonts controversy; hopefully he doesn't get totally discouraged from
MediaWiki design studies after reading it ;)

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-09 Thread Nathan
On Mon, Jun 9, 2014 at 5:30 AM, Martijn Hoekstra martijnhoeks...@gmail.com
wrote:



 That's precisely my point. Because current talk page discussions are - on
 the software level - unstructured, it allows social conventions to do
 everything you want it to do structure wise, and to invent new uses as we
 go. The domain model is just so complicated that we found that we need
 that power in implementation (even if we all(tm)[citation needed] agree
 that the current discussion form is pretty horrible UI-wise). If you're
 going to force a software structure onto it, it will most likely not be as
 powerful[1] as what there is now, and it represents a trade-off without us
 having a clear sight of what we will lose, even if we can have a somewhat
 clear picture of what we will gain: easier use and navigability for the
 kinds of discussions we do support. Discussing a trade off where only one
 side is known is hard.

 [1] i.e. a tree structure is far less powerful than what we have now to
 approximate the domain, a DAG with dividable nodes probably comes closer,
 and is already fiendishly complicated to pull off on a UI level. And then I
 haven't even gone in to the current practices of taking a comment back by
 striking it, which means that nodes aren't only arbitrarily and multiply
 dividable, but also mutable over time in a linear(?) history? 'sblood man,
 discussions are complicated.

 -- Martijn


The assumption I think you are making here is that the goal is to get a
discussion system with maximum functionality and flexibility. I think that
is an interesting goal as a technical exercise, but I'm not sure I agree
that this is a valuable goal for Wikimedia (although it may be for
MediaWiki). The problem we have is one of non-technical users put off by
excessively technical (and unstructured) means of communication.

So I think a much better goal, and one that I know others share and has
been the focus of development work, is to adopt a system of communication
that is easy and intuitive for as many participants as possible. This may
mean the problem of inventing a better mousetrap needs to be set aside in
favor of determining what mechanism is the most familiar and easy for the
majority of Wikimedia users. If that leads us to adopting something similar
to vBulletin or another threading w/quotes discussion system, then so be
it. This is a little part of what I think Risker was getting at; the
technical side is focused on creating neat new products and features, but
not always necessarily solving the problems that we actually have.

As a general principle, I see this kind of thinking come up a lot in
Wikimedia settings - the community is deeply committed to a very innovative
project, which leads to the impulse to be innovative in many ways. But the
core innovation is the wiki and the mission; it's not necessary to invent
the best-in-class [governance system, donation system, discussion
mechanism, feedback tool, etc.], and in many cases the distraction and
uncertainty of trying to innovate in multiple channels presents a
liability.

Re: [Wikitech-l] [Wikimedia-l] First _draft_ goals for WMF engineering/product

2014-06-23 Thread Nathan
On Mon, Jun 9, 2014 at 9:24 PM, Erik Moeller e...@wikimedia.org wrote:

 Hi all,

 We've got the first DRAFT (sorry for shouting, but can't hurt to
 emphasize :)) of the annual goals for the engineering/product
 department up on mediawiki.org. We're now mid-point in the process,
 and will finalize through June.

 https://www.mediawiki.org/wiki/Wikimedia_Engineering/2014-15_Goals

 Note that at this point in the process, teams have flagged
 inter-dependencies, but they've not necessarily been taken into
 account across the board, i.e. team A may say "We depend on X from
 team B" and team B may not have sufficiently accounted for X in its
 goals. :P Identifying common themes, shared dependencies, and
 counteracting silo tendencies is the main focus of the coming weeks.
 We may also add whole new sections for cross-functional efforts not
 currently reflected (e.g. UX standardization). Site performance will
 likely get its own section as well.

 My own focus will be on fleshing out the overall narrative, aligning
 around organization-wide objectives, and helping to manage scope.

 As far as quantitative targets are concerned, we will aim to set them
 where we have solid baselines and some prior experience to work with
 (a good example is Wikipedia Zero, where we now have lots of data to
 build targets from). Otherwise, though, our goal should be to _obtain_
 metrics that we want to track and build targets from. This, in itself,
 is a goal that needs to be reflected, including expectations e.g. from
 Analytics.

 Like last year, these goals won't be set in stone. At least on a
 quarterly basis, we'll update them to reflect what we're learning.
 Some areas (e.g. scary new features like Flow) are more likely to be
 significantly revised than others.

 With this in mind: Please leave any comments/questions on the talk
 page (not here). Collectively we're smarter than on our own, so we do
 appreciate honest feedback:

 - What are our blind spots? Obvious, really high priority things we're
 not paying sufficient attention to?

 - Where are we taking on too much? Which projects/goals make no sense
 to you and require a stronger rationale, if they're to be undertaken at
 all?

 - Which projects are a Big Deal from a community perspective, or from
 an architecture perspective, and need to be carefully coordinated?

 These are all conversations we'll have in coming weeks, but public
 feedback is very helpful and may trigger conversations that otherwise
 wouldn't happen.

 Please also help to carry this conversation into the wikis in coming
 weeks. Again, this won't be the only opportunity to influence, and
 I'll be thinking more about how the quarterly review process can also
 account for community feedback.

 Warmly,

 Erik



Hi Erik,

Can you describe how specific engineering projects are helping to address
the gender gap? Do you typically review user-facing projects with the
gender gap specifically in mind? I notice that while there is a FOSS
outreach program for women, there is nothing specific in the Wikimania
Hackathon materials to suggest that an effort was made to attract women to
this event. There is another hackathon planned for early next year - will
engineering make the gender gap part of the goals of that event?

I also notice that the editor engagement projects don't specifically list
the gender gap or women users. I think that testing new features (such as
communication tools, notifications, teahouse-style innovations and others
that bear specifically on interactions) with female user/focus groups would
greatly improve our understanding of how these tools impact the gender gap.
For example:  If part of the plan for the growth team is to invite
anonymous editors to sign up, why not tailor some of those invitations
specifically to female anonymous editors? Then you could add a measure,
retention of female editors, to that particular growth project. The results
should be instructive.

Likewise, it would be nice to see gender-gap-related goals within
VisualEditor and the user experience group's goals, and to see some focus
from the Analytics and Research teams on measuring and understanding the
gap better. Needless to say, it's a little disappointing that the FOSS
outreach program (which currently has 8 participants) is the only mention
of women in the entire goals document, and the gender gap is never
mentioned.

Nathan

Re: [Wikitech-l] [Wikimedia-l] First _draft_ goals for WMF engineering/product

2014-06-23 Thread Nathan
On Mon, Jun 23, 2014 at 3:45 PM, Brian Wolff bawo...@gmail.com wrote:

   If part of the plan for the growth team is to invite
  anonymous editors to sign up, why not tailor some of those invitations
  specifically to female anonymous editors? Then you could add a measure,
  retention of female editors, to that particular growth project. The results
  should be instructive.

 At the risk of biting into something contentious... How would that
 work? What would the message say: "Have you considered creating an
 account? It's free. Doubly recommended if you're a woman!" I have
 trouble imagining a gender-specific account creation invitation that
 doesn't sound creepy.

 Additionally, actually measuring success rates by gender would require
 knowing people's gender. It's conceivable that requiring people to
 disclose their gender could have a negative impact on the gender gap.
 (Or maybe it wouldn't. I have no idea)

 --bawolff

 



I'm not sure what the messaging is planned to look like, but an appeal
oriented towards women doesn't seem like an insurmountable obstacle: "Did
you know only 10% of Wikipedia editors are women? If you are a woman
reading this, we need your help!" And then you can track sign-ups through
that particular message, figuring that a substantial proportion of them
will actually be women.

And you could also ask, during sign-up, for people to self-identify
confidentially: "We're trying to increase the proportion of our fellow
editors who are women; would you mind telling us if you identify as female?
[Y/N]" And then don't publish that anywhere except in aggregate.
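
To illustrate the aggregate-only reporting idea, here is a rough SQL sketch; it assumes the existing 'gender' preference stored in MediaWiki's user_properties table (a confidential sign-up question could write to the same place), and it only reports totals, never individual answers:

SELECT up_value AS gender, COUNT(*) AS users
FROM user_properties
WHERE up_property = 'gender'
GROUP BY up_value;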

Re: [Wikitech-l] [EE] IRC web client for Wikipedia help

2014-08-11 Thread Nathan
Newbies are going to fail hard at IRC. Pretty much all of the questions Seb
poses for a built-in newbie chat still exist with a built-in Freenode
interface, with the addition of a complicated and often difficult (not to
mention culturally... unique) environment. Much better to think along the
lines of the Teahouse, but live. You can jump into a chat queue, and people
who want to help chat with you, and you can close the chat whenever you
want, and you can't contact people outside of the queue using chat.

Re: [Wikitech-l] changing edit summaries

2014-11-13 Thread Nathan
I can see it being useful in two circumstances:

1) As part of the oversight right, in order to edit an edit summary without
hiding the entire revision
2) A right of a user to edit their own edit summaries, if the edit summary
is blank

Since it's possible and at least some people are interested in it, I don't
see the downside of making it available in MediaWiki even if most Wikimedia
projects might not use it.

Re: [Wikitech-l] changing edit summaries

2014-11-13 Thread Nathan
On Thu, Nov 13, 2014 at 1:09 PM, John phoenixoverr...@gmail.com wrote:

 Issues arise in the fact that malicious editors can abuse it after the
 initial review has been done. Or you can run into cases where offensive
 material is added attacking another editor, so editor B reports the issue
 and before anyone has a chance to review it editor A changes it back to
 something innocent. (rinse repeat for a while before A finally gets
 blocked, but meanwhile B is taking the brunt of abuse until an admin
 catches on) and there is no way of proving what an edit was at any given
 time.

 The biggest thing that you need to realize is that regardless of the intent
 of something, it will be abused; how and to what degree can be controlled.
 Given that just about everything in MediaWiki has a paper trail (MediaWiki
 keeps logs for all actions; some are just not visible without specific
 rights), introducing a feature that doesn't is not a good idea.


I don't think anyone is unaware of the potential for abuse, but that is not
a strong argument against allowing any form of editing edit summaries. A
simple limit would take care of most forms of abuse - either limit it to
trusted users (e.g. oversight), or permit it only on blank edit summaries
and only by the original user. You can even restrict it to a single change,
and then the use case would be: "Oops, I forgot to include an edit summary;
rather than adding a new revision or leaving it blank, I'll just go add it
now."
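
To make that concrete, a hypothetical LocalSettings.php sketch; neither right exists in MediaWiki today, the names are invented for this example, and the feature itself would have to define and enforce them:

// Hypothetical rights, for illustration only.
$wgGroupPermissions['oversight']['editsummary-any'] = true;   // trusted users may edit any summary
$wgGroupPermissions['user']['editsummary-own-blank'] = true;  // everyone else: only their own, and only if it was left blank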

Re: [Wikitech-l] Feature request.

2014-11-19 Thread Nathan
On Mon, Nov 17, 2014 at 1:56 PM, James Forrester jforres...@wikimedia.org
wrote:

 On 16 November 2014 14:36, Pine W wiki.p...@gmail.com wrote:

  James: would it be possible to automatically save the text of a page to a
  user's sandbox when they encounter an edit conflict? This would overwrite
  the content of the sandbox, but that could be reverted using the normal
  history page for the sandbox.
 

 Hmm. Publishing content without the user clicking the "Save page" button a
 second time feels very icky. Also, would we need to go around and
 auto-delete these for users once the edit conflict was fixed? This doesn't
 sound like a perfect solution. There's also the issue with the user's
 sandbox not existing as an actual thing, just a hack that a couple of
 wikis have invented…

 Maybe we should make the (read-only) "your content" box more prominent,
 appearing before the "current content" one? Not sure reversing the order
 would help more than it would hinder everyone.

 J.
 --


Why not just interleave or nest the conflicted edit in the history of the
page? So if you are editing revision 1, and conflict with someone else's
revision 2, save your revision as 2 and the next person's revision as 3?
There's some ugliness in the revision history to resolve (like timestamps),
and potentially other ways it could be done, but I see no reason not to
slot the conflicted edit into the history while prompting the user to merge
into a current revision.

Re: [Wikitech-l] Please comment on the draft consultation for splitting the admin role

2018-06-11 Thread Nathan
Is the risk of an attacker taking over an account with CSS/JS edit
permissions any more or less because that person knows how to use CSS/JS?
If the criteria will be that only people who know how to use CSS/JS will
get access to make those edits, I'm not sure that is perfectly tailored to
the need being identified - security from outside threats. Can we make the
edit right temporary, so someone can request it through a normal simple
process, execute their edits, and then relinquish it? It can be a right
that admins could grant to each other, as long as they can't gift it to
themselves.
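
A rough LocalSettings.php sketch of the grant-and-relinquish idea; the 'interface-editor' group name is invented, the editsitecss/editsitejs rights are the ones the consultation proposes splitting out, and note that these settings alone cannot stop an admin from granting the group to themselves, so that part would remain policy or need extra code:

$wgGroupPermissions['interface-editor']['editsitecss'] = true;
$wgGroupPermissions['interface-editor']['editsitejs'] = true;
$wgGroupPermissions['sysop']['editsitecss'] = false;  // ordinary admins no longer hold the rights
$wgGroupPermissions['sysop']['editsitejs'] = false;
$wgAddGroups['sysop'][] = 'interface-editor';          // admins can grant the group...
$wgRemoveGroups['sysop'][] = 'interface-editor';       // ...and remove it again when the work is done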

Re: [Wikitech-l] Question to WMF: Backlog on bugs

2019-03-24 Thread Nathan
I think it was doomed to fail as soon as people argued that an organization
with an ~$80m annual budget had too many "resource constraints" to address
a backlog of bugs in its core product. That happened in the first five or
so replies to the thread!

On Sun, Mar 24, 2019 at 10:05 PM John Erling Blad  wrote:

> It is a strange discussion, especially as it is now about how some
> technical debts are not _real_ technical debts. You have some code,
> and you change that code, and breakage emerges both now and for future
> projects. That creates a technical debt. Some of it has a more
> pronounced short-term effect (user-observed bugs), and some of it has a
> more long-term effect (it blocks progress). At some point you must fix
> all of them.
>
> On Thu, Mar 21, 2019 at 11:10 PM Pine W  wrote:
> > It sounds like we have different perspectives. However, I get the impression
> > that people are getting tired of this topic, so I'll move on.
>
> I don't think this will be solved, so "move on" seems like an obvious
> choice.
>

Re: [Wikitech-l] Ethical question regarding some code

2020-08-06 Thread Nathan
 I appreciate that Amir is acknowledging that as neat as this tool sounds,
its use is fraught with risk. The comparison that immediately jumped to my
mind is predictive algorithms used in the criminal justice system to assess
risk of bail jumping or criminal recidivism. These algorithms have been
largely secret, their use hidden, their conclusions non-public. The more we
learn about them, the more deeply flawed it's clear they are. Obviously the
real-world consequences of these tools are more severe in that they
directly lead to the incarceration of many people, but I think the
comparison is illustrative of the risks. It also suggests the type of
ongoing comprehensive review that should be involved in making this tool
available to users.

The potential misuse here to be concerned about is by amateurs with
unsociable intent, or by intended users who are reckless or ignorant of
the risks. Major governments have the resources to easily build this
themselves, and if they care enough about fingerprinting Wikipedians they
likely already have.

I think if the tool is useful and there's a demand for it, everything about
it - how it works, who uses it, what conclusions and actions are taken as a
result of its use, etc - should be made public. That's the only way we'll
discover the multiple ways in which it will surely eventually be misused.
SPI has been using these 'techniques' in a manual way, or with
unsophisticated tools, for many years. But like any tool, the data fed into
it can be training the system incorrectly. The results it returns can be
misunderstood or intentionally misused. Knowledge of its existence will
lead the most sophisticated to beat it, or intentionally misdirect it.
People who are innocent of any violation of our norms will be harmed by its
use. Please establish the proper cultural and procedural safeguards to
limit the harm as much as possible.

Re: [Wikitech-l] Ethical question regarding some code

2020-08-08 Thread Nathan
For my part, I think Amir is going way above and beyond to be so thoughtful
and open about the future of his tool.

I don't see how any part of it constitutes creating biometric identifiers,
nor is it obvious to me how it must remove anonymity of users.

John, perhaps you can elaborate on your reasoning there?

Ultimately I don't think community approval for this tool is technically
required. I appreciate the effort to solicit input and don't think it would
hurt to do that more broadly.

On Sat, Aug 8, 2020 at 5:44 PM John Erling Blad  wrote:

> Please stop calling this an “AI” system, it is not. It is statistical
> learning.
>
> This is probably not going to make me popular…
>
> In some jurisdictions you will need a permit to create, manage, and store
> biometric identifiers, no matter if the biometric identifier is for a known
> person or not. If you want to create biometric identifiers, and use them,
> make darn sure you follow every applicable law and rule. I'm not amused by
> the idea of having CUs using illegal tools to vet ordinary users.
>
> Any system that tries to remove anonymity of users on Wikipedia should have
> an RfC where the community can make their concerns heard. This is not the
> proper forum to get acceptance from Wikipedia's community.
>
> And btw, systems for cleanup of prose exist for a whole bunch of
> languages, not only English. Grammarly is one, LanguageTool another, and
> there are a whole bunch of other such tools.
>
> lør. 8. aug. 2020, 19.42 skrev Amir Sarabadani :
>
> > Thank you all for the responses, I try to summarize my responses here.
> >
> > * By closed source, I don't mean it will be only accessible to me. It's
> > already accessible by another CU and one WMF staff member, and I would gladly
> > share the code with anyone who has signed the NDA, and they are of course more
> > than welcome to change it. GitHub has a really low limit for people who can
> > access a private repo, but I would be fine with any means to fix this.
> >
> > * I have read that people say that there are already public tools to
> > analyze text. I disagree: 1) the tools you mentioned are for English and
> > not other languages (maybe I missed something), and even if we imagine there
> > would be such tools for big languages like German and/or French, they don't
> > cover lots of languages, unlike my tool, which is basically language agnostic
> > and depends on the volume of discussions that have happened in the wiki.
> >
> > * I also disagree that it's not hard to build. I have lots of experience
> > with NLP (with my favorite work being a tool that finds swear words in
> > every language based on history of vandalism in that Wikipedia [1]) and
> > still it took me more than a year (a couple of hours almost every
> > weekend) to build this. Analyzing pure clean text is not hard; cleaning
> > up wikitext and templates and links to get only the text people "spoke" is
> > doubly hard, and analyzing user signatures brings only suffering and sorrow.
> >
> > * While in general I agree that if a government wants to build this, they
> > can, reality is more complicated, and this situation is similar to security.
> > You can never be 100% secure, but you can increase the cost of hacking you
> > so much that it would be pointless for a major actor to do it. Governments
> > have a limited budget, and dictatorships are by design corrupt and filled
> > with incompetent people [2], and sanctions put another restraint on such
> > governments too, so I would not give them such an opportunity for oppression
> > on a silver platter for free; if they really want to, then they must pay for
> > it (which means they can't use that money/those resources on oppressing some
> > other groups).
> >
> > * People have said this AI is easy to game; while it's not that easy,
> > and the tools you mentioned are limited to English, it's still a big win
> > for the integrity of our projects. It boils down again to increasing the
> > cost. If a major actor wants to spread disinformation, so far they only
> > need to fake their UA and IP, which is a piece of cake and I already see
> > that (as a CU), but now they have to mess with UA/IP AND change their
> > methods of speaking (which is one order of magnitude harder than changing
> > IP). As I said, increasing this cost might not prevent it from happening
> > but at least it takes away the ability of oppressing other groups.
> >
> > * This tool will never be the only reason to block a sock. It's more than
> > anything a helper: if a CU brings a large range and they are similar but the
> > result is not conclusive, this tool can help. Or when we are 90% sure it's
> > a WP:DUCK, this tool can help too, but blocking just because this tool said
> > so would imply a "Minority Report" situation and, to be honest, I would
> > really like to avoid that. It is supposed to empower CUs.
> >
> > * Banning using this tool is not possible legally; the content of Wikipedia
> > is published under CC-BY-SA and this allows such 

[Wikitech-l] Deletion schema proposal: Bug 55398

2013-10-18 Thread Nathan Larson
In one of the earlier discussions about revising the deletion schema,
Happy-Melon pointed out, "Right now everyone who has ideas for *an*
implementation isn't working on it because they don't know if it's *the*
implementation we want." It seems to me that if one proposal is clearly
superior, we may as well pick that one, so someone can get to work on it;
but if the proposals are about equally good, then we may as well pick one
at random, or whatever proposal is slightly better than the others, so
someone can get to work on that one.
http://thread.gmane.org/gmane.science.linguistics.wikipedia.technical/48109

The RFC over at MediaWiki.org hasn't received a lot of comment. Is there
any objection to my implementing the new field option described at
https://www.mediawiki.org/wiki/Requests_for_comment/Page_deletion ?

I.e., I would add a new field, page.pg_deleted (analogous to
revision.rv_deleted). Also, I would add revision.rv_logid to store the
log_id of the deletion event. Upon restoring a page, only the revisions
pertaining to the selected deletion event(s) would be restored.  So,
suppose you revision delete some revisions from a page for copyvio reasons.
Those revisions are now updated as rv_deleted=[whatever value we want to
use to signify the kind of access restrictions imposed by the deletion],
and revision.rv_logid=1, assuming that is the first log action ever
performed on that wiki.

Later, you delete the whole page (update rv_logid=2 for the remainder of
that page's revisions) for notability reasons. Then you decide you were
mistaken about notability, but the copyvios would still be a problem. You
undelete the page, selecting only that second deletion action. You only
restore the revisions that have rv_logid=2, and leave the rv_logid=1
revisions deleted.  This will render the archive table obsolete, so it can
be deprecated/eliminated.
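
A rough SQL sketch of the proposed change; the column names follow the proposal above rather than the current schema, and the types are illustrative, not final:

-- new fields
ALTER TABLE page ADD COLUMN pg_deleted TINYINT UNSIGNED NOT NULL DEFAULT 0;
ALTER TABLE revision ADD COLUMN rv_logid INT UNSIGNED DEFAULT NULL;
-- undeleting only the revisions hidden by deletion event 2
UPDATE revision SET rv_deleted = 0, rv_logid = NULL WHERE rv_logid = 2;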

I would like to code this for v1.23.
https://bugzilla.wikimedia.org/show_bug.cgi?id=55398 Thanks.

-- 
Nathan Larson
https://mediawiki.org/wiki/User:Leucosticte

Re: [Wikitech-l] Page title length

2013-10-24 Thread Nathan Larson
On Thu, Oct 24, 2013 at 1:01 PM, Élie Roux elie.r...@telecom-bretagne.eu wrote:

 Dear MediaWiki developers,

 I'm responsible for the development of a new Wiki that will contain many
 Tibetan resources. Traditionally, Tibetan titles of books or even parts of
 books are extremely long, as you can see for instance here:
 http://www.tbrc.org/#!rid=W23922,
 and sometimes too long for MediaWiki, for instance the title of

 http://www.tbrc.org/?locale=bo#library_work_ViewByOutline-O01DG1049094951%7CW23922

 , which is

 ཤངས་པ་བཀའ་བརྒྱུད་ཀྱི་གཞུང་བཀའ་ཕྱི་མ་རྣམས་ཕྱོགས་གཅིག་ཏུ་བསྒྲིལ་བའི་ཕྱག་ལེན་བདེ་ཆེན་སྙེ་མའི་ཆུན་པོ་

 . This title is around 90 Tibetan characters, but each character being 3
 bytes, it exceeds the limit for title length of 256 bytes that MediaWiki
 has.

 So I have two questions:

 1. If I change this limit to 1023 in the structure of the database
 ('page_title' field of the 'page' table), will other things (such as the
 search engine) break? Is there a way to change it more cleanly?

 2. Could I propose that you make this limit 1023 instead of 255 (or make
 it easily configurable)? This would allow at least 256 characters (even for
 Asian languages) instead of 256 bytes, which seems more consistent with the
 fact that MediaWiki is well internationalized.

 Thank you in advance,
 --
 Elie


Wouldn't ar_title, log_title, rc_title, and so on also need to have their
sizes increased? I didn't see any bug report proposing an increase in page
title size, although there was this one about comment size.
https://bugzilla.wikimedia.org/show_bug.cgi?id=4715
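
For a sense of what such a change involves, here are the kinds of MySQL statements that would be needed; this is only a partial, illustrative list, and several other title columns (pl_title, rd_title, wl_title, etc.) would need the same audit:

ALTER TABLE page CHANGE page_title page_title VARBINARY(1023) NOT NULL;
ALTER TABLE archive CHANGE ar_title ar_title VARBINARY(1023) NOT NULL;
ALTER TABLE logging CHANGE log_title log_title VARBINARY(1023) NOT NULL;
ALTER TABLE recentchanges CHANGE rc_title rc_title VARBINARY(1023) NOT NULL;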

Re: [Wikitech-l] Page title length

2013-10-24 Thread Nathan Larson

 Changing the maximum length of a title will require a huge number of
 changes and is not easy to make configurable so we should do this only
 one time changing the maximum length of title to the maximum we can
 feasibly make it.



 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


Reposted to https://www.mediawiki.org/wiki/Page_title_size_limitations .

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte
Distribution of my contributions to this email is hereby authorized
pursuant to the CC0 license (http://creativecommons.org/publicdomain/zero/1.0/).

Re: [Wikitech-l] Architectural leadership in Wikimedia's technical community

2013-11-05 Thread Nathan Larson
On Tue, Nov 5, 2013 at 9:10 PM, Yuvi Panda yuvipa...@gmail.com wrote:

 (snip)
 I actually like the formalism a bit - since it at least makes sure
 that they don't rot. BDFLs are good.


Does it keep them from rotting? It looks like of the 60 RFCs in draft or in
discussion, 24 were last updated before 2013.
https://www.mediawiki.org/wiki/User:Leucosticte/RFCs_sorted_by_%22updated%22_date

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte
Distribution of my contributions to this email is hereby authorized
pursuant to the CC0 license (http://creativecommons.org/publicdomain/zero/1.0/).

Re: [Wikitech-l] New Bugzilla users have restricted accounts

2013-11-09 Thread Nathan Larson
Why not make assigning bugs work sort of like FlaggedRevs -- as a new user,
you can make changes, but the changes won't take effect until they're
reviewed and approved? After a bureaucrat sees that you're behaving
responsibly, or after you've made a certain number of changes that have
been approved, then your changes are automatically approved.

With that system, you don't have to make a request of any particular
person; it's whoever is the first to see that there are changes that need
to be reviewed.

[Wikitech-l] The interwiki table as a whitelist of non-spammy sites

2013-11-13 Thread Nathan Larson
TL;DR: How can we collaboratively put together a list of non-spammy sites
that wikis may want to add to their interwiki tables for whitelisting
purposes; and how can we arrange for the list to be efficiently distributed
and imported?

Nemo bis points out that Interwiki
(https://www.mediawiki.org/wiki/Extension:Interwiki) is the
easiest way to manage whitelisting since nofollow isn't applied to
interwiki links. Should we encourage, then, wikis to make more use of
interwiki links? Usually, MediaWiki installations are configured so that
only sysops can add, remove, or modify interwiki prefixes and URLs. If a
user wants to link to another wiki, but it's not on the list, often he will
just use an external link rather than asking a sysop to add the prefix
(since it's a hassle for both parties and often people don't want to bother
the sysops too much in case they might need their help with something else
later). This defeats much of the point of having the interwiki table
available as a potential whitelist, unless the sysops are pretty on top of
their game when it comes to figuring out what new prefixes should be added.
In most cases, they probably aren't; the experience of Nupedia shows that
elitist, top-down systems tend not to work as well as egalitarian,
bottom-up systems.
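
For reference, whitelisting a site this way boils down to one row in the interwiki table (columns as in MediaWiki's interwiki table; the wiki in the example is made up):

INSERT INTO interwiki (iw_prefix, iw_url, iw_api, iw_wikiid, iw_local, iw_trans)
VALUES ('examplewiki', 'https://wiki.example.org/wiki/$1', '', '', 0, 0);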

Currently, interwiki.sql
(https://git.wikimedia.org/blob/mediawiki%2Fcore//maintenance%2Finterwiki.sql) has
100 wikis, and there doesn't seem to be much rhyme or reason to which
ones are included (e.g. Seattlewiki?). I wrote InterwikiMap
(https://www.mediawiki.org/wiki/Extension:InterwikiMap),
which dumps the contents of the wiki's interwiki table into a backup page
and substitutes in its place the interwiki table of some other wiki (e.g. I
usually use Wikimedia's), with such modifications as the sysops see fit to
make. The extension lets sysops add, remove and modify interwiki prefixes
and URLs in bulk rather than one by one through Special:Interwiki, which is
a pretty tedious endeavor. Unfortunately, as written it is not a very
scalable solution, in that it can't accommodate very many thousand wiki
prefixes before the backup wikitables it generates exceed the capacity of
wiki pages, or it breaks for other reasons.

I was thinking of developing a tool that WikiIndex (or some other wiki
about wikis) could use to manage its own interwiki table via edits to
pages. Users would add interwiki prefixes to the table by adding a
parameter to a template that would in turn use a parser function that, upon
the saving of the page, would add the interwiki prefix to the table.
InterwikiMap could be modified to do incremental updates, polling the API
to find out what changes have recently been made to the interwiki table,
rather than getting the whole table each time. It would then be possible
for WikiIndex (or whatever other site were to be used) to be the
wikisphere's central repository of canonical interwiki prefixes. See
http://wikiindex.org/index.php?title=User_talk%3AMarkDilley&diff=172654&oldid=172575#Canonical_interwiki_prefixes.2C_II

But there's been some question as to whether there would be much demand for
a 200,000-prefix interwiki table, or whether it would be desirable. It
could also provide an incentive for spammers to try to add their sites to
WikiIndex. See
https://www.mediawiki.org/wiki/Talk:Canonical_interwiki_prefixes

It's hard to get stuff added to meta-wiki's interwiki map
(https://meta.wikimedia.org/wiki/Interwiki_map) because one of the
criteria is that the prefix has to be one that would be
used a lot on Wikimedia sites. How can we put together a list of non-spammy
sites that wikis would be likely to want to have as prefixes for nofollow
whitelisting purposes, and distribute that list efficiently? I notice that
people are more likely to put together lists of spammy than non-spammy
sites; see e.g. Freakipedia's list
(http://freakipedia.net/index.php5?title=Spam_Site_List).
(Hmm, I think I'll pimp my websites to that wiki when I get a chance; the
fact that the spam isn't just removed but put on permanent record in a
public denunciation means it's a potential opportunity to gain exposure for
my content. They say there's no such thing as bad publicity. ;) )

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte
Distribution of my contributions to this email is hereby authorized
pursuant to the CC0 license (http://creativecommons.org/publicdomain/zero/1.0/).

Re: [Wikitech-l] The interwiki table as a whitelist of non-spammy sites

2013-11-14 Thread Nathan Larson
On Thu, Nov 14, 2013 at 12:18 PM, Quim Gil q...@wikimedia.org wrote:

 With my hat of third party wiki admin I personally agree with all this.

 Another possibility further down in the roadmap:

 Imagine a World in which you could transclude in your wiki content from
 a subset of this interwiki table of wikis, based on license
 compatibility and whatever other filters. To avoid performance problems,
 the check with the sources could be done by a cron periodically, etc.

 The most interesting part of this proposal is to start an interwiki
 table of friendly and compatible wikis willing to ease the task of
 linking and sharing content among them. The attributes of the table and
 a decentralized system to curate and maintain the data could open many
 possibilities of collaboration between MediaWiki sites.


Is there reason to think that a decentralized system would be likely to
evolve, or that it would be optimal? It seems to me that most stuff in the
wikisphere is centered around WMF; e.g. people usually borrow templates,
the spam blacklist, MediaWiki extensions, and so on, from WMF sites. Most
wikis that attempted to duplicate what WMF does have failed to catch on;
e.g. no encyclopedia that tried to copy Wikipedia's approach (e.g.
allegedly neutral point of view and a serious, rather than humorous, style
of writing) came close to Wikipedia's size and popularity, and no wiki
software caught on as much as MediaWiki. It's just usually more efficient
to have a centralized repository and widely-applied standards so that
people aren't duplicating their labor too much.

But if one were to pursue centralization of interwiki data, what would be
the central repository? Would WMF be likely to be interested? Hardly
anything at https://meta.wikimedia.org/wiki/Proposals_for_new_projects has
been approved for creation, so I'm not sure how one would go about getting
something like this established through WMF.

Some advantages of WMF are that we can be pretty confident its projects
will be around for awhile, and none of them are clogged up with the kind of
advertising we see at, say, Wikia. Non-WMF wikis come and go all the time;
one never knows when the owner will get hit by a bus, lose interest, etc.
and then the users are left high and dry. That could be a problem if the
wiki in question is a central repository that thousands of wikis have come
to rely upon.

Perhaps the MediaWiki Foundation could spearhead this? Aside from its
nonexistence, I think that organization could be a pretty good venue for
getting this done. I'll have to bring this up with some of my imaginary
friends who sit on the MWF board of trustees.

Re: [Wikitech-l] Wikipedia Issue

2013-11-14 Thread Nathan Larson
On Thu, Nov 14, 2013 at 2:05 PM, Derric Atzrott 
datzr...@alizeepathology.com wrote:

 The issue seems to be confined to Wikipedia.  I am not seeing it affect
 Mediawiki.org

 Additionally the issue seems to be affecting more than just the English
 language Wikipedia.  The lojban language Wikipedia is also experiencing
 difficulties.

 Thank you,
 Derric Atzrott


I'm getting it at MediaWiki.org intermittently too. Just lost an edit, in
fact (my cache didn't save it for some reason; maybe it expired).

Re: [Wikitech-l] The interwiki table as a whitelist of non-spammy sites

2013-11-16 Thread Nathan Larson
On Thu, Nov 14, 2013 at 1:08 PM, Quim Gil q...@wikimedia.org wrote:

 As mentioned by Mark and quoted in my email, http://wikiapiary.com/
 could be a good starting point.

 Just improvising a hypothetical starting point for a process to maintain
 the decentralized interwiki table:

 In order to become a candidate, a wiki must have the extension installed
 and a quantifiable score based on age, size, license, and lack of
 reports as spammer.

 The extension could perhaps check how much a wiki is linked by how many
 wikis pof which characteristics, calculating a popularity index of
 sorts. Maybe you can even have a classification of topics filtering the
 langage and type of content that matters to your wiki. By default, only
 wikis above some popularity index would be included in your local
 interwiki table. The admins could fine tune locally.

 The master interwiki table could be hosted in Wikiapiary or wherever. It
 would be mirrored in some wahy by the wikis with the extension installed
 willing to do so.

 The maintenance of the table itself doesn't even look like a big deal,
 compared to developing the extension and adding new interwiki features.
 It would be based on the userbase of wiki installing the extension.

 Whether Wikimedia projects join the interwiki party or not, that would
 depend on the extension being ready for Wikimedia adoption and a
 decision to deploy it. But that would be a Wikimedia discussion, not a
 Interwiki project discussion.

 As said, all of the above is improvised and hypothetical. Sorry in
 advance for any planning flaws.  :)


Do we really need to set criteria for a wiki being a candidate? Are we
trying to keep the number of approved interwiki links down? I had in mind
just letting everyone have a prefix and a URL that we would distribute.

Some might get the more preferable prefixes, though; e.g. we would have
some sort of tiebreaker if, say, there were two wikis wanting to call
themselves Foowiki and get the Foowiki: prefix. I guess we should continue
this discussion on WikiApiary. I want to try to get someone (Jamie
Thingelstad?) with authority to install extensions to give the go-ahead on
the specs for the extension; once they're approved, there will be no reason
not to start writing it. I was originally planning to work with WikiIndex
but WikiApiary seems like a better idea. By the way, I like how they note
that they're monitoring over 9,300
(https://encyclopediadramatica.es/Over_9000) wikis.

It might help that WikiApiary runs SMW. Maybe the extension could be
written to interact with it in some way that would be more elegant than
using tag or parser functions to insert metadata into the database à la the
{{setbpdprop: }} of Extension:BedellPenDragon
(https://www.mediawiki.org/wiki/Extension:BedellPenDragon).

Re: [Wikitech-l] The interwiki table as a whitelist of non-spammy sites

2013-11-16 Thread Nathan Larson
On Sat, Nov 16, 2013 at 3:19 AM, Nathan Larson
nathanlarson3...@gmail.com wrote:

 Do we really need to set criteria for a wiki being a candidate? Are we
 trying to keep the number of approved interwiki links down? I had in mind
 just letting everyone have a prefix and a URL that we would distribute.


I meant to say, "approved interwiki prefixes." Anyway, this has been filed
as bug 57131 (https://bugzilla.wikimedia.org/show_bug.cgi?id=57131).

[Wikitech-l] Applying nofollow only to external links added in revisions that are still unpatrolled

2013-11-17 Thread Nathan Larson
Following the mediawiki-l discussion
(http://lists.wikimedia.org/pipermail/mediawiki-l/2013-November/042038.html)
about $wgNoFollowLinks and various other discussions, in which some
discontent was expressed with the current two options of either applying or
not applying nofollow to all external links, I wanted to see what support
there might be for applying nofollow only to external links added in
revisions that are still unpatrolled (bug 42599,
https://bugzilla.wikimedia.org/show_bug.cgi?id=42599).
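
To make the idea concrete, here is a deliberately simplified LocalSettings.php sketch. It is not the per-link behaviour the bug asks for; it merely drops nofollow on pages whose latest revision has been marked patrolled, to show where such logic could hook in, and pages whose latest revision has already aged out of recentchanges simply keep nofollow:

$wgHooks['LinkerMakeExternalLink'][] = function ( &$url, &$text, &$link, &$attribs, $linkType ) {
	global $wgTitle;
	if ( !$wgTitle || !$wgTitle->exists() ) {
		return true;
	}
	// Was the page's latest revision marked as patrolled?
	$dbr = wfGetDB( DB_SLAVE );
	$patrolled = $dbr->selectField(
		'recentchanges',
		'rc_patrolled',
		array(
			'rc_cur_id' => $wgTitle->getArticleID(),
			'rc_this_oldid' => $wgTitle->getLatestRevID(),
		),
		__METHOD__
	);
	if ( $patrolled ) {
		unset( $attribs['rel'] ); // reviewed content: drop nofollow
	}
	return true;
};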

How common do you think it would be for a use case to arise in which one
could be confident that a revision's being patrolled means that the
external links added in that revision have been adequately reviewed for
spamminess? Nemo had mentioned sysadmins would be interested in this only
if their wiki has a strict definition of what's patrollable which matches
the assumptions here. In my experience, spam is pretty easy to spot
because the bots aren't very subtle about it.

I would think that if someone went around marking such obviously spammy
edits as patrolled, that if there were any bureaucrats around who cared
about keeping spam off the wiki, his patrol rights would end up getting
taken away. Spam is a form of vandalism, so it would fall under the duties
of patrollers. At Wikipedia, RecentChanges patrollers are expected to be on
the lookout for spam.
https://en.wikipedia.org/wiki/Wikipedia:Recent_changes_patrol#Spam

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte

Re: [Wikitech-l] Applying nofollow only to external links added in revisions that are still unpatrolled

2013-11-18 Thread Nathan Larson
To aggregate some of the arguments and counter-arguments, I posted
https://www.mediawiki.org/wiki/The_dofollow_FAQ and
https://www.mediawiki.org/wiki/Manual:Costs_and_benefits_of_using_nofollow.
It does seem, from my googling of what the owners of smaller wikis have
to say about it, that nofollow is less popular outside of WMF with many of
those wiki owners who have taken the time to analyze the issue. On the
other hand, it could be that people who were happy with the default felt
less dissatisfied with MediaWiki devs' decision and therefore didn't feel
as much need to voice their opinions, since they had already gotten their
way and didn't have to take any measures to override the default.

I do think the implications of changing how nofollow is applied are very
different on, say, Wikipedia than they would be on a small or even
medium-sized wiki where the average user watches RecentChanges instead of a
watchlist. In a small town, you can leave your doors unlocked and get away
with it because you don't have as much traffic coming through and the
neighbors would notice and care about (for curiosity, if no other reason)
the presence of anyone who seemed out of place. It's the same way on these
small wikis; it's rare than anyone comes along to try to subtly add a spam
link, and when they do, it's noticed. Likewise, if someone starts marking
spammy edits as patrolled, that gets noticed.

Spambots are not yet able to be subtle, and getting accustomed to the norms
of a wiki and becoming fluent enough in the native language to fit in
require skilled labor that is more expensive than simply passing a CAPTCHA.
So, I think that putting dofollow on
patrolled external links would be okay especially on smaller wikis, as the
patrol would stop the spambots from getting a pagerank boost and the labor
costs would deter the subtler ones. Even on Wikipedia, those fighting spam
can take advantage of the same economies of scale as those adding spam,
such as using pattern recognition on the entire wiki to catch people, or
blacklisting individual spammers and taking measures to keep them out (on
the smaller wikis, a person caught spamming can just go to another wiki,
but if you're caught spamming on Wikipedia, there isn't another site of
Wikipedia's size and scope you can go to.)

To say that patrolling wouldn't do enough to keep spam out is basically to
say, at least to some extent, that patrolling is not a very effective
system and that the wiki way doesn't work very well. If Google agrees, they
can stop giving wikis in general, or certain wikis, such influence over
pagerank. The spammers have market incentives to become more sophisticated,
but so does Google, since their earnings depend on keeping their search
results relevant and useful, so that people don't switch to competitors
that do a better job.

The question of what the default configuration should be, or what
configuration should be used on WMF sites, can be addressed in other bugs
besides this one. It doesn't take much coding to change a default setting
from true to false. For now, I would just like to implement the feature
and make it available for those wikis who want to use it. So, is there
support for putting this in the core as an optional feature, and is there
anyone who will do the code review if I write this?

Re: [Wikitech-l] Applying nofollow only to external links added in revisions that are still unpatrolled

2013-11-18 Thread Nathan Larson
On Mon, Nov 18, 2013 at 12:58 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 On 11/18/2013 09:46 AM, Nathan Larson wrote:
  I do think the implications of changing how nofollow is applied are very
  different on, say, Wikipedia than they would be on a small or even
  medium-sized wiki

 As I said, at least for Google there should be no difference as it
 ignores rel=nofollow on MediaWiki-powered sites anyway. See
 https://bugzilla.wikimedia.org/show_bug.cgi?id=52617.

 Gabriel


Do we have any way of knowing that Yong-Gang Wang of Google is correct
about this? I sent a message to this individual
(https://plus.google.com/105349418663822362024/about), hopefully it's the
same guy, asking for more information. It seems like a
pretty major departure from past Google policy/practice.

Re: [Wikitech-l] Can we see mw 1.22 today?

2013-11-29 Thread Nathan Larson
On Fri, Nov 29, 2013 at 9:43 PM, Sen Slybe kik...@gmail.com wrote:

 It's very exciting to upgrade to the new version, so I can finally have a try
 with the visual editor...

 Is this not the right place to ask this kind of question? I'm sorry if that's
 so. I just really like MediaWiki and want to thank you for your hard work.

 Regards
 Sen


Why settle for 1.22 when you can have 1.23alpha? The more people are
successfully encouraged to use it, the more people will encounter and
possibly report any bugs in it, benefiting almost everyone.
https://www.mediawiki.org/wiki/Download_from_git#Download
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Can we see mw 1.22 today?

2013-11-29 Thread Nathan Larson
On Fri, Nov 29, 2013 at 10:53 PM, Sen kik...@gmail.com wrote:

 i wana to,but i dont know any easy way to switch the version,the upgrade
 for me is use the patch,but i mod some system file,so if everytime i need
 to change them,that's seems pretty crazy..


If you're finding you need to hack MediaWiki because it's not extensible or
configurable enough, feel free to submit a bug report
<https://www.mediawiki.org/wiki/How_to_report_a_bug> asking for the new
hooks <https://www.mediawiki.org/wiki/Manual:Hooks> or configuration
settings <https://www.mediawiki.org/wiki/Manual:Configuration_settings> that
should be added. Who knows, it might benefit someone else too. Then
you can submit your modifications as extension code, and others can
collaborate on it with you.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] can i mod the [~~~~~]magic word's data style?

2013-11-29 Thread Nathan Larson
On Fri, Nov 29, 2013 at 11:30 PM, Brian Wolff bawo...@gmail.com wrote:

 On 11/29/13, Sen kik...@gmail.com wrote:
  i know it's depend the mediawiki language,but can i change it by my own?i
  use chinese mediawiki,but i wana the mediawiki signature time format use
  english.
 
  sorry again,if this question should not ask in this maillist,plz let me
  know i should send which maillist...i very know to the maillist,and my
  english is pretty bad too.
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 Hi.

 This question is probably better suited to the mediawiki-l mailing
 list then this one, but in any case here is the answer:


 There's two internationalization changes in play here. First of all,
 month names get translated. This is something that you cannot change
 without modifying core (as far as I know).

 Second the date format changes (for example, where in the date the
 year is put). This one you can control. Just add to the bottom of
 LocalSettings.php:

 $wgDefaultUserOptions['date'] = 'mdy';

 and you will get something like 01:22, 11月 30, 2013 (UTC) instead of
 2013年11月30日 (六) 01:26 (UTC) . As noted the month names are still
 shown in Chinese.

 If you do something like:

 $wgDefaultUserOptions['date'] = 'ISO 8601';

 You will get dates that are all numeric like 2013-11-30T01:28:42
 (UTC). Other options for the date format preference include dmy, ymd.
 By default it has the value default which is equivalent to the value
 zh in the chinese internationalization.

 --Bawolff

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


To implement what he is asking for, I hacked Parser::pstPass2() in
includes/parser/Parser.php as follows:

Change:
 $key = 'timezone-' . strtolower( trim( $tzMsg ) );
to:
 $key = 'signature-timezone-' . strtolower( trim( $tzMsg ) );

Change:
 $d = $wgContLang->timeanddate( $ts, false, false ) . " ($tzMsg)";
to:
 $en = Language::factory( 'en' );
 $d = $en->timeanddate( $ts, false, false ) . " ($tzMsg)";

Next, write the following PHP script and run it:

<?php
$abbreviations = array_keys( DateTimeZone::listAbbreviations() );
foreach ( $abbreviations as $abbreviation ) {
    echo "MediaWiki:Signature-timezone-$abbreviation\n";
}

This will give you the list of MediaWiki: namespace pages you need to
create, e.g. MediaWiki:Signature-timezone-utc,
MediaWiki:Signature-timezone-est, etc. There's probably an easier way,
though.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] can i mod the [~~~~~]magic word's data style?

2013-11-29 Thread Nathan Larson
On Sat, Nov 30, 2013 at 1:09 AM, Brian Wolff bawo...@gmail.com wrote:

 
  There's two internationalization changes in play here. First of all,
  month names get translated. This is something that you cannot change
  without modifying core (as far as I know).
 

 Actually what am I talking about. The obvious solution to this is to just
 create the page MediaWiki:January on your wiki (and the other months),
 editing the month names to whatever you want.

 MediaWiki will use whatever is on the page MediaWiki:January as the word
 for January, which allows you to modify the output without editing
 mediawiki source code.


I was going to do it that way but he said that he only wanted the signature
timestamps to be in English and the rest of the site in Chinese, and I was
concerned that maybe MediaWiki:January might be used for other purposes as
well.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] can i mod the [~~~~~]magic word's data style?

2013-11-29 Thread Nathan Larson
On Sat, Nov 30, 2013 at 1:12 AM, Sen Slybe kik...@gmail.com wrote:

 This's my first time talk at a open source  mail list,I got say this is
 very feel great,I personally use mediawiki to record what I seeing,what I
 think,I really like it,maybe first it's kind hard to learn like about the
 wired wiki text language,special the table syntax ,I still this it's
 suck,to many [|] in it,but after that,I just can't help myself to love
 it,it have a lot expend way to use,I can just add a js gadget (which I had
 add a lot),and if I WANA go far,I can make a PHP extension,it's very easy
 to expend,just never like other web program I have use.
 Sure,the mediawiki may look like past time(old ui like a text editor),not
 fashion,but for me,it's good enough to back the simple easy way.
 But the most magic I just have seen by now,the guy who make mw,are bunch
 stronger guy,but just make it work ,and just better and better:)


Oh good, another testimonial
<https://www.mediawiki.org/wiki/MediaWiki_testimonials>.
Yes, MediaWiki is a beautiful and wonderful product; some people criticize
the codebase (particularly the UI; I have to admit, it's not all that
intuitive, but those who stick around typically learn to, or have learned
to, love it) but I personally find it to be much better-documented (and
more fun to work with and contribute to) than some of the software I had to
help develop in the corporate world. Of course, any endeavor in which you
can choose to pursue your interests rather than being tied down to what the
employer demands tends to be more fun. Welcome to the 20%
<http://www.codinghorror.com/blog/2007/11/the-two-types-of-programmers.html>,
if that's the direction you choose to go in!

The only major downer is the code review backlog, but you can often get
around that by writing as much of your code as possible as extensions and
choosing the direct-push instead of code review option for the repository.
I guess in a way, I'm probably contributing to the problem by not doing
many code reviews, but I haven't contributed enough code to show up very
often in git blame, so people don't add me as a reviewer very often. When
they do add me (seemingly randomly), I often think "This is a complicated
change, I know nothing about this part of the codebase, and it would take
hours/days to ramp up my level of knowledge to the point where I'd be able
to make an informed comment or decision", so I remove myself.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Please use sitenotice when a new version of software is deployed

2013-12-05 Thread Nathan Larson
On Thu, Dec 5, 2013 at 5:00 PM, Dan Garry dga...@wikimedia.org wrote:

 How about getting this stuff included in the Signpost? I think that's a
 good medium for it.

 There used to be a Technology report but I've not seen it for a while...

 Dan


BRION, they called it:
https://en.wikipedia.org/wiki/Wikipedia:Bugs,_Repairs,_and_Internal_Operational_News
I think they should bring it back.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FWD: [Bug 58236] New: No longer allow gadgets to be turned on by default for all users on Wikimedia sites

2013-12-11 Thread Nathan Larson
On Wed, Dec 11, 2013 at 2:38 PM, Tyler Romeo tylerro...@gmail.com wrote:

 I can definitely understand the reasoning behind this. Right now with both
 Gadgets and common.js we are allowing non-reviewed code to be injected
 directly into every page. While there is a bit of trust to be had
 considering only administrators can edit those pages, it is still a
 security risk, and an unnecessary one at that.

 I like the idea of having gadgets (and any JS code for that matter) going
 through Gerrit for code review. The one issue is the question of where
 would Gadget code go? Would each gadget have its own code repository? Maybe
 we'd have just one repository for all gadgets as well as common.js
 (something like operations/common.js)? I don't think sending wiki edits to
 Gerrit is too feasible a solution, so if this were implemented it'd have to
 be entirely Gerrit-based.


Could FlaggedRevs, perhaps with some modifications, be used to implement a
review process?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FWD: [Bug 58236] New: No longer allow gadgets to be turned on by default for all users on Wikimedia sites

2013-12-11 Thread Nathan Larson
On Wed, Dec 11, 2013 at 3:30 PM, Tyler Romeo tylerro...@gmail.com wrote:

 In this case we should promptly work to fix this issue. To be honest, the
 only difficult part of our code review process is having to learn Git if
 you do not already know how to use it. If there were a way to submit
 patchsets without using Git somehow (maybe some kind of desktop
 application), it may make things easier.


I agree, it would make life a lot easier! Something like TortoiseSVN, but
for Git, then?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mailing list etiquette and trolling

2013-12-11 Thread Nathan Larson
On Thu, Dec 12, 2013 at 12:03 AM, Rob Lanphier ro...@wikimedia.org wrote:

 Or perhaps he merely suggested something that you disagreed with (or didn't
 understand), without losing [his] mind or being a troll?

 I'm a little skeptical about Jeroen's GitHub suggestion, but it seems like
 something reasonable people can disagree about.  Could we not accuse Jeroen
 or anyone else of being a troll or losing his/her mind for floating an
 honest proposal?


From my experience on the Internet, I suspect that most of the times when
people are accused of trolling, they are actually serious; and most of the
times when people are trolling, they're taken seriously and no one
expresses any suspicion that they were trolling.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FW: Mass file undeletion tools?

2013-12-30 Thread Nathan Larson
On Mon, Dec 30, 2013 at 9:48 AM, Tuszynski, Jaroslaw W. 
jaroslaw.w.tuszyn...@leidos.com wrote:

 Lately someone deleted 70 pages from the middle of several thousand page
 Russian encyclopedia used by Wikisource. Everything was done by the
 book: since  the uploader forgot to add a license and did not fixed the
 issue when notified. However deletion was done without checking the
 licenses in the other pages from the book. See
 https://commons.wikimedia.org/wiki/User_talk:Lozman#Copyright_status:_Fi
 le:.D0.A0.D1.83.D1.81.D1.81.D0.BA.D0.B8.D0.B9_.D1.8D.D0.BD.D1.86.D0.B8.D
 0.BA.D0.BB.D0.BE.D0.BF.D0.B5.D0.B4.D0.B8.D1.87.D0.B5.D1.81.D0.BA.D0.B8.D
 0.B9_.D1.81.D0.BB.D0.BE.D0.B2.D0.B0.D1.80.D1.8C_.D0.91.D0.B5.D1.80.D0.B5
 .D0.B7.D0.B8.D0.BD.D0.B0_4.2_001.jpg
 https://commons.wikimedia.org/wiki/User_talk:Lozman#Copyright_status:_F
 ile:.D0.A0.D1.83.D1.81.D1.81.D0.BA.D0.B8.D0.B9_.D1.8D.D0.BD.D1.86.D0.B8.
 D0.BA.D0.BB.D0.BE.D0.BF.D0.B5.D0.B4.D0.B8.D1.87.D0.B5.D1.81.D0.BA.D0.B8.
 D0.B9_.D1.81.D0.BB.D0.BE.D0.B2.D0.B0.D1.80.D1.8C_.D0.91.D0.B5.D1.80.D0.B
 5.D0.B7.D0.B8.D0.BD.  .


https://www.mediawiki.org/wiki/Extension:UndeleteBatch should work.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Captcha filter list

2014-01-01 Thread Nathan Larson
On Wed, Jan 1, 2014 at 2:31 AM, Benjamin Lees emufarm...@gmail.com wrote:

 I checked out the registration form for a white supremacist forum, and they
 just use reCAPTCHA.  No doubt they'll be developing a CAPTJCA or CAPTMCA
 soon enough.

 There are likely a number of strings that should be added to the default
 blacklist.  Patches welcome!


Yes, sites that want to screen out users with certain political, religious,
etc. sensibilities could, with some small tweaks to the code, create a
ConfirmEdit whitelist that causes *all* CAPTCHAs to contain words
calculated to offend those groups. Actually, with QuestyCaptcha, this is
already possible. One could include in $wgCaptchaQuestions answers that
require the user to type in strings denigrating certain groups or deities;
swearing allegiance to certain causes or entities; or stating that one has
certain stigmatized attractions, preferences or desires or engages in
certain stigmatized behaviors. One could also use it to screen for a
certain level of intelligence, knowledge, etc. in a given field by asking
questions only certain people would be able to answer. The possibilities
are endless once one's main priority becomes to repel or screen out, rather
than attract, most potential users.
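
For anyone who hasn't set it up before, QuestyCaptcha's per-question
configuration lives in LocalSettings.php and looks roughly like this (a
benign placeholder question; paths assume the current ConfirmEdit layout):

require_once "$IP/extensions/ConfirmEdit/ConfirmEdit.php";
require_once "$IP/extensions/ConfirmEdit/QuestyCaptcha.php";
$wgCaptchaClass = 'QuestyCaptcha';

// Each entry pairs a question with an acceptable answer (or array of answers).
$wgCaptchaQuestions[] = array(
    'question' => 'What is the name of this wiki?', // placeholder
    'answer'   => 'Example Wiki',
);

It is precisely because the questions and answers are free-form strings that
the screening described above takes no code changes at all.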
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Captcha filter list

2014-01-04 Thread Nathan Larson
On Wed, Jan 1, 2014 at 2:31 AM, Benjamin Lees emufarm...@gmail.com wrote:

 On Wed, Jan 1, 2014 at 2:00 AM, Tim Landscheidt t...@tim-landscheidt.de
 wrote:
 I checked out the registration form for a white supremacist forum, and they
 just use reCAPTCHA.  No doubt they'll be developing a CAPTJCA or CAPTMCA
 soon enough.


 Conversely, one could use an empathy CAPTCHA to try to screen out that
type of crowd (although it might merely screen out the politically
incorrect). http://www.wired.com/threatlevel/2012/10/empathy-captcha/
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Inclupedia: Developers wanted

2014-01-13 Thread Nathan Larson
TLDR: I seek fellow developers with whom to collaborate on creating the
largest and most inclusive wiki in the world, Inclupedia.
http://meta.inclumedia.org/wiki/Inclupedia

Inclupedia is a project to make available, on an OpenEdit
<http://wikiindex.org/Category:OpenEdit> wiki, pages deleted from Wikipedia
for notability reasons, as well as articles created from scratch on
Inclupedia. Thus, it will combine elements of Deletionpedia
<https://en.wikipedia.org/wiki/Deletionpedia> and Wikinfo
<http://wikiindex.org/Wikinfo> as well as the various Wikipedia mirrors
<https://en.wikipedia.org/wiki/Wikipedia:Mirrors_and_forks>. It
will be, in other words, a supplement to, and an up-to-date mirror of,
Wikipedia content.

Inclupedia seeks to accomplish the entire inclusionist
<https://en.wikipedia.org/wiki/Deletionism_and_inclusionism_in_Wikipedia>
agenda through technical means, rather than through a political solution
that would require persuading deletionists and moderates to change their
wiki-philosophies. Inclupedia will have no notability requirements for
articles, and will let people post (almost) whatever they want in
userspace. It will also, however, learn from the failures of the various
Wikipedia forks that could not sustain much activity because they had no
way of becoming a comprehensive encyclopedia without duplicating
Wikipedians' labor.

Complete, seamless, and continuous integration with Wikipedia is required.
That is what will enable Inclupedia to be different from those ill-fated
aspirants to the throne whose abandoned, rotting carcasses now litter the
wikisphere. Inclupedia aspires to be the largest and most inclusive wiki in
the world, since the set of Inclupedia pages (and revisions) will always be
a superset https://en.wiktionary.org/wiki/superset of the set of
Wikipedia pages (and revisions).

People sometimes ask, "Hasn't this already been done?" It would seem that
it hasn't, which is why so much of the implementing code has to be designed
and developed rather than borrowed or reverse-engineered. In some ways, the
closest projects to this one may have been the various proprietary
sites that used the Wikimedia update feed service
<https://meta.wikimedia.org/wiki/Wikimedia_update_feed_service> to
stay continuously up-to-date with Wikipedia, but to my knowledge none of
them used MediaWiki as their engine, and their inner workings are a
mystery. Those also tended to be read-only rather than mass collaborative
sites.

The core of what needs to be done is (1) developing a bot(s) to pull
post-dump data from Wikipedia and push it to Inclupedia, (2) developing
capability to merge received data into Inclupedia without losing any data
or suffering inconsistencies, e.g. resulting from collisions with existing
content (as might happen, e.g. in mirrored page moves involving
destinations that already exist on Inclupedia), and (3) developing all the
other capabilities involved in running a site that's both a mirror and a
supplement, e.g. locking mirrored pages from editing by Inclupedians
(unless there will be forking/overriding capability).
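
To give a flavor of part (1), a bare-bones poll of Wikipedia's API might
start out roughly like this (the query parameters are the standard
recentchanges ones; the bot name, sync point, and everything around the loop
are made up for illustration):

<?php
// Rough sketch of step (1): ask Wikipedia's API for changes since the last
// sync point and queue them for import. Sync-point persistence, paging
// ('rccontinue'), and the actual merge into Inclupedia are not shown.
$since = '2014-01-13T00:00:00Z'; // would come from Inclupedia's own state

$url = 'https://en.wikipedia.org/w/api.php?action=query&list=recentchanges'
     . '&rcprop=title|ids|timestamp|user&rclimit=500&rcdir=newer'
     . '&rcstart=' . urlencode( $since ) . '&format=json';

// Wikimedia asks API clients to identify themselves with a User-Agent.
$context = stream_context_create( array( 'http' => array(
    'user_agent' => 'InclupediaSyncBot/0.1 (prototype)',
) ) );

$data = json_decode( file_get_contents( $url, false, $context ), true );
foreach ( $data['query']['recentchanges'] as $rc ) {
    // A real bot would enqueue the revision here for fetching and merging.
    echo "{$rc['timestamp']}  {$rc['title']} (rev {$rc['revid']})\n";
}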

I can't exactly post a bug to MediaZilla saying "Create Inclupedia" and
then have a bunch of different bugs it depends on, because non-WMF projects
are beyond the scope of MediaZilla. Some bugs (e.g. bug 59618
<https://bugzilla.wikimedia.org/show_bug.cgi?id=59618>)
concerning Inclupedia-reliant functionality are already in MediaZilla, but
there will inevitably arise completely Inclupedia-specific matters that
need to be dealt with in a different venue. Presumably, it'll be necessary
to create a whole new infrastructure of bug reporting, mailing lists, IRC
channels, etc. But, I want to get it right from the beginning, since this
is an opportunity to start from scratch (e.g. maybe there is a better code
review tool than Gerrit?) I have created Meta-Inclu as a venue for project
coordination.

Mostly, I would like help with design decisions, code review, etc. It's
such a big project, it seems almost overwhelming to contemplate doing
singlehandedly, but it's probably doable if there are a few people involved
who can bounce ideas off one another, provide moral support, etc. So, if
you are interested, feel free to email back or create an account at
Meta-Inclu, and we can begin discussing the details of implementation.
http://meta.inclumedia.org/

If there were to be insufficient volunteer support for implementing this
wiki, then the next step might be to try to get funding to pay developers.
There's no guarantee that such funding would be obtainable, though, or that
it wouldn't come with significant strings attached, that would conflict
with the basic principles and vision of the site, or lead to a lot of (what
I might consider) undesirable technical decisions being made. But we do
what we have to do to make what we are passionate about a reality, to the
extent that's possible given the resources at hand. Thanks,

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte

Re: [Wikitech-l] Inclupedia: Developers wanted

2014-01-13 Thread Nathan Larson
On Mon, Jan 13, 2014 at 2:27 PM, Brian Wolff bawo...@gmail.com wrote:

 Id say that http://getwiki.net/-GetWiki:1.0 was similar to your superset
 concept (minus the merging part)


Yeah, per WikiIndex <http://wikiindex.org/GetWiki>: "Instead of red links,
GetWiki uses green links to point to articles which do not exist locally.
When the user follows such a link, GetWiki tries to dynamically fetch it
from the wiki designated as an external source (in Wikinfo's case, the
English Wikipedia), renders and displays the article text. A local copy is
created only if the page is edited. Effectively, Wikinfo therefore provides
a transparent 'wrapper' around Wikipedia pages which have not yet been
copied."

That sounds pretty easy to implement, compared to what is contemplated for
Inclupedia. If a page had been deleted from Wikipedia, presumably it
wouldn't have been accessible on Wikinfo unless someone had created a fork
of that page on Wikinfo prior to its deletion from Wikipedia. That is a
major piece of Inclupedia (and Deletionpedia) functionality that it lacked.

Also, a high-traffic live mirror would be contrary to live mirrors policy
<https://meta.wikimedia.org/wiki/Live_mirrors>, as the
load on WMF servers would increase at the same rate as the
wiki's readership. A site that only polled WMF servers for changes, and
stored local copies of those changes, would not have that problem. To the
contrary, it might take load off of WMF servers, if some readers were to
retrieve mirrored Wikipedia pages from Inclupedia that they would have
otherwise retrieved from Wikipedia.

The approach used by GetWiki of combining a Wikipedia mirror with locally
stored forks is probably more suitable for either (1) a general
encyclopedia with a non-neutral viewpoint (e.g. a Sympathetic Point of
View), or (2) a site that wishes to have a narrower focus than that of a
general encyclopedia. E.g., if you ran a site like Conservapedia
(conservative bias) or Tampa Bay Wiki (narrow focus), while you were
building up the content, you might want to be able to wikilink to articles
non-existent on your local wiki, such as Florida, and dynamically pull
content from Wikipedia for users who click on those links. In the case of
Conservapedia, Wikipedia's Florida content might be considered better
than nothing, pending the creation of a forked version of that content that
would be biased conservatively. In the case of Tampa Bay Wiki, the content
of the Florida article might be sufficient for their purposes, so they
could just keep serving the mirrored content from Wikipedia forever.

If a site were to aspire to be a general encyclopedia with a neutral point
of view, it would be better to discourage or disable, as much as possible,
forking of Wikipedia's articles. Once an article is forked, it will require
duplication of Wikipedians' labor in order to keep the content as
well-written, comprehensive, well-researched, and in general as
high-quality and up-to-date as Wikipedia's coverage of the subject. It
would be better to instead mirror those articles, and have users go to
Wikipedia if they want to edit them. If users edit articles that have been
deleted from Wikipedia, on the other hand, there is no forking going on,
and therefore no duplication of labor. Unnecessary duplication of
Wikipedians' labor tends to demoralize and distract users from building up
complementary content; therefore, NPOV general encyclopedia community
builders should consider it anathema.

GetWiki and Wikinfo don't appear to have prospered. It would seem
<http://wikiindex.org/Wikinfo> that GetWiki was created in 2004 and
that Wikinfo abandoned it in March 2007 and switched to MediaWiki. According
to Wikipedia
<https://en.wikipedia.org/wiki/History_of_wikis#Other_wiki_websites.2C_2001.E2.80.932003>,
"In 2013, the content was removed without explanation." I'm glad I didn't
devote too much labor to creating content on Wikinfo. GetWiki.net itself has
had no activity since 2012 <http://getwiki.net/-changes>.

The main problem with an NPOV general encyclopedia's forking Wikipedia is
that, for the purposes of most people, there's not a very compelling reason
to do it. They deem the quality of the product Wikipedia offers, within the
purview of its coverage (viz. NPOV notable topics) to be good enough that
it's not worthwhile to create a fork. Wikipedia's shortcoming, from the
standpoint of inclusionists, is not so much insufficient quality of
articles, as insufficient quantity of topics covered. Forking is more
suitable for those who find the quality insufficient to meet their
particular needs.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Inclupedia: Developers wanted

2014-01-14 Thread Nathan Larson
On Tue, Jan 14, 2014 at 6:22 PM, Matthew Flaschen
mflasc...@wikimedia.orgwrote:

 That's not the case.  There are components for software the WMF does not
 use.  This ranges from major projects like Semantic MW to one-off
 extensions that WMF does not have a use for (e.g. Absentee Landlord) to
 tools that work *with* MW but are not part of it (e.g. Tools Labs tools,
 Pywikibot).


I guess it would depend on making the scope of the bug broad enough that it
would seem useful for more than just one site. E.g. one could put
"implement the functionality needed for Inclupedia". That functionality
could be reused for any number of sites, just like SMW's code. On the other
hand, if someone were to say "Switch configuration setting x to true on
Inclupedia", that would be of little interest to non-Inclupedia users, I
would think. I assume that Bugzilla is not intended as the place for all
technical requests for the entire wikisphere?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Revamping interwiki prefixes

2014-01-16 Thread Nathan Larson
On Thu, Jan 16, 2014 at 7:35 PM, This, that and the other 
at.li...@live.com.au wrote:

 I can't say I care about people reading through the interwiki list. It's
 just that with the one interwiki map, we are projecting our internal
 interwikis, like strategy:, foundation:, sulutil:, wmch: onto external
 MediaWiki installations.  No-one needs these prefixes except WMF wikis, and
 having these in the global map makes MediaWiki look too WMF-centric.


It's a WMF-centric wikisphere, though. Even the name of the software
reflects its connection to Wikimedia. If we're going to have a
super-inclusive interwiki list, then most of those Wikimedia interwikis
will fit right in, because they meet the criteria of having non-spammy
recent changes and significant content in AllPages. If you're saying that
having them around makes MediaWiki look too WMF-centric, it sounds like
you are concerned about people reading through the interwiki list and
getting a certain impression, because how else would they even know about
the presence of those interwiki prefixes in the global map?


 I don't see the need for instruction creep here.  I'm for an inclusive
 interwiki map.  Inactive wikis (e.g. RecentChanges shows only sporadic
 non-spam edits) and non-established wikis (e.g. AllPages shows little
 content) should be excluded.  So far, there have been no issues with using
 subjective criteria at meta:Talk:Interwiki map.


I dunno about that. We have urbandict: but not dramatica:, both of which are
unreliable sources, but likely to be used on third-party wikis (at least
the ones I edit). We have wikichristian: (~4,000 content pages
<http://www.wikichristian.org/index.php?title=Special:Statistics>) but not
rationalwiki: (~6,000 content pages
<http://rationalwiki.org/wiki/Special:Statistics>). The latter was rejected
<https://meta.wikimedia.org/w/index.php?title=Talk%3AInterwiki_map&diff=4573672&oldid=4572621>
a while ago. Application of the subjective criteria seems to be hit-or-miss.

If we're going to have a hyper-inclusionist system of canonical interwiki
prefixes https://www.mediawiki.org/wiki/Canonical_interwiki_prefixes, we
might want to use WikiApiary and/or WikiIndex rather than MediaWiki.org as
the venue. These wikis that already have a page for every wiki could add
another field for interwiki prefix to those templates and manage the
interwiki prefixes by editing pages. Thingles said
<https://wikiapiary.com/w/index.php?title=User_talk%3AThingles&diff=409395&oldid=408940>
he'd be interested in WikiApiary's getting involved. The only downside is
that WikiApiary doesn't have non-MediaWiki wikis. It sounded
<http://wikiindex.org/index.php?title=User_talk:Leucosticte&diff=prev&oldid=144256>
as though Mark Dilley might be interested in WikiIndex's playing some role
in this too. But even WikiIndex has the problem of only containing wikis;
the table will have to have other websites as well.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Revamping interwiki prefixes

2014-01-16 Thread Nathan Larson
On Thu, Jan 16, 2014 at 10:56 PM, Tim Starling tstarl...@wikimedia.orgwrote:

 I think the interwiki map should be retired. I think broken links
 should be removed from it, and no new wikis should be added.

 Interwiki prefixes, local namespaces and article titles containing a
 plain colon intractably conflict. Every time you add a new interwiki
 prefix, main namespace articles which had that prefix in their title
 become inaccessible and need to be recovered with a maintenance script.

 There is a very good, standardised system for linking to arbitrary
 remote wikis -- URLs. URLs have the advantage of not sharing a
 namespace with local article titles.

 Even the introduction of new WMF-to-WMF interwiki prefixes has caused
 the breakage of large numbers of article titles. I can see that is
 convenient, but I think it should be replaced even in that use case.
 UI convenience, link styling and rel=nofollow can be dealt with in
 other ways.


These are some good points. I've run into a problem many times when
importing pages (e.g. templates and/or their documentation) from Wikipedia,
that pages like [[Wikipedia:Signatures]] become interwiki links to
Wikipedia mainspace rather than redlinks. Also, usually I end up accessing
interwiki prefixes through templates like Template:w
<https://meta.wikimedia.org/wiki/Template:W> anyway. It would
be a simple matter to make those templates generate URLs
rather than interwiki links. The only other way to prevent these conflicts
from happening would be to use a different delimiter besides a single
colon; but what would that replacement be?

Before retiring the interwiki map, we could run a bot to edit all the pages
that use interwiki links, and convert the interwiki links to template uses.
A template would have the same advantage as an interwiki link in making it
easy to change the URLs if the site were to switch domains or change its
URL scheme.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Revamping interwiki prefixes

2014-01-17 Thread Nathan Larson
On Fri, Jan 17, 2014 at 9:00 AM, go moko gom...@yahoo.com wrote:

 What if filling the interwiki table with predefined links was an
 installation option, possibly with several lists, and void?


Probably won't (and shouldn't) happen, since we're trying to keep the
installer options close to the bare minimum. Changing or clearing the
interwiki table is pretty easy with the right script or special page. The
fact that, when the installer's interwiki list was changed in 2013, it was
accurate to say "This list has obviously not been properly updated for many
years. There are many long-dead sites that are removed in this patch"
suggests that third-party wikis are pretty good at ignoring interwiki links
they don't want or need.

I disagree that collisions are very rare, and none of the alternatives
seem viable or practical. Collisions (or whatever one would call them)
happen fairly often, and the resulting linking errors
<https://en.wikiquote.org/w/index.php?title=User%3ALeucosticte&diff=1503289&oldid=1503284>
can be hard to notice because one sees the link is blue and assumes it's
going where one wanted it to.

It wouldn't be such a problem if wikis would name their project namespace
Project: rather than the name of the wiki. Having it named Project: would
be useful when people are importing user or project pages from Wikipedia
(e.g. if they wanted to import userboxes or policy pages) and don't want
the Wikipedia: links to become interwiki links. I would be in favor of
renaming the project namespaces to Project: on Wikimedia wikis; that's how
it is on MediaWiki.org (to avoid a collision with the MediaWiki: namespace)
and it seems to work out okay. I'll probably start setting up my third
party wikis that way too, because I've run into similar problems when
exporting and importing content among them. Perhaps the installer should
warn that it's not recommended to name the meta namespace after the site
name.

Tim's proposal seems pretty elegant but in a few situations will make links
uglier or hide where they point to. E.g. "See also" sections with interwiki
links (like what you see here
<https://en.wikipedia.org/wiki/Help:Interwiki_linking#See_also>)
could become like the "Further reading" section you see here
<https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines#Further_reading>,
in which one has to either put bare links or make people hover over the link
to see the URL it goes to.

Interwiki page existence detection probably wouldn't be any more difficult
to implement in the absence of interwiki prefixes. We could still have an
interwiki table, but page existence detection would be triggered by certain
URLs rather than prefixes being used. I'm not sure how interwiki
transclusion would work if we didn't have interwikis; we'd have to come up
with some other way of specifying which wiki we're transcluding from,
unless we're going to use URLs for that too.

In short, I think the key is to come up with something that doesn't break
silently when there's a conflict between an interwiki prefix and namespace.
For that purpose, it would suffice to keep interwiki linking and come up
with a new delimiter. But changing the name of the Project: namespace would
work just as well. Migration of links could work analogously to what's
described in bug 60135 <https://bugzilla.wikimedia.org/show_bug.cgi?id=60135>.

TTO, you were saying "I'm not getting a coherent sense of a direction to
take" -- that could be a good thing at this point in the discussion; it
could mean people are still keeping an open mind and wanting to hear more
thoughts and ideas rather than reaching too hasty a conclusion. But I
guess it is helpful, when conversations fall silent, for someone to push
for action by asking, "...so, in light of all that, what do you want to
do?" :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Revamping interwiki prefixes

2014-01-17 Thread Nathan Larson
I forgot to mention, another problem is that you can't even import
Wikipedia: namespace pages to your wiki without changing your interwiki
table to get rid of the wikipedia: interwiki prefix first. The importer
will say, Page 'Wikipedia:Sandbox' is not imported because its name is
reserved for external linking (interwiki). As I
notedhttps://bugzilla.wikimedia.org/show_bug.cgi?id=60168#c2in bug
60168, it's unclear why we would standardize the other namespace
names (Help:, Template:, MediaWiki:, User:, etc.) from one wiki to the
next, but not Project:
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor exemption process (was: Re: Jake requests enabling access and edit access to Wikipedia via TOR)

2014-01-17 Thread Nathan Larson
On Fri, Jan 17, 2014 at 1:21 PM, Erik Moeller e...@wikimedia.org wrote:

 I tested the existing process by creating a new riseup.net email
 account via Tor, then requesting account creation and a global
 exemption via stewa...@wikimedia.org. My account creation request was
 granted, but for exemption purposes, I was requested to go through the
 process for any specific wiki I want to edit. In fact, the account was
 created on Meta, but not exempted there.


Thanks for taking the initiative to check that out. Now maybe the stewards
will be paranoid that any further Tor requests might be from you, and act
accordingly. It kinda reminds me of the Rosenhan Experiment
<https://en.wikipedia.org/wiki/Rosenhan_experiment>.
Just the known possibility that there might be a mystery customer keeps the
service providers on their toes, and they are much more likely to mistake
an ordinary customer for the mystery customer than vice versa, as
demonstrated by the non-existent impostor experiment
<https://en.wikipedia.org/wiki/Rosenhan_experiment#The_non-existent_impostor_experiment>.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor exemption process (was: Re: Jake requests enabling access and edit access to Wikipedia via TOR)

2014-01-17 Thread Nathan Larson
On Fri, Jan 17, 2014 at 6:33 PM, Erik Moeller e...@wikimedia.org wrote:

 It would allow a motivated person to reset their identity and go
 undetected provided they avoid the kind of articles and behaviors they
 got in trouble over in the first place. It's not clear to me that the
 consequences would be particularly severe or unmanageable beyond that.


People also get banned from Wikimedia projects for off-wiki conduct that has
nothing to do with Wikimedia projects.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Non-complicated way to display configuration for a given wiki?

2014-01-24 Thread Nathan Larson
On Fri, Jan 24, 2014 at 9:35 AM, Tim Landscheidt t...@tim-landscheidt.dewrote:

 Hi,

 to test changes to operations/mediawiki-config, I'd like to
 display the settings for a given wiki.

 So far I figured out:

 | ?php

 | $IP = 'wmf-config';
 | $cluster = 'pmtpa';

 | require ('/home/tim/public_html/w/includes/SiteConfiguration.php');
 | require ('wmf-config/wgConf.php');

 | var_dump ($wgConf-getAll ([...]));

 However, all my creativity regarding parameters to getAll()
 only returned an empty array.

 What would be the parameters needed for example to get the
 settings for de.wikipedia.org?


Are you saying you want to display the settings for your own wiki or
someone else's wiki (whose backend you can't access)? If it's your own
wiki, you can use ViewFiles
<https://www.mediawiki.org/wiki/Extension:ViewFiles> to let
people access a special page that will display a configuration file.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor exemption process (was: Re: Jake requests enabling access and edit access to Wikipedia via TOR)

2014-02-01 Thread Nathan Larson
On Sat, Feb 1, 2014 at 2:20 PM, Tyler Romeo tylerro...@gmail.com wrote:

 Your scenario is based on the premise that Wikipedia vandals care enough
 about vandalizing Wikipedia that they would get in their car (assuming
 they're old enough to have a license), drive to the nearest Starbucks,
 vandalize Wikipedia, and then drive somewhere else when they are blocked.


Also, a lot of wifi hotspots, such as those at restaurants, have already had
their Wikipedia editing access blocked.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Let's improve our password policy

2014-02-05 Thread Nathan Larson
On Wed, Feb 5, 2014 at 2:58 AM, Tyler Romeo tylerro...@gmail.com wrote:

 For example, MZMcBride, what if your password is wiki, and somebody
 compromises your account, and changes your password and email. You don't
 have a committed identity, so your account is now unrecoverable. You now
 have to sign up for Wikipedia again, using the username MZMcBride2. Of
 course, all your previous edits are still accredited to your previous
 account, and there's no way we can confirm you are the real MZMcBride, but
 at least you can continue to edit Wikipedia... Obviously you are not the
 best example, since I'm sure you have ways of confirming your identity to
 the Wikimedia Foundation, but not everybody is like that. You could argue
 that if you consider your Wikipedia account to have that much value, you'd
 put in the effort to make sure it is secure. To that I say see the above
 paragraph.


What if all of the email addresses that a user has ever used were to be
stored permanently? Then in the event of an account hijacking, he could say
to WMF, "As your data will confirm, the original email address for user Foo
was f...@example.com, and I am emailing you from that account, so either my
email account got compromised, or I am the person who first set an email
address for user Foo." The email services have their own procedures for
sorting out situations in which people claim their email accounts were
hijacked.
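
A rough sketch of how that record-keeping could be bolted on (the
user_email_log table is made up; PrefsEmailAudit is, as far as I recall, the
hook core runs when an address changes):

// Append every address change to a hypothetical log table so the full
// history survives a later hijacking. Schema creation is not shown.
$wgHooks['PrefsEmailAudit'][] = function ( $user, $oldaddr, $newaddr ) {
    $dbw = wfGetDB( DB_MASTER );
    $dbw->insert( 'user_email_log', array(
        'uel_user'      => $user->getId(),
        'uel_old_email' => $oldaddr,
        'uel_new_email' => $newaddr,
        'uel_timestamp' => $dbw->timestamp(),
    ), __METHOD__ );
    return true;
};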
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Let's improve our password policy

2014-02-06 Thread Nathan Larson
On Thu, Feb 6, 2014 at 9:58 AM, Chris Steipp cste...@wikimedia.org wrote:

 1) As I understand it, the reason we went from 0 to 1 character required is
 spammers were actively trying to find accounts with no password so they
 could edit with an autoconfirmed account. We rely on number of
 combinations of minimum passwords to be greater than number of tries
 before an IP must also solve captcha to login to mitigate some of this,
 but I think there are straightforward ways for a spammer to get accounts
 with our current setup. And I think increasing the minimum password length
 is one component.

 2) We do have a duty to protect our user's accounts with a reasonable
 amount of effort/cost proportional to the weight we put on those
 identities. I think we would be in a very difficult spot if the foundation
 tried to take legal action against someone for the actions they took with
 their user account, and the user said, That wasn't me, my account probably
 got hacked. And it's not my fault, because I did the minimum you asked me.
 So I think we at least want to be roughly in line with industry standard,
 or have a calculated tradeoff against that, which is roughly 6-8 character
 passwords with no complexity requirements. I personally think the
 foundation and community _does_ put quite a lot of weight into user's
 identities (most disputes and voting processes that I've seen have some
 component that assume edits by an account were done by a single person), so
 I think we do have a responsibility to set the bar at a level appropriate
 to that, assuming that all users will do the minimum that we ask. Whether
 it's 4 or 6 characters for us I think is debatable, but I think 1 is not
 reasonable.


1) Merely increasing the length could increase required keystrokes without
making it more secure. A couple of comments from the meeting
<https://www.mediawiki.org/wiki/Architecture_meetings/RFC_review_2014-02-05#Full_log>:
brion  ain't secure
TimStarling "password" isn't secure either, and that's 8

It seems to me that a pretty secure approach would be to have the system
give the user his 8-12 character password, rather than letting him pick a
password. Then we can be assured that he's not doing stuff like "p@ssword"
to meet the complexity requirements.
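
A quick sketch of what I mean by having the system pick the password (plain
PHP; a real implementation should draw from a cryptographically secure
source, e.g. MWCryptRand, rather than mt_rand()):

// Hand the user a random 10-character password drawn from an alphabet that
// avoids look-alike characters (0/O, 1/l/I). Illustration only.
function generateAssignedPassword( $length = 10 ) {
    $alphabet = 'abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ23456789';
    $max = strlen( $alphabet ) - 1;
    $password = '';
    for ( $i = 0; $i < $length; $i++ ) {
        $password .= $alphabet[mt_rand( 0, $max )];
    }
    return $password;
}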

2) How plausible is this scenario you mention, involving legal action?
Has the WMF ever taken, or would it ever take, legal action against someone
for actions taken with their user account? Why would that happen, when any damage done
by a non-checkuser can generally be reverted/deleted/etc.? What would be
the remedy; trying to get money out of the person? It probably wouldn't
amount to much.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC discussion Friday 14 Feb: TitleValue

2014-02-11 Thread Nathan Larson
On Mon, Feb 10, 2014 at 11:23 PM, Sumana Harihareswara 
suma...@wikimedia.org wrote:

 The discussion will be in #wikimedia-office at 13:00 UTC on Friday the
 14th:

 http://www.timeanddate.com/worldclock/fixedtime.html?msg=RFC+reviewiso=20140214T14p1=37ah=1


Cool, now I have something to tell people when they ask what I'll be doing
for Valentine's Day.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Change the installer to make Project: the default option for meta namespace name

2014-05-03 Thread Nathan Larson
I propose that we change the installer to make Project:, rather than the
wiki name, the default option for the meta namespace name ($wgMetaNamespace
<https://www.mediawiki.org/wiki/Manual:$wgMetaNamespace>, i.e. namespace 4
<https://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces>).
See bug 60168 <https://bugzilla.wikimedia.org/show_bug.cgi?id=60168>. Here
is an image of what I have in mind that the installer should show:
https://upload.wikimedia.org/wikipedia/mediawiki/b/b9/ProjectDefaultNamespace.png

Rationales:
1) Standardizing the name makes life easier when people copy content from
one wiki to another. E.g., if you import a template that links to
Project:Policy rather than BarWiki:Policy, you don't need to edit the
imported template, because it already links where it should.

2) Having the meta namespace name be the same as the wiki name causes
collisions with interwiki prefixes when content is copied from one wiki to
another. E.g., suppose FooWiki has an interwiki prefix linking to BarWiki
(barwiki:) and then a user imports a template from BarWiki that links to
BarWiki:Policy. That will now link interwiki to BarWiki's mainspace page
Policy rather than to Project:Policy. It is not even possible to import
BarWiki:Policy until the interwiki barwiki: prefix is first removed; the
importer will say Page 'BarWiki:Policy' is not imported because its name
is reserved for external linking (interwiki).

3) If you ever change the name of the wiki, you then have to change all the
wikilinks to the FooWiki: namespace; if it were Project:, they would stay the
same. True, you can use namespace aliases
<https://www.mediawiki.org/wiki/Manual:$wgNamespaceAliases> (see the sketch
after rationale 4 below), but that could defeat part of the point of
rebranding, which is to deprecate use of the old wiki name.

4) We don't have any other namespace names (User, Template, etc.) that are,
by default, different from one wiki to the next; what is the logic for
making an exception for the meta namespace?
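
To make the rationales concrete, here is roughly what the relevant
LocalSettings.php lines look like ('FooWiki' is a placeholder for the old
site name from rationale 3):

// Today the installer's default is to name namespace 4 after the site:
$wgMetaNamespace = 'FooWiki';

// Under this proposal the default it writes would instead be:
// $wgMetaNamespace = 'Project';

// Rationale 3's workaround after a rename: keep the old form working as an
// alias, at the cost of keeping the old name alive.
$wgNamespaceAliases['FooWiki'] = NS_PROJECT;
$wgNamespaceAliases['FooWiki_talk'] = NS_PROJECT_TALK;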

-- 
Nathan Larson https://mediawiki.org/wiki/User:Leucosticte
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Change the installer to make Project: the default option for meta namespace name

2014-05-03 Thread Nathan Larson
On Sat, May 3, 2014 at 7:11 PM, K. Peachey p858sn...@gmail.com wrote:

 On 4 May 2014 05:38, Nathan Larson nathanlarson3...@gmail.com wrote:
 
  4) We don't have any other namespace names (User, Template, etc.) that
 are,
  by default, different from one wiki to the next; what is the logic for
  making an exception for the meta namespace?
 
 
 Because the Template/User namespaces aren't the same thing as the project
 namespace?


The Category namespace isn't the same thing as the Template or User
namespaces either, but by default it's called by the same name from one
wiki to the next. So, what makes the meta namespace so special?

If the argument is "The contents of the meta namespace differ from one wiki
to the next; therefore the name of the meta namespace should also differ",
that same argument could apply to all the other namespaces as well. The
templates of enwiktionary, for example, differ from the templates of
enwiki, but the namespace is called Template: on both.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Change the installer to make Project: the default option for meta namespace name

2014-05-03 Thread Nathan Larson
On Sat, May 3, 2014 at 11:08 PM, Max Semenik maxsem.w...@gmail.com wrote:

 This proposal makes no sense: all these namespaces _are_ dependent on wiki
 language, so even if you force Project down the throats of non-English
 users, project talk would still be e.g. Project_ахцәажәара for Abkhazian
 wikis and so on. Thus, you're attempting to disrupt MediaWiki's
 localizability for the sake of your ephemerous Inclumedia project. I don't
 think this is what our users want.


That's a non sequitur, as there has been no mention of Inclumedia anywhere
in the discussion of this bug, and this proposal has no relation to
Inclumedia. Nor does it have much relevance to localisation, that I know
of. The option of making Project: the name of the meta namespace is already
available, and works fine when selected; this merely makes it the default
option in the installer, so as to encourage what appears to be a better
practice.

If you choose Abkhazian as the language in the installer, it will give you
the option of setting the meta namespace to Проект (Abkhazian for Project).
This appears to be set in includes/installer/i18n/ru.json. In that file, the
line reads ("config-ns-generic": "Проект",). In MessagesAb.php and so
on, the meta talk namespace name is set by
$namespaceNames[NS_PROJECT_TALK]. In that file, it says NS_PROJECT_TALK
=> '$1_ахцәажәара'. If you pick that option, your meta talk namespace will
be Проект_ахцәажәара, as it should be. So it seems like this wouldn't
interfere with localisation in the way you describe.

I don't see how the proposed change forces anything down the throats of
anyone any more than the current situation does; some option or another has
to be the default, so the question is what's the best option to encourage
people to pick.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Priorities in Bugzilla

2012-11-28 Thread Nathan Larson
On Wed, Nov 28, 2012 at 12:56 PM, bawolff bawolff...@gmail.com wrote:

 What I would really like to see is banning users from touching the
 priority field. The field is made rather useless by bug reporters who
 feel their pet issue is the most important thing to ever happen - and
 suddenly there's a whole lot of issues at highest. Of course I would
 still like to see triaging people setting the field - just not the
 randoms who think by marking their bug highest priority it will
 actually be treated like highest priority.

 Although Bugzilla isn't a wiki, it might still be helpful to follow the
wiki principle of letting it remain easy to fix people's bad changes (e.g.
setting the wrong priority) rather than tying their hands from making those
changes. Could we instead draw users' attention to the page listing what
all the different priorities mean, e.g. by having Bugzilla ask "Are you
sure this meets the criteria for a highest-priority bug?" and then linking
to http://www.mediawiki.org/wiki/Bugzilla/Fields#Priority ?

-- 
Nathan Larson
703-399-9376
http://mediawiki.org/wiki/User:Leucosticte
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] importing wikimedia dumps using mwdumper

2010-05-21 Thread Nathan Day
Hi everyone,

I am new to the mailing list. I joined because I am seeking some help with a
project I am working on, and I hope that I can contribute to solving other
people's problems as well.

I have been trying to set up a mirror site of Wiktionary on my Mac and have
been running into some difficulties. Here is what I am using:

Product Version
MediaWiki 1.15.3
PHP 5.2.12 (apache2handler)
MySQL 5.1.46

Extensions
Cite (Version r47190)
ParserFunctions (Version 1.1.1)
Winter (Wiki INTERpreter) (Version 2.2.0)

The process that I have for importing the database dumps is:

download pages_articles.xml.gz2 and other .sql link tables files for
wiktionary

run mwdumper.jar file, which I have built from the source code, on the xml
file

run mysql with all sql files

run rebuildall.php to rebuild link tables

Here are the issues that I am having:

1. Articles are successfully imported into the database, but not viewable in
MediaWiki... such that I can find an article in the db for "dog", for
example, but I can't see that article when I enter the corresponding URL.

2. For the articles that do show up, the templates are not transcluding. For
an article that has Template:Hat for example, where I can see that the page
exists in the db, mediawiki is acting like the template doesn't exist.

If anyone has any experience with these kinds of problems or importing
database dumps in general, your help would be much appreciated. Thank you!

Nathan Day
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] importing wikimedia dumps using mwdumper

2010-05-21 Thread Nathan Day
Thanks for the solution, Roan; that fixed the issue! Hip hip, hoorah and
three cheers for Roan.

On Fri, May 21, 2010 at 12:40 PM, Roan Kattouw roan.katt...@gmail.comwrote:

 2010/5/21 Nathan Day nathanryan...@gmail.com:
  I have been trying to set up a mirror site of wiktionary
 [snip]
  1. Articles are succesfully imported into the database, but not viewable
 in
  mediawiki... such that i can find an article in the db for dog for
  example, but I can't see that article when I enter the corresponding url
 
 You probably need to set $wgCapitalLinks = false; . Wiktionary uses
 this setting to be able to have page names with an uncapitalized first
 letter. On your wiki, this is probably set to true (the default),
 causing MediaWiki to miss the dog entry because it'll be looking for
 Dog.

 Roan Kattouw (Catrope)

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] About RESOLVED LATER

2012-11-05 Thread Nathan Larson
On Mon, Nov 5, 2012 at 5:25 PM, Quim Gil quim...@gmail.com wrote:

 What about removing the LATER resolution from our Bugzilla? It feels like
 sweeping reports under the carpet. If a team is convinced that something
 won't be addressed any time soon then they can WONTFIX. If anybody feels
 differently then they can take the report back and fix it.

 Before we could simply reopen the 311 reports filed under RESOLVED LATER:


Should we create a new status, e.g. BLOCKED or LATER (rather than RESOLVED
LATER) for these?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] About RESOLVED LATER

2012-11-13 Thread Nathan Larson
On Tue, Nov 13, 2012 at 2:01 PM, Mark A. Hershberger m...@everybody.orgwrote:

 On 11/13/2012 11:54 AM, Martijn Hoekstra wrote:
  Once no RESOLVED LATER tickets remain, I can remove the resolution.
 
  Silence means approval.
 
 
  No it doesn't.

 The above exchange really confuses me.

 I'm not sure what else silence could mean when someone explicitly tells
 you (as Andre has) If no one voices any objections, I'm going to do
 this.  Could you clarify?

 And, if you think silence doesn't mean approval, wouldn't you be better
 served by speaking up an voicing your concerns?  It certainly seems like
 you'd be better served by clearly stating your objection than by a terse
 reply such as you've given.

 Mark.


Perhaps it should've been "silence means acquiescence"?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Language indices problem when importing wiktionary dump

2011-02-23 Thread Nathan Day
Hello,

I have a problem with getting the language indices to work after importing
the English Wiktionary database dump to a locally running MediaWiki
instance.

What I would like to see is the http://en.wiktionary.org/wiki/Index:English
page working at my localhost address. This page is not found.

What I have imported is the regular articles dump file and the category
and category links dump files. Are the indexes in a separate dump file, or is
there a script to build them or something? What am I missing here? Any help
would be much appreciated, and I apologize if I didn't include all the
relevant information.

Best Regards,
Nathan Day
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Language templates missing from Mediawiki dumps

2012-02-07 Thread Nathan Day
Hi Mediawiki users,

I have experienced a repeatable problem in which, after importing a
MediaWiki database dump, I need to export the language templates from
the live site and import them into my local instance. I use MWDumper
to build an SQL file which is read into my local MySQL instance. The
problem is that the language template articles are not included in
the dump.

I am referring to all templates articles that have the following form:

Template:EN

EN in this could be any language code.

This is not limited any particular Mediawiki dump, they all seem to
have this problem. That being said, it is not much to simply import
the missing templates manually, I was wondering if anyone had
experienced this problem or has a quicker solution than the manual
import/export.

Best Regards,
Nathan Day
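
For reference, a rough sketch of the manual workaround described above, using
standard MediaWiki maintenance scripts (the wiki and template name are only
examples; adjust to whatever is missing on your instance):

  # Export one missing template from the live site, then import it locally.
  curl -o templates.xml \
      'https://en.wiktionary.org/wiki/Special:Export/Template:EN'
  php maintenance/importDump.php templates.xml
  php maintenance/rebuildrecentchanges.php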

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Language templates missing from Mediawiki dumps

2012-02-07 Thread Nathan Day
They are template pages that are missing, not interwiki links. :-)

On Tue, Feb 7, 2012 at 9:15 AM, OQ overlo...@gmail.com wrote:
 Are you sure those aren't interwiki links rather than templates?

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Language templates missing from Mediawiki dumps

2012-02-07 Thread Nathan Day
I did ask a closely related question several years ago, and I did get a
solution to that problem. :-) The difference is that last time I had a
transclusion problem: the mechanism for substituting text from a
template wasn't working, and that turned out to be caused by the
$wgCapitalLinks setting. This time the transclusion works fine, but
there is no page for the template because it was never imported when
the dump was loaded.

On Tue, Feb 7, 2012 at 9:04 AM, Tei oscar.vi...@gmail.com wrote:
 I can see that you asked the same question two years ago and nobody helped
 you :( on that point.

 If I remember correctly, recreating the database is uncharted waters just
 now, and there's a call for anyone trying to do it to document the
 problems and solutions, to help people trying it in the future.

 Hope somebody can help you :D, or give you some workaround, patch, or idea.

 On 21 May 2010 21:35, Nathan Day nathanryan...@gmail.com wrote:

 

 2. For the articles that do show up, the templates are not transcluding.
 For an article that has Template:Hat, for example, where I can see that
 the page exists in the db, MediaWiki acts as if the template doesn't exist.

 If anyone has experience with these kinds of problems, or with importing
 database dumps in general, your help would be much appreciated. Thank you!

 Nathan Day


 On 7 February 2012 17:51, Nathan Day nathanryan...@gmail.com wrote:

 Hi MediaWiki users,

 I have experienced a repeatable problem: after importing a MediaWiki
 database dump, I need to export the language templates from the live
 site and import them into my local instance. I use MWDumper to build
 an SQL file which is read into my local MySQL instance. The problem is
 that the language template pages are not included in the dump.

 I am referring to all template pages that have the following form:

 Template:EN

 where EN could be any language code.

 This is not limited to any particular MediaWiki dump; they all seem to
 have this problem. That said, it is not much work to simply import the
 missing templates manually, but I was wondering if anyone has
 experienced this problem or has a quicker solution than the manual
 import/export.

 Best Regards,
 Nathan Day



 --
 --
 ℱin del ℳensaje.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l