Re: [Wikitech-l] Help needed with reading strategy process

2015-09-23 Thread Quim Gil
On Tue, Sep 22, 2015 at 9:28 PM, Brian Wolff  wrote:

> >
> > In our own exercise, we identified one problem that manifests itself
> across
> > different indicators is our core system's lack of optimization for
> emerging
> > platforms, experiences, and communities.
> >
>
> What does that even mean?
>

Just for the record, I commented similarly at
https://www.mediawiki.org/wiki/Talk:Reading/Strategy/Strategy_Process#Definition_of_the_problem
before seeing your message here.

-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Project Idea " Extension: Offline MediaWiki "

2015-09-23 Thread Quim Gil
Adisha created https://phabricator.wikimedia.org/T113396 and I commented
there before seeing this thread.

Coincidentally, I related that proposal to editing offline, which is an
interesting scenario that in fact has more than one related task in
Phabricator -- see https://phabricator.wikimedia.org/T106898#1665449

And yes, Kiwix projects are welcome in Outreachy / GSoC / etc., just like
any other project with a connection to Wikimedia or MediaWiki. In fact,
Kiwix has already been one of the main providers of Google Code-in tasks.

On Tue, Sep 22, 2015 at 7:10 PM, adisha porwal 
wrote:

> Greetings,
> I want to contribute to Wikimedia, and for that the Outreachy
>  internship program looks like a perfect fit
> for me.
>
> To participate in the Outreachy internship, I need a project idea that I
> will work on during my internship period. The project idea, suggested by
> bmansurov , is to develop a new extension to make MediaWiki
> available offline.
>
> Is there any existing extension or project that implements this project
> idea or something similar to it? If yes, please provide a link to that project.
>
> --
> Regards
> Adisha Porwal
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help needed with reading strategy process

2015-09-23 Thread C. Scott Ananian
Let's consider one of my pet bugbears: Chinese Wikipedia.  Our
readership numbers are way below what we'd like, and as I understand
it, the total numbers of editors and articles are low as well.  So
obviously a problem for the reading team, right?

However, a solution needs to grapple with the problem of creating
content for zhwiki, which would involve language engineering and the
editing team.  Handling language variants better for reading would be
good, too, but (AFAIK) we don't have a single active member of zhwiki
on staff (according to
https://office.wikimedia.org/wiki/Community_engagement/Staff_involvement),
and just a single engineer fluent in Mandarin (according to
https://office.wikimedia.org/wiki/HR_Corner/Languages). [My numbers
could be slightly off here, forgive me if so.  But clearly we don't
have a *huge presence* from zhwiki on-staff, the way we do for, say,
enwiki.]  So maybe we need to involve HR?

There are politics involved, too: perhaps the solution would involve
the Community Engagement team, to try to build up the local wikipedia
community and navigate the politics?

My point is that even a narrow focus on increasing page views fails to
address the more fundamental issues responsible, which spill outside
of the team silo.  So a strategy session isolated to the reading team
risks either missing the forest for the trees (concentrating only on
problems solvable locally), or else generating a lot of problems and
discussion on issues which can't be addressed without involving the
wider organization.  (I rather expected to see the former, but most of
the issues currently on
https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process seem
to be the latter.)

I think a strategy process probably needs a mix of both near- and
far-sightedness: identifying issues which can be solved by the team
itself (better engagement with users, for example), but also having a
process for escalating issues that require a more organizational
response.  The latter seems especially important for a team composed
mostly of remote workers, since there aren't the same informal
watercooler-talk mechanisms available for building awareness of
broader needs.
 --scott

-- 
(http://cscott.net)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Project Idea " Extension: Offline MediaWiki "

2015-09-23 Thread C. Scott Ananian
Note that I presented a tutorial at the latest Wikimania which created
a simple offline version of Wikipedia in ~100 lines of code:
https://phabricator.wikimedia.org/T105175

That code could be further developed into a proper tool, with
user-configurable offlining, offline editing, etc.  One of the slides
in my presentation outlined a decent number of "next steps" that could
be done.

But I personally would love to see development of the zimwriter for
OCG, which would allow us to restore the "download as ZIM" option for
Kiwix. A start at that code is at
https://github.com/cscott/mw-ocg-zimwriter but it needs to be
finished.

Editing offline is an interesting challenge.  It might be subsumed on
the back-end by the real-time collaboration work, since that will
introduce more fine-grained mechanisms for merging changes.  But
actual implementations in the field are always useful, even if limited
(for example, limited to edits where the article has not been modified
by anyone else while the editor was offline), since getting people to
actually use a tool like this always helps us learn more about how it
*should* work.
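
To make that limited case concrete: an offline client could remember the
revision it started from and, before pushing its edit, ask the action API
whether the page has changed. A rough sketch only -- the endpoint, title,
and base revision ID below are placeholders, not part of any existing tool:

<?php
// Rough sketch: check whether anyone edited the page while we were offline.
// $apiUrl, $pageTitle and $baseRevId (the revision the offline copy was made
// from) are placeholder values for illustration.
$apiUrl = 'https://en.wikipedia.org/w/api.php';
$pageTitle = 'Example';
$baseRevId = 123456;

$query = http_build_query( [
    'action' => 'query',
    'prop' => 'revisions',
    'rvprop' => 'ids|timestamp',
    'titles' => $pageTitle,
    'format' => 'json',
] );
$data = json_decode( file_get_contents( "$apiUrl?$query" ), true );
$page = reset( $data['query']['pages'] );
$currentRevId = $page['revisions'][0]['revid'];

if ( $currentRevId === $baseRevId ) {
    // Nobody else touched the page: safe to submit the offline edit.
    // Passing basetimestamp to action=edit lets the server re-check at save time.
    echo "No intervening edits; push the offline change.\n";
} else {
    echo "Page changed since the offline copy (now at rev $currentRevId); merge needed.\n";
}
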
 --scott

On Wed, Sep 23, 2015 at 6:47 AM, Quim Gil  wrote:
> Adisha created https://phabricator.wikimedia.org/T113396 and I commented
> there before seeing this thread.
>
> Accidentally, I related that proposal with editing offline, which is an
> interesting scenario that has in fact more than one related task in
> Phabricator -- see https://phabricator.wikimedia.org/T106898#1665449
>
> And yes, Kiwix projects are welcome to Outreachy / GSoC / etc, just like
> any other projects with a connection with Wikimedia or MediaWiki. In fact,
> Kiwix has been already one of the main providers of Google Code-in tasks.
>
> On Tue, Sep 22, 2015 at 7:10 PM, adisha porwal 
> wrote:
>
>> Greeting,
>> I want to contribute to wikimedia and for that Outreachy
>>  intership program looks perfect fit for
>> me.
>>
>> For participating in outreachy internship, I need a project idea that I
>> will be working on during my internship period. The project idea is to
>> develop a new extension to make MediaWiki available offline suggested by
>> bmansurov .
>>
>>  Is their any existing extension or project which implements this project
>> idea or similar to it?If yes, please provide link to that project.
>>
>> --
>> Regards
>> Adisha Porwal
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
>
>
> --
> Quim Gil
> Engineering Community Manager @ Wikimedia Foundation
> http://www.mediawiki.org/wiki/User:Qgil
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
(http://cscott.net)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Importing XML Dumps - templates not working

2015-09-23 Thread v0id null
Thanks for the input everyone. I was not aware that importing the XML dumps
was so involved.

In the end I used xml2sql, but it required two patches, and a bit more work
on my end, to get it to work. I also had to strip out the
 tag from the xml dump. Nevertheless, it is very fast.

For those wondering, I'm toying around with an automated news categorizer
and wanted to use Wikinews as a corpus. Not perfect, but this is just
hobbyist-level stuff here. I'm using NLTK, so I wanted to keep things
Python-centric, but I've written a PHP script that runs as a simple TCP
server that my Python script can connect to and ask for the HTML output. My
Python script first downloads MediaWiki and the right XML dump, unzips
everything, sets up LocalSettings.php, compiles xml2sql, runs it, then
imports the SQL into the database. So it essentially automates making an
offline installation from what I assume is any MediaWiki XML dump. Then it
starts that simple PHP server (using plain sockets); I just send it page
IDs and it responds with the fully rendered HTML, including headers and
footers.
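
For anyone curious, the PHP side boils down to something like the sketch
below. It's a simplified illustration rather than my actual script: it
assumes a MediaWiki CLI entry point such as maintenance/commandLine.inc,
uses an arbitrary port, and returns just the parser output (the real thing
returns the full skin HTML with headers and footers):

<?php
// Simplified sketch of the page-ID-to-HTML socket server described above.
// Assumes it is run from the MediaWiki root via a CLI entry point such as
// maintenance/commandLine.inc; port and error handling are illustrative.
require_once __DIR__ . '/maintenance/commandLine.inc';

$server = stream_socket_server( 'tcp://127.0.0.1:8811', $errno, $errstr );
if ( !$server ) {
    die( "Could not bind socket: $errstr ($errno)\n" );
}

while ( true ) {
    $conn = @stream_socket_accept( $server, 300 );
    if ( !$conn ) {
        continue;
    }
    // The client sends one page ID per line and gets rendered HTML back.
    $pageId = (int)trim( fgets( $conn ) );
    $title = Title::newFromID( $pageId );
    if ( $title ) {
        $page = WikiPage::factory( $title );
        $content = $page->getContent();
        // Parser output only; the full skin output takes more setup.
        $html = $content ? $content->getParserOutput( $title )->getText() : '';
        fwrite( $conn, $html . "\n" );
    } else {
        fwrite( $conn, "NOT FOUND\n" );
    }
    fclose( $conn );
}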

I figure that with this approach I can run a few forks on the Python and
PHP sides to speed up the process.

Then I use Python to parse through the HTML and pull out whatever I need
from the page -- for now the categories and the article content, which I
can then use to train classifiers with NLTK.

Maybe not the easiest approach, but it does make things easy to use. I've
looked at the Python parsers, but none of them seem like they will be as
successful or as correct as using MediaWiki itself.

---alex

On Tue, Sep 22, 2015 at 11:09 PM, gnosygnu  wrote:

> Hi alex. I added some notes below based on my experience. (I'm the
> developer for XOWA (http://gnosygnu.github.io/xowa/) which generates
> offline wikis from the Wikimedia XML dumps) Feel free to follow up on-list
> or off-list if you are interested. Thanks.
>
> On Mon, Sep 21, 2015 at 3:09 PM, v0id null  wrote:
>
> > #1: mwdumper has not been updated in a very long time. I did try to use
> it,
> > but it did not seem to work properly. I don't entirely remember what the
> > problem was but I believe it was related to schema incompatibility.
> xml2sql
> > comes with a warning about having to rebuild links. Considering that I'm
> > just in a command line and passing in page IDs manually, do I really need
> > to worry about it? I'd be thrilled not to have to reinvent the wheel
> here.
> >
>
>
> > #2: Is there some way to figure it out? as I showed in a previous reply,
> > the template that it can't find, is there in the page table.
> >
> > As brion indicated, you need to strip the namespace name. The XML dump
> also has a "namespaces" node near the beginning. It lists every namespace
> in the wiki with "name" and "ID". You can use a rule like "if the title
> starts with a namespace and a colon, strip it". Hence, a title like
> "Template:Date" starts with "Template:" and goes into the page table with a
> title of just "Date" and a namespace of "10" (the namespace id for
> "Template").
>
>
> > #3: Those lua modules, are they stock modules included with the mediawiki
> > software, or something much more custom? If the latter, are they
> available
> > to download somewhere?
> >
> > Yes, these are articles with a title starting with "Module:". They will
> be
> in the pages-articles.xml.bz2 dump. You should make sure you have Scribunto
> set up on your wiki, or else it won't use them. See:
> https://www.mediawiki.org/wiki/Extension:Scribunto
>
>
> > #4: I'm not any expert on mediawiki, but it seems when that the titles in
> > the xml dump need to be formatted, mainly replacing spaces with
> > underscores.
> >
> > Yes, surprisingly, the only change you'll need to make is to replace
> spaces with underscores.
>
> Hope this helps.
>
>
> > thanks for the response
> > --alex
> >
> > On Mon, Sep 21, 2015 at 3:00 PM, Brion Vibber 
> > wrote:
> >
> > > A few notes:
> > >
> > > 1) It sounds like you're recreating all the logic of importing a dump
> > into
> > > a SQL database, which may be introducing problems if you have bugs in
> > your
> > > code. For instance you may be mistakenly treating namespaces as text
> > > strings instead of numbers, or failing to escape things, or missing
> > > something else. I would recommend using one of the many existing tools
> > for
> > > importing a dump, such as mwdumper or xml2sql:
> > >
> > >
> https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps#Using_mwdumper
> > >
> > > 2) Make sure you've got a dump that includes the templates and lua
> > modules
> > > etc. It sounds like either you don't have the Template: pages or your
> > > import process does not handle namespaces correctly.
> > >
> > > 3) Make sure you've got all the necessary extensions to replicate the
> > wiki
> > > you're using a dump from, such as Lua. Many templates on Wikipedia call
> > Lua
> > > modules, and won't work without them.
> > >
> 

Re: [Wikitech-l] Importing XML Dumps - templates not working

2015-09-23 Thread v0id null
Looking at https://www.mediawiki.org/wiki/Parsoid/Setup

It seems that I need a web server set up for MediaWiki, plus Node.js, and I'd
have to go through the Parsoid API, which I guess goes through MediaWiki
anyhow.

Right now I use xpath to find everything I need. Getting categories for
example is as simple as:

$xpath = new DOMXPath($dom);
// Category links live in the #mw-normal-catlinks block of the skin output.
$contents = $xpath->query("//div[@id='mw-normal-catlinks']//li/a");

$categories = [];

foreach ($contents as $el) {
    $categories[] = $el->textContent;
}

Is there information that Parsoid makes available that isn't available from
Mediawiki output directly?

thanks,
-alex


On Wed, Sep 23, 2015 at 2:49 PM, C. Scott Ananian 
wrote:

> You might consider pointing a Parsoid instance at your "simple PHP
> server".  Using the Parsoid-format HTML DOM has several benefits over
> using the output of the PHP parser directly.  Categories are much
> easier to extract, for instance.
>
> See
> https://commons.wikimedia.org/wiki/File%3ADoing_Cool_Things_with_Wiki_Content_(Parsoid_Power!).pdf
> (recording at https://youtu.be/3WJID_WC7BQ) and
> https://doc.wikimedia.org/Parsoid/master/#!/guide/jsapi for some more
> hints on running queries over the Parsoid DOM.
>  --scott
>
> On Wed, Sep 23, 2015 at 2:25 PM, v0id null  wrote:
> > Thanks for the input everyone. I was not aware that importing the XML
> dumps
> > was so involved.
> >
> > In the end I used xml2sql, but it required two patches, and a bit more
> work
> > on my end, to get it to work. I also had to strip out the
> >  tag from the xml dump. But nevertheless it is very
> > fast.
> >
> > For those wondering, I'm toying around with an automated news categorizer
> > and wanted to use Wikinews as a corpus. Not perfect, but this is just
> > hobbyist level stuff here. I'm using nltk so I wanted to keep things
> > python-centric, but I've written up a PHP script that runs as a simple
> tcp
> > server that my python script can connect to and ask for the HTML output.
> My
> > python script first downloads mediawiki, the right xml dump, unzips
> > everything, sets up LocalSettings.php, compiles xml2sql, runs it then
> > imports the sql into the database. So essentially automates making an
> > offline installation of what I assume is any mediawiki xml dump. Then it
> > starts that simple PHP server (using plain sockets), and just sends it
> page
> > IDs and it responds with the fully rendered HTML including headers and
> > footers.
> >
> > I figure this approach, I can run a few forks on the python and php side
> to
> > speed up the process.
> >
> > then I use python to parse through the HTML to get whatever I need from
> the
> > page, which for now are the categories and the article content, which I
> can
> > then use to train classifiers from nltk.
> >
> > maybe not the easiest approach, but it does make it easy to use. I've
> > looked at the python parsers but none of them seem like they will be as
> > successful or as correct as using Mediawiki itself.
> >
> > ---alex
> >
> > On Tue, Sep 22, 2015 at 11:09 PM, gnosygnu  wrote:
> >
> >> Hi alex. I added some notes below based on my experience. (I'm the
> >> developer for XOWA (http://gnosygnu.github.io/xowa/) which generates
> >> offline wikis from the Wikimedia XML dumps) Feel free to follow up
> on-list
> >> or off-list if you are interested. Thanks.
> >>
> >> On Mon, Sep 21, 2015 at 3:09 PM, v0id null  wrote:
> >>
> >> > #1: mwdumper has not been updated in a very long time. I did try to
> use
> >> it,
> >> > but it did not seem to work properly. I don't entirely remember what
> the
> >> > problem was but I believe it was related to schema incompatibility.
> >> xml2sql
> >> > comes with a warning about having to rebuild links. Considering that
> I'm
> >> > just in a command line and passing in page IDs manually, do I really
> need
> >> > to worry about it? I'd be thrilled not to have to reinvent the wheel
> >> here.
> >> >
> >>
> >>
> >> > #2: Is there some way to figure it out? as I showed in a previous
> reply,
> >> > the template that it can't find, is there in the page table.
> >> >
> >> > As brion indicated, you need to strip the namespace name. The XML dump
> >> also has a "namespaces" node near the beginning. It lists every
> namespace
> >> in the wiki with "name" and "ID". You can use a rule like "if the title
> >> starts with a namespace and a colon, strip it". Hence, a title like
> >> "Template:Date" starts with "Template:" and goes into the page table
> with a
> >> title of just "Date" and a namespace of "10" (the namespace id for
> >> "Template").
> >>
> >>
> >> > #3: Those lua modules, are they stock modules included with the
> mediawiki
> >> > software, or something much more custom? If the latter, are they
> >> available
> >> > to download somewhere?
> >> >
> >> > Yes, these are articles with a title starting with "Module:". They
> will
> >> be
> >> in the 

Re: [Wikitech-l] Importing XML Dumps - templates not working

2015-09-23 Thread C. Scott Ananian
You might consider pointing a Parsoid instance at your "simple PHP
server".  Using the Parsoid-format HTML DOM has several benefits over
using the output of the PHP parser directly.  Categories are much
easier to extract, for instance.

See 
https://commons.wikimedia.org/wiki/File%3ADoing_Cool_Things_with_Wiki_Content_(Parsoid_Power!).pdf
(recording at https://youtu.be/3WJID_WC7BQ) and
https://doc.wikimedia.org/Parsoid/master/#!/guide/jsapi for some more
hints on running queries over the Parsoid DOM.
 --scott

On Wed, Sep 23, 2015 at 2:25 PM, v0id null  wrote:
> Thanks for the input everyone. I was not aware that importing the XML dumps
> was so involved.
>
> In the end I used xml2sql, but it required two patches, and a bit more work
> on my end, to get it to work. I also had to strip out the
>  tag from the xml dump. But nevertheless it is very
> fast.
>
> For those wondering, I'm toying around with an automated news categorizer
> and wanted to use Wikinews as a corpus. Not perfect, but this is just
> hobbyist level stuff here. I'm using nltk so I wanted to keep things
> python-centric, but I've written up a PHP script that runs as a simple tcp
> server that my python script can connect to and ask for the HTML output. My
> python script first downloads mediawiki, the right xml dump, unzips
> everything, sets up LocalSettings.php, compiles xml2sql, runs it then
> imports the sql into the database. So essentially automates making an
> offline installation of what I assume is any mediawiki xml dump. Then it
> starts that simple PHP server (using plain sockets), and just sends it page
> IDs and it responds with the fully rendered HTML including headers and
> footers.
>
> I figure this approach, I can run a few forks on the python and php side to
> speed up the process.
>
> then I use python to parse through the HTML to get whatever I need from the
> page, which for now are the categories and the article content, which I can
> then use to train classifiers from nltk.
>
> maybe not the easiest approach, but it does make it easy to use. I've
> looked at the python parsers but none of them seem like they will be as
> successful or as correct as using Mediawiki itself.
>
> ---alex
>
> On Tue, Sep 22, 2015 at 11:09 PM, gnosygnu  wrote:
>
>> Hi alex. I added some notes below based on my experience. (I'm the
>> developer for XOWA (http://gnosygnu.github.io/xowa/) which generates
>> offline wikis from the Wikimedia XML dumps) Feel free to follow up on-list
>> or off-list if you are interested. Thanks.
>>
>> On Mon, Sep 21, 2015 at 3:09 PM, v0id null  wrote:
>>
>> > #1: mwdumper has not been updated in a very long time. I did try to use
>> it,
>> > but it did not seem to work properly. I don't entirely remember what the
>> > problem was but I believe it was related to schema incompatibility.
>> xml2sql
>> > comes with a warning about having to rebuild links. Considering that I'm
>> > just in a command line and passing in page IDs manually, do I really need
>> > to worry about it? I'd be thrilled not to have to reinvent the wheel
>> here.
>> >
>>
>>
>> > #2: Is there some way to figure it out? as I showed in a previous reply,
>> > the template that it can't find, is there in the page table.
>> >
>> > As brion indicated, you need to strip the namespace name. The XML dump
>> also has a "namespaces" node near the beginning. It lists every namespace
>> in the wiki with "name" and "ID". You can use a rule like "if the title
>> starts with a namespace and a colon, strip it". Hence, a title like
>> "Template:Date" starts with "Template:" and goes into the page table with a
>> title of just "Date" and a namespace of "10" (the namespace id for
>> "Template").
>>
>>
>> > #3: Those lua modules, are they stock modules included with the mediawiki
>> > software, or something much more custom? If the latter, are they
>> available
>> > to download somewhere?
>> >
>> > Yes, these are articles with a title starting with "Module:". They will
>> be
>> in the pages-articles.xml.bz2 dump. You should make sure you have Scribunto
>> set up on your wiki, or else it won't use them. See:
>> https://www.mediawiki.org/wiki/Extension:Scribunto
>>
>>
>> > #4: I'm not any expert on mediawiki, but it seems when that the titles in
>> > the xml dump need to be formatted, mainly replacing spaces with
>> > underscores.
>> >
>> > Yes, surprisingly, the only change you'll need to make is to replace
>> spaces with underscores.
>>
>> Hope this helps.
>>
>>
>> > thanks for the response
>> > --alex
>> >
>> > On Mon, Sep 21, 2015 at 3:00 PM, Brion Vibber 
>> > wrote:
>> >
>> > > A few notes:
>> > >
>> > > 1) It sounds like you're recreating all the logic of importing a dump
>> > into
>> > > a SQL database, which may be introducing problems if you have bugs in
>> > your
>> > > code. For instance you may be mistakenly treating namespaces as text
>> > > strings instead of numbers, or 

Re: [Wikitech-l] GSoC & Outreachy IRC Showcase

2015-09-23 Thread Niklas Laxström
2015-09-22 13:37 GMT+03:00 Niharika Kohli :

> last but not the least, an awesome new search feature for TranslateWiki.
>

The new features are not only for translatewiki.net; they also work on all
Wikimedia sites (and other wikis) where Translate is installed.

For example, https://meta.wikimedia.org/wiki/Special:SearchTranslations

  -Niklas
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Do you run mediawiki on shared hosting? Tell us about it

2015-09-23 Thread Gilles Dubuc
I've created this task on the topic of shared hosting:
https://phabricator.wikimedia.org/T113210 as a proposal for the upcoming
Wikimedia Developer Summit.

If anyone on this list is currently running MediaWiki on a shared hosting
platform, I would really like to hear from you on that topic, either on the
list or on that Phabricator task. So far, all the discussions I've seen on
that subject seem to have been held "on behalf" of people relying on those
platforms, and I have yet to hear direct testimony and opinions from
people who are running MediaWiki that way.

The main questions I have for people in that situation are whether there
are blockers for them to move to more modern VM-based or container-based
hosting platforms, or whether they prefer to stay on shared hosting for
specific reasons, etc. Basically, tell us why you're on shared hosting for
your MediaWiki install(s).
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Do you run mediawiki on shared hosting? Tell us about it

2015-09-23 Thread Yongmin Hong
I once ran MW 1.19, in 2013 or before, on a shared hosting service,
since at that moment I was a complete n00b about Git, PHP (I'm still a
newbie when it comes to coding), and MariaDB/MySQL.

Not everybody who uses MediaWiki is tech-savvy enough to know how to
execute shell commands, secure their VM/container against various
attacks, etc.

PS. I'm not using shared hosting now. My Linode VPS is running Ubuntu 14.04
LTS as of today, and I'm not going back to shared hosting.

--
revi
https://revi.me
-- Sent from Android, forgive my top-posting. --
On Sep 23, 2015 at 5:47 PM, "Gilles Dubuc"  wrote:

> I've created this task on the topic of shared hosting:
> https://phabricator.wikimedia.org/T113210 as a proposal for the upcoming
> Wikimedia Developer Summit.
>
> If anyone on this list is currently running mediawiki on a shared hosting
> platform, I would really like to hear from you on that topic, either on the
> list or on that phabricator task. So far all the discussions on that
> subject I've seen seemed to be done "on behalf" of people relying on those
> platforms, and I have yet to hear direct testimonies and opinions from
> people who are running mediawiki that way.
>
> The main questions I have for people in that situation are whether there
> are blockers for them to move to more modern vm-based or container-based
> hosting platforms, or if they prefer to stay on shared hosting for specific
> reasons, etc. Basically, tell us why you're on shared hosting for your
> mediawiki install(s).
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Do you run mediawiki on shared hosting? Tell us about it

2015-09-23 Thread Tim Landscheidt
Yongmin Hong  wrote:

> I once ran mw 1.19 in 2013 or before under the shared hosting service,
> since at that moment I was fully n00b about git, php(I'm still newbie when
> it comes to coding), and mariadb/mysql.

> Not everybody who is using mediawiki is tech-savvy like they know how to
> execute shell commands, how to secure their VM/Container against various
> attacks, etc etc .

> PS. I'm not using shared hosting now. My Linode VPS is running Ubuntu 14.04
> LTS as of today, and I'm not going back to the shared hosting.

> [...]

Note Gilles's somewhat hidden clue that this might still
affect you:

| [...]

| The main questions I have for people in that situation are whether there
| are blockers for them to move to more modern *vm-based or container-based*
| hosting platforms, or if they *prefer* to stay on shared hosting for specific
| reasons, etc. Basically, tell us why you're on shared hosting for your
| mediawiki install(s).

So if your Linode VPS doesn't allow you to run VMs or containers, you
might be forced to rent real iron if MediaWiki were to require
virtualization in some form or another.

Tim


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Do you run mediawiki on shared hosting? Tell us about it

2015-09-23 Thread Brian Wolff
On 9/23/15, Gilles Dubuc  wrote:
> I've created this task on the topic of shared hosting:
> https://phabricator.wikimedia.org/T113210 as a proposal for the upcoming
> Wikimedia Developer Summit.
>
> If anyone on this list is currently running mediawiki on a shared hosting
> platform, I would really like to hear from you on that topic, either on the
> list or on that phabricator task. So far all the discussions on that
> subject I've seen seemed to be done "on behalf" of people relying on those
> platforms, and I have yet to hear direct testimonies and opinions from
> people who are running mediawiki that way.
>
> The main questions I have for people in that situation are whether there
> are blockers for them to move to more modern vm-based or container-based
> hosting platforms, or if they prefer to stay on shared hosting for specific
> reasons, etc. Basically, tell us why you're on shared hosting for your
> mediawiki install(s).
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

I feel like the types of people who use shared hosting are very
unlikely to be following this list.

--
-bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Reading web dashboard

2015-09-23 Thread Jon Robson
The reading web dashboard -
https://phabricator.wikimedia.org/dashboard/manage/125/ - has been
updated with panels that let you easily find easy patches to
work on (in priority order) and code that needs review.

I'm using the "code to review" column as part of Gerrit cleanup day,
since it seems to be a more reliable mechanism for identifying what needs
reviewing.

Please add it to your bookmarks so we are all on the same page. I'm
going to be encouraging the Wikimedia reading web team to use this
frequently in our standups.

Happy coding/reviewing/submitting! :)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Importing XML Dumps - templates not working

2015-09-23 Thread C. Scott Ananian
On Wed, Sep 23, 2015 at 3:27 PM, v0id null  wrote:
> Is there information that Parsoid makes available that isn't available from
> Mediawiki output directly?

Yes, certainly.
https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec should give
you an idea.

One example (of many) is comments in wikitext, which are stripped
from the PHP parser's output but preserved in Parsoid HTML.
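
Another concrete one is categories: in Parsoid HTML each category shows up
as a <link rel="mw:PageProp/Category"> element, so extraction is a single
DOM query. A rough sketch in the style of your DOMXPath snippet (the
variable holding the Parsoid HTML is just a placeholder):

<?php
// Sketch: pulling categories out of Parsoid HTML instead of PHP-parser HTML.
// Assumes $parsoidHtml already holds the page's Parsoid output.
$dom = new DOMDocument();
@$dom->loadHTML( $parsoidHtml );
$xpath = new DOMXPath( $dom );

$categories = [];
// Per the Parsoid DOM spec, each category is a <link rel="mw:PageProp/Category">.
foreach ( $xpath->query( '//link[contains(@rel, "mw:PageProp/Category")]' ) as $link ) {
    // The href looks like "./Category:Foo"; keep just the category name.
    $categories[] = preg_replace( '!^\./Category:!', '', $link->getAttribute( 'href' ) );
}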

Your question really is, "is there information *I need* that Parsoid
makes available that isn't available from Mediawiki output directly?"
I don't know the answer to that.  Probably the right thing is to keep
going with the implementation you've got, but if you get stuck keep in
the back of your mind that switching to Parsoid might help.
 --scott

-- 
(http://cscott.net)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] Only one week left for Individual Engagement Grant proposals

2015-09-23 Thread Quim Gil
Hi, the new round of Wikimedia Individual Engagement Grants
 is open until 29 Sep. For the
first time, technical projects are within scope, thanks to the feedback
received at Wikimania 2015, before, and after (T105414
). If someone is interested in
obtaining funds to push this task, this might be a good way.

On Tue, Sep 22, 2015 at 10:31 PM, Marti Johnson 
wrote:

> There is still one week left to submit Individual Engagement Grant
> (IEG) proposals
>   before the September 29th deadline.
> If you have ideas for new tools, community-building processes, and other
> experimental projects that enhance the work of Wikimedia volunteers, start
> your proposal today!  Please encourage others who have great ideas to apply
> now as well.  Support is available if you want help turning your idea into
> a grant request.
>
>    - Submit a grant request
>      <https://meta.wikimedia.org/wiki/Grants:IEG#ieg-apply>
>    - Get help with your proposal in IdeaLab
>      <https://meta.wikimedia.org/wiki/Grants:IdeaLab>
>    - Learn from examples of completed Individual Engagement Grants
>      <https://meta.wikimedia.org/wiki/Grants:IEG#ieg-engaging>
>
> Put your idea into motion, and submit your proposal this week! <
> https://meta.wikimedia.org/wiki/Grants:IEG#ieg-apply>
>
>
> Kind regards,
>
> Marti
>
> *Marti Johnson*
> *Program Officer*
> *Individual Grants*
> *Wikimedia Foundation *
> +1 415-839-6885
> Skype: Mjohnson_WMF
>
> Imagine a world in which every single human being can freely share
>  in the sum of all knowledge.  Help us make
> it
> a reality!
> Support Wikimedia 
>
>
> Date: Mon, 31 Aug 2015 19:29:44 -0500
> From: Chris Schilling 
> To: wikimedi...@lists.wikimedia.org
> Subject: [Wikimedia-l] Open call for Individual Engagement Grant proposals
>
> Hi everyone,
>
> This is Chris Schilling (User:I JethroBT).  The Wikimedia Foundation
> Individual Engagement Grants (IEG) program is accepting proposals for
> funding new experiments from August 31st to September 29th.  <
> https://meta.wikimedia.org/wiki/Grants:IEG>  As a former grantee in
> developing the Co-op mentorship space, I encourage you to explore IEG as a
> way to realize your idea for improving Wikimedia projects. <
> https://meta.wikimedia.org/wiki/Grants:IEG/Reimagining_Wikipedia_Mentorship
> >
>
> Your idea could involve building new tools or software, organizing a better
> process on your wiki, conducting research to investigate an important
> issue, or providing other support for community-building. Whether you need a
> small or large amount of funds (up to $30,000 USD) Individual Engagement
> Grants can support you and your team’s project development time in addition
> to other expenses like travel, materials, and rental space. Project
> schedules and reporting are flexible for grantees, and staff are available
> on Meta to support you through all stages of your project.
>
> Do you have have an idea, but would like some feedback before applying? Put
> it into the IdeaLab, where volunteers and staff can give you advice and
> guidance on how to bring it to life.  <
> https://meta.wikimedia.org/wiki/Grants:IdeaLab>  Once your idea is ready,
> it can be easily migrated into a grant request.  IEG Program Officer Marti
> Johnson and I will also be hosting several Hangouts for real-time
> discussions to answer questions and help you make your proposal better -
> the first will happen on September 8th. <
> https://meta.wikimedia.org/wiki/Grants:IdeaLab/Events#Upcoming_events>
>
> Please feel free to get in touch with Marti (mjohn...@wikimedia.org) or me
> with questions about getting started with your idea.
>
> We are excited to see your grant ideas that will support our community and
> make an impact on the future of Wikimedia projects.  Put your idea into
> motion, and submit your proposal this September! <
> https://meta.wikimedia.org/wiki/Grants:IEG#ieg-apply>
>
>
> With thanks,
>
>
> Chris
>
> --
> Chris "Jethro" Schilling
> I JethroBT (WMF) 
> Community Organizer, Wikimedia Foundation
> 
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> wikimedi...@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 




-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org

Re: [Wikitech-l] HTMLForm and default values

2015-09-23 Thread Ricordisamoa

Finally found a task for this: https://phabricator.wikimedia.org/T38210
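
For context, the kind of field involved looks roughly like the sketch below
(field names and values are purely illustrative); the issue is that an empty
?wpexample= query parameter currently wins over the 'default' value.

<?php
// Illustrative only: a minimal HTMLForm field with a 'default' value.
// With a request like ?wpexample= (empty string), the submitted empty value
// currently overrides 'default', which is the behaviour in question.
$formDescriptor = [
    'example' => [
        'type' => 'text',
        'label' => 'Example field',
        'default' => 'fallback value',
    ],
];
$form = new HTMLForm( $formDescriptor, RequestContext::getMain() );
$form->setMethod( 'get' );
$form->setSubmitCallback( function () { return true; } );
$form->show();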

On 16/09/2015 20:27, Ricordisamoa wrote:
For https://gerrit.wikimedia.org/r/233423 I need 'default' values of 
HTMLForm fields to overwrite values set via query parameters, when the 
latter are set to empty strings. Do you know a clean way to do it?

Thanks in advance.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] 2015-09-23 Scrum of Scrums notes

2015-09-23 Thread Grace Gellerman
https://www.mediawiki.org/wiki/Scrum_of_scrums/2015-09-23
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [reading-wmf] Reading web dashboard

2015-09-23 Thread Dan Garry
That's awesome. Nice work.

Dan

On 23 September 2015 at 12:55, Jon Robson  wrote:

> The reading web dashboard -
> https://phabricator.wikimedia.org/dashboard/manage/125/ - has been
> updated to have panels allowing you to easily find easy patches to
> work on (in priority order) and code to review.
>
> I'm using the code to review column as part of Gerrit cleanup day
> since it seems to be a more reliable mechanism to identify what needs
> reviewing.
>
> Please add it to your bookmarks so we are all on the same page. I'm
> going to be encouraging the Wikimedia reading web team to use this
> frequently in our standups.
>
> Happy coding reviewing/submitting! :)
>
> ___
> reading-wmf mailing list
> reading-...@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/reading-wmf
>



-- 
Dan Garry
Lead Product Manager, Discovery
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l