Re: [Wikitech-l] 100% open source stack (was Re: Bugzilla Vs other trackers.)

2010-01-09 Thread John Vandenberg
On Sat, Jan 9, 2010 at 3:49 PM, Tim Starling tstarl...@wikimedia.org wrote:
 John Vandenberg wrote:
 On Sat, Jan 9, 2010 at 12:10 PM, Tim Starling tstarl...@wikimedia.org 
 wrote:
 Platonides wrote:
 What were the reasons for replacing lighttpd with Sun Java System Web
 Server ?
 Probably the same reason that the toolserver uses Confluence instead
 of MediaWiki.

 It only contains one page, which points to the MediaWiki wiki.

 https://confluence.toolserver.org/pages/listpages-dirview.action?key=main

 I count 65 pages.

 https://confluence.toolserver.org/pages/listpages-dirview.action?key=tech

 Maybe you were confused by the unfamiliar UI.

Thanks Tim.  I should know Confluence better; we use it at work.  sigh.

 Are there plans to make greater use of the Confluence wiki?

 Certainly not.

Good to hear.

--
John Vandenberg



Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Robert Rohde
On Fri, Jan 8, 2010 at 6:06 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
snip

 No one wants the monolithic tarball. The way I got updates previously
 was via a rsync push.

 No one sane would suggest a monolithic tarball: it's too much of a
 pain to produce!

I know that you didn't want or use a tarball, but requests for an
image dump are not that uncommon, and often the requester is
envisioning something like a tarball.  That seems to be what the
originator of this thread was asking for.  I think you and I are
probably on the same page about the virtue of ensuring that images
can be distributed and that monolithic approaches are bad.

snip

 But I think producing subsets is pretty much worthless. I can't think
 of a valid use for any reasonably sized subset. ("All media used on
 big wiki X" is a useful subset I've produced for people before, but
 it's not small enough to be a big win vs a full copy.)

Wikipedia itself has gotten so large that increasingly people are
mirroring subsets rather than allocating the space for a full mirror
(e.g. the pages on cooking, or medicine, or whatever).  Grabbing the
images needed for such an application would be useful.  I can also see
virtue in having a way to grab all images in a category (or set of
categories), for example all images of dogs, or all images of
Barack Obama.  In case you think this is all hypothetical, I've
actually downloaded tens of thousands of images on more than one
occasion to support topical projects.

snip

 If all is made available then everyone's wants can be satisfied. No
 subset is going to get us there. Of course, there are a lot of
 possibilities for the means of transmission, but I think it would be
 most useful to assume that at least a few people are going to want to
 grab everything.

Of course, strictly speaking we already provide HTTP access to
everything.  So the real question is how can we make access easier,
more reliable, and less burdensome.  You or someone else suggested an
API for grabbing files and that seems like a good idea.  Ultimately
the best answer may well be to take multiple approaches to accommodate
both people like you who want everything as well as people that want
only more modest collections.
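
(As an aside, the existing API can already return the URL for a given
file; a request along these lines should do it, with File:Example.jpg
standing in as a placeholder title:

  api.php?action=query&titles=File:Example.jpg&prop=imageinfo&iiprop=url&format=json

so part of this is really about batching and convenience rather than new
capability.)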

-Robert Rohde



Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Platonides
Robert Rohde wrote:
 Of course, strictly speaking we already provide HTTP access to
 everything.  So the real question is how can we make access easier,
 more reliable, and less burdensome.  You or someone else suggested an
 API for grabbing files and that seems like a good idea.  Ultimately
 the best answer may well be to take multiple approaches to accommodate
 both people like you who want everything as well as people that want
 only more modest collections.
 
 -Robert Rohde

Anthony wrote:
 The bandwidth-saving way to do things would be to just allow mirrors to use
 hotlinking.  Requiring a middle man to temporarily store images (many, and
 possibly even most of which will never even be downloaded by end users) just
 wastes bandwidth.


There is already a way to instruct a wiki to use images from a foreign
wiki as they are needed, with proper caching.

In 1.16 it will be even easier, as you will only need to set
$wgUseInstantCommons = true; to use Wikimedia Commons images.
http://www.mediawiki.org/wiki/Manual:$wgUseInstantCommons
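
A minimal LocalSettings.php sketch of that (assuming 1.16 or later):

  # Pull file pages and thumbnails from Wikimedia Commons on demand
  $wgUseInstantCommons = true;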




[Wikitech-l] The future of action=ajax

2010-01-09 Thread Bryan Tong Minh
Hi,


As you may know there are currently two entry points in MediaWiki for
javascript that wants to perform certain actions, action=ajax and
api.php. Only the following features still use action=ajax: ajax
watch, upload license preview, and the upload warnings check. I don't
really see much point in having two entry points where one would suffice.
These could all be readily migrated to the API. However, this would
mean that they will become unavailable if the API is disabled. Would
that be considered a problem?
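
For illustration, the two request shapes look roughly like this (the
function and module names below are only placeholders):

  index.php?action=ajax&rs=SomeExportedFunction&rsargs[]=value   (old entry point)
  api.php?action=somemodule&format=json&...                      (API entry point)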


Bryan



Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Chad
On Sat, Jan 9, 2010 at 7:44 AM, Platonides platoni...@gmail.com wrote:
 Robert Rohde wrote:
 Of course, strictly speaking we already provide HTTP access to
 everything.  So the real question is how can we make access easier,
 more reliable, and less burdensome.  You or someone else suggested an
 API for grabbing files and that seems like a good idea.  Ultimately
 the best answer may well be to take multiple approaches to accommodate
 both people like you who want everything as well as people that want
 only more modest collections.

 -Robert Rohde

 Anthony wrote:
 The bandwidth-saving way to do things would be to just allow mirrors to use
 hotlinking.  Requiring a middle man to temporarily store images (many, and
 possibly even most of which will never even be downloaded by end users) just
 wastes bandwidth.


 There is already a way to instruct a wiki to use images from a foreign
 wiki as they are needed. With proper caching.

 On 1.16 it will even be much easier, as you will only need to set
 $wgUseInstantCommons = true; to use Wikimedia Commons images.
 http://www.mediawiki.org/wiki/Manual:$wgUseInstantCommons




I'd really like to underline this last piece, as it's something I feel
we're not promoting as heavily as we should be. With 1.16 making it a
one-line switch to turn on, perhaps we should publicize this.
Thanks to work Brion did in 1.13, which I picked up later on, we have
the ability to use files from Wikimedia Commons (or potentially any
MediaWiki installation). As pointed out above, this has configurable
caching that can be set as aggressively as you'd like.

To mirror Wikipedia these days, all you'd need is the article and
template dumps; point the ForeignAPIRepos at Commons and
enwiki and you've got yourself a working mirror. No need to dump
the images and reimport them somewhere. Cache the thumbnails
aggressively enough and you'll be hosting the images locally, in
effect.
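
For anyone curious, a rough LocalSettings.php sketch of pointing a
ForeignAPIRepo at a wiki looks something like this (the repo name and
cache values are just placeholders, not a tested recipe):

  $wgForeignFileRepos[] = array(
      'class'               => 'ForeignAPIRepo',
      'name'                => 'wikipedia-en',                      // placeholder name
      'apibase'             => 'http://en.wikipedia.org/w/api.php',
      'fetchDescription'    => true,   // also fetch file description pages
      'apiThumbCacheExpiry' => 86400,  // cache fetched thumbnails for a day
  );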

-Chad


Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Chad
On Sat, Jan 9, 2010 at 9:27 AM, Carl (CBM) cbm.wikipe...@gmail.com wrote:
 On Sat, Jan 9, 2010 at 8:50 AM, Anthony wikim...@inbox.org wrote:
 The original version of Instant Commons had it right.  The files were sent
 straight from the WMF to the client.  That version still worked last I
 checked, but my understanding is that it was deprecated in favor of the
 bandwidth-wasting "store files in a caching middle-man" approach.

 If I were a site admin using InstantCommons, I would want to keep a
 copy of all the images used anyway, in case they were deleted on
 commons but I still wanted to use them on my wiki.

 - Carl



A valid suggestion, but I think it should be configurable either
way. Some sites would like to use Wikimedia Commons but don't
necessarily have the space to store thumbnails (much less
the original sources).

However, a "copy the source file too" option could be added for
sites that would also like to fetch the original source file and
then import it locally. None of this is out of the realm of
possibility.

The main reason we went for the "render there, show thumbnail
here" idea was to increase compatibility. Not everyone has their
wiki set up to render things like SVGs. By rendering remotely,
you're assuming the source repo (like Commons) is set up to
render it (a valid assumption). By importing the image locally,
you're then possibly requesting remote files that you can't render.

Again, more configuration options for the different use cases
are possible.

-Chad


Re: [Wikitech-l] The future of action=ajax

2010-01-09 Thread Daniel Kinzler
Bryan Tong Minh wrote:
 Hi,
 
 
 As you may know there are currently two entry points in MediaWiki for
 javascript that wants to perform certain actions, action=ajax and
 api.php. Only the following features still use action=ajax: ajax
 watch, upload license preview and upload warnings check. 

Don't forget extensions, like categorytree

-- daniel



Re: [Wikitech-l] The future of action=ajax

2010-01-09 Thread Daniel Kinzler
Bryan Tong Minh wrote:
 Hi,
 
 
 As you may know there are currently two entry points in MediaWiki for
 javascript that wants to perform certain actions, action=ajax and
 api.php. 

Oh, also: action=ajax supports HTTP cache control. Can this be done with
the API yet?

-- daniel



Re: [Wikitech-l] The future of action=ajax

2010-01-09 Thread Roan Kattouw
2010/1/9 Daniel Kinzler dan...@brightbyte.de:
 Oh, also: action=ajax supports HTTP cache control. Can this be done with
 the API yet?

Yes, with the maxage and smaxage parameters. AFAIK those are currently
broken on Wikimedia, however, because Squid overrides the caching
headers set by the API.
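
For example (the module and cache values here are arbitrary):

  api.php?action=query&meta=siteinfo&format=json&maxage=300&smaxage=300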

Roan Kattouw (Catrope)



Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Dmitriy Sintsov
* Gregory Maxwell gmaxw...@gmail.com [Fri, 8 Jan 2010 21:06:11 -0500]:

 No one wants the monolithic tarball. The way I got updates previously
 was via a rsync push.

 No one sane would suggest a monolithic tarball: it's too much of a
 pain to produce!

 Image dump != monolithic tarball.

Why not extend the filerepo to make rsync or similar (maybe more
efficient) incremental backups easy? An incremental, distributed filerepo.
Dmitriy



Re: [Wikitech-l] The future of action=ajax

2010-01-09 Thread Bryan Tong Minh
On Sat, Jan 9, 2010 at 5:03 PM, Daniel Kinzler dan...@brightbyte.de wrote:
 Bryan Tong Minh wrote:
 Hi,


 As you may know there are currently two entry points in MediaWiki for
 javascript that wants to perform certain actions, action=ajax and
 api.php. Only the following features still use action=ajax: ajax
 watch, upload license preview and upload warnings check.

 Don't forget extensions, like categorytree

I was not yet planning to kill action=ajax itself, just all core
functions that use it. There is really no reason to break backwards
compatibility here.


Bryan



[Wikitech-l] Controlling of article / parser cache in extension code

2010-01-09 Thread Dmitriy Sintsov
Hi!
I have an extension which submits and renders polls defined with parser
tag hooks. To have the article view properly re-generate the dynamic
content of the tag, the typical approach is to use $parser->disableCache()
in the hook's function. However, in my case the dynamically generated
content changes only when a poll has been successfully submitted. When a
user just views the page and makes no vote, the content doesn't change,
and thus it should be cached to improve performance.

When I comment out the $parser->disableCache() line, the content of the
page is not updated until one purges the page manually, which is very
unhandy. So I make this call unconditionally, which is inefficient.

But, with my limited knowledge of core, I cannot find out which calls
I should make to invalidate the parser cache and article cache in the
extension's code on demand (conditionally). At the point in the code
where the user has successfully POSTed voting data and the results of
the vote were stored in the DB, I try to execute the following methods:

$wgArticle->doPurge();
$wgTitle->invalidateCache();

then perform a 302 redirect to show the same title with the updated poll
results.
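
In outline, the submit path looks roughly like this (simplified;
onPollSubmit() and storeVote() are placeholders for my own code):

  function onPollSubmit( $pollId, $answer ) {
      global $wgArticle, $wgTitle, $wgOut;
      storeVote( $pollId, $answer );               // write the vote to the DB (placeholder)
      $wgArticle->doPurge();                       // purge the article / parser cache
      $wgTitle->invalidateCache();                 // touch page_touched
      $wgOut->redirect( $wgTitle->getFullURL() );  // 302 back to the same title
      return true;
  }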

However, when I comment out the $parser->disableCache() line, the tag
function's output is not displayed at all, as if there were no tags on
the page.

What would you suggest? My only idea is to implement a GET action which
would conditionally purge the page. I don't like such an approach.

Thanks,
Dmitriy



Re: [Wikitech-l] Unified gadgets

2010-01-09 Thread Platonides
Casey Brown wrote:
 On Fri, Jan 8, 2010 at 4:14 PM, Lars Aronsson l...@aronsson.se wrote:
 Exactly! This is poor design. I have an account (through SUL)
 on the Ukrainian Wikipedia because I sometimes add interwiki
 links there. I want the same gadgets there, but I don't speak
 Ukrainian and I can't go around bothering local admins on
 every language with this. Gadgets should follow the user, just
 like the account name and password do. There must be a better
 way than the current one.

 
 We should also make it possible to have global gadgets controlled on
 Meta-Wiki.  This would be especially useful for hiding the Fundraising
 banner. ;-)

Agreed. There should be some kind of global gadgets, and also default
gadgets.
Now we need to rephrase it in a way that makes it look beneficial to dewiki,
so we can get a WM-DE employee to work on it ;)




Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Aryeh Gregor
On Fri, Jan 8, 2010 at 9:40 PM, Anthony wikim...@inbox.org wrote:
 Isn't that what the system immutable flag is for?

No, that's for confusing the real roots while providing only a speed
bump to an actual hacker.  Anyone with root access can always just
unset the flag.  Or, failing that, dd if=/dev/zero of=/dev/sda works
pretty well.



Re: [Wikitech-l] The future of action=ajax

2010-01-09 Thread Aryeh Gregor
On Sat, Jan 9, 2010 at 7:54 AM, Bryan Tong Minh
bryan.tongm...@gmail.com wrote:
 These could all be readily migrated to the API. However, this would
 mean that they will become unavailable if the API is disabled. Would
 that be considered a problem?

No.  At this point we should remove $wgEnableAPI and just treat it as
true unconditionally.  Other things already randomly depend on it, like
watchlist RSS feeds.



Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Anthony
On Sat, Jan 9, 2010 at 11:09 PM, Aryeh Gregor simetrical+wikil...@gmail.com wrote:

 On Fri, Jan 8, 2010 at 9:40 PM, Anthony wikim...@inbox.org wrote:
  Isn't that what the system immutable flag is for?

 No, that's for confusing the real roots while providing only a speed
 bump to an actual hacker.  Anyone with root access can always just
 unset the flag.  Or, failing that, dd if=/dev/zero of=/dev/sda works
 pretty well.


Depends on the machine's securelevel.


Re: [Wikitech-l] downloading wikipedia database dumps

2010-01-09 Thread Anthony
On Sat, Jan 9, 2010 at 11:40 PM, Aryeh Gregor simetrical+wikil...@gmail.com wrote:

 On Sat, Jan 9, 2010 at 11:26 PM, Anthony wikim...@inbox.org wrote:
  Depends on the machine's securelevel.

 Google informs me that securelevel is a BSD feature.  Wikimedia uses
 Linux and Solaris.


Well, Greg's comment wasn't specific to Linux or Solaris.  In any case, I
don't know about Solaris, but Linux seems to have some sort of
CAP_LINUX_IMMUTABLE and CAP_SYS_RAWIO.  I'm sure Solaris has something
similar.


 It doesn't hurt to have extra copies out there


Certainly not.