James Linden wrote:
Why do you need to access the live wikipedia for this?
Using categorylinks.sql and page.sql you should be able to fetch the
same data. Probably faster.
In my research, the answer to this question is two-fold:
A) Creating a local copy of Wikipedia (using MediaWiki and
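A minimal sketch of the local-dump approach James describes, assuming page.sql and categorylinks.sql have already been imported into a local MySQL database (the database name, credentials and category are placeholders):

  <?php
  // List main-namespace pages in one category from the imported dumps.
  $db = new PDO('mysql:host=localhost;dbname=enwiki', 'user', 'pass');
  $stmt = $db->prepare(
      'SELECT p.page_title
       FROM categorylinks cl
       JOIN page p ON p.page_id = cl.cl_from
       WHERE cl.cl_to = :cat AND p.page_namespace = 0'
  );
  $stmt->execute(array(':cat' => 'Living_people'));
  foreach ($stmt as $row) {
      echo $row['page_title'], "\n";
  }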
Lewis Cawte wrote:
YAY! Is it me, or was the staff page out of date? I thought he was
still CTO :/
He was removed from there in October 2009:
http://wikimediafoundation.org/w/index.php?title=Staff&diff=40480&oldid=40399
Roan Kattouw wrote:
My apologies for cross-posting, but it's my opinion the awesomeness of
this news makes up for it :)
The link to the blog post below is wrong, it should be
http://blog.wikimedia.org/blog/2011/03/07/brion-vibber-rejoins-wikimedia-foundation/
Roan Kattouw (Catrope)
caoyanjiao987 wrote:
Dear Mr/Miss:
Sorry to interrupt you, but there are two problems we would like your
help with. They have puzzled us for a long time.
We built a local wiki using MediaWiki by downloading the page
articles from http://download.wikimedia.org/enwiki. However, the data
ashish mittal wrote:
I got a local copy of MediaWiki and have installed it. I want to start
getting to grips with the architecture of MediaWiki.
I saw that MediaWiki has already started preparing for SoC 2011 [1]. I have
been through some documentation and this year’s project ideas. I am
Paul Houle wrote:
A peasant http://en.wikipedia.org/wiki/Peasant girl born in eastern
France
you note that "A peasant girl" == :Joan_of_arc and that a more specific
birthplace can be found in the infobox.
You will find that the infoboxes are the best article pieces to mine.
Happy-melon wrote:
Then let's get a new deadline in place. What's holding us back from
timetabling a 1.17wmf2? It strikes me that the features that will appear
in that are fundamentally different from the blockers on the 1.17.0 tarball,
and as you say, all the focus seems to be on that
With no pressing timelines, we are slacking off again.
Paul Houle wrote:
Hi, I've been thinking about the early history of Wikipedia and
about which sorts of topics got written early on. I'm wondering if
there is an easy way to find the first N wikipedia topics (where N is
say 100,000) in the order they were created.
For which
Brion Vibber wrote:
I would definitely recommend this -- it's been on the agenda for, well,
literally *years*, but always got swallowed up by time spent on other
things.
It should be pretty straightforward actually to aim a few of those
standalone wikis straight at the existing
Ryan Chan wrote:
I am not sure if this is the right place to ask.
I got the source of fss_prep_replace at
http://opensees.berkeley.edu/wiki/extensions/FastStringSearch/fss.c
But are there any Perl or Python implementations?
Thanks.
The original source is
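Lacking a ready Perl or Python port, the closest plain-PHP equivalent is strtr() with an array argument, which (like FSS) performs all literal replacements in a single pass over the subject; a minimal sketch with illustrative strings:

  $replacements = array(
      '{{SITENAME}}' => 'Wikipedia',
      '{{PAGENAME}}' => 'Example',
  );
  // strtr() prefers the longest match and never rescans replaced text.
  echo strtr('Welcome to {{SITENAME}}, this is {{PAGENAME}}.', $replacements);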
MZMcBride wrote:
Platonides wrote:
CORS does seem to be the way to go. I have drafted a new proposal below
which attempts to fix several bugs in our way of doing central login.
Two questions and a comment about this.
First, would this impede the ability to switch to an AJAX login interface
Ryan Lane wrote:
I don't think we should encourage people to run trunk in production.
We should encourage people to run release candidates in production,
and possibly betas for those that know the software *really* well. We
should likely encourage people to run trunk on their live testing
Jay Ashworth wrote:
should be possible != is a good idea.
Just sayin'
Cheers,
-- jra
Especially when we are not there yet ;)
Erik Moeller wrote:
The experience that Sage describes in bug 24471 of not being
consistently logged in arguably shouldn't occur in the first place per
1). Can we log the user into _all_ public Wikimedia wikis without
incurring an unacceptable performance penalty? If so, how?
We can do it if
Anthony Ventresque (Dr) wrote:
Hi,
I've found something strange in some files. The maximum page IDs are:
latest:
pages-articles.xml: 29189922
page.sql: 28707562
categorylinks.sql: 28705949
(15,684 categories and 135,521 articles are missing)
2011-01-15
Aryeh Gregor wrote:
I've used git a lot (I use it for everything I want to version) and
Mercurial a fair bit (the W3C seems to like it), and I *strongly*
prefer git. Major problems I have with Mercurial:
1) It doesn't support lots of useful functionality that's built into
git unless you
Daniel Friesen wrote:
However a key thing to note is that git submodules aren't anything
really special. Sure, they're integrated into git, but the only real
special feature about them is that you can target a specific commit
id... and heck, we don't even want that feature, that's the whole
Roan Kattouw wrote:
2011/2/14 Mark A. Hershberger mhershber...@wikimedia.org:
If we have a 1.18 branch that is, as Brion has noted (and supported), a
day or two behind trunk at most, is there a reason that we couldn't
branch wmfN from the rolling 1.18 branch? Or even just tag it when we
Amir E. Aharoni wrote:
2011/2/10 Daniel Friesen li...@nadir-seen-fire.com
Since we've already got some of these, would you mind getting me some
aggregate data on skin preferences?
In particular I'd like to know how many users across Wikimedia (en.wp,
maybe commons too might be enough for
Sage Ross wrote:
On Wed, Feb 9, 2011 at 9:49 PM, Howie Fung hf...@wikimedia.org wrote:
Just making sure I understand the data below. I'm assuming this means
there are 13,959,842 total accounts in the English Wikipedia?
Interesting because there are a total of 651,652 cumulative New
Anthony Ventresque (Dr) wrote:
Hi,
I am trying to build an offline version of the wikipedia categorisation tree.
As usual with projects on wikipedia, I've downloaded dumps (actually the
interesting one here is pages-articles.xml). And I found that none of the
dumps has the relation
Ilmari Karonen wrote:
I'm not particularly familiar with the parser, but I suspect that this
would require doing at least some parts of link parsing _during_ brace
expansion, rather than in a separate pass after it. Which is probably
not trivial, but probably not quite impossible either.
Daniel Friesen wrote:
I've been making some improvements to the skin system for a while now, so...
If you have a MediaWiki skin you've built, feel free to bring out its
source code. Currently most of the few custom skins that exist are
floating around the Internet, and they are various
MZMcBride wrote:
My intention wasn't to come across as dismissive. On the other hand, if
people begin new conversations without having read the old conversations, it
sets back progress dramatically. The opening post didn't make any mention of
the old bugs or their progress, so I was trying to
the existing database abstraction layer. I
use the method DatabaseBase::sourceFile which in turn calls
DatabaseBase::sourceStream. The problem is now, that some of the SQL INSERT
statements seem to be too long for this method.
Platonides pointed me to the source of the problem (thanks
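For context, a simplified sketch of the statement splitting such a reader performs (illustrative only, not the actual DatabaseBase code; the real method also handles comments and custom delimiters):

  function splitSqlStream($fp, $execute) {
      $sql = '';
      while (($line = fgets($fp)) !== false) {
          $sql .= $line;
          // A statement is complete when a line ends with a semicolon.
          if (preg_match('/;\s*$/', $line)) {
              $execute(trim($sql));
              $sql = '';
          }
      }
  }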
River Tarnell wrote:
What do you think it should be set to? Gmane retains the original
Reply-To header from the mail (which is set to the list address by
Mailman), but this means that anyone who replies to a Usenet article by
email will actually end up replying to the mailing list. If
Maciej Jaros wrote:
Can you set a different deploy date for different projects? E.g. 18.00
UTC for Poland. I will not be able to be there when hell breaks
loose, as I will be working, and I'm sure most of the Polish tech admins
will be too. Note that we were able to test Vector with current
Jay Ashworth wrote:
As long as the proxy supports IPv6, it can continue to talk to Apache
via IPv4; since WMF's internal network uses RFC1918 addresses, it
won't be affected by IPv4 exhaustion.
It might; how would a 6to4 NAT affect blocking?
If the XFF header is right, from MediaWiki's POV an
Trevor Parscal wrote:
There are 2 components to the JavaScriptDistiller library. One of them (the
ParseMaster class) is 100% in sync with the official distribution. The other
(the JavaScriptDistiller class) was originally based on the
JavaScriptPacker::_basicCompression function. That
Daniel Friesen wrote:
setHook (old style tag hooks), and setFunctionTagHook (new style function tag
hooks).
setHook and setFunctionTagHook both set tag style hooks. Originally we
just had setHook; it had one short argument list. Later on, that argument
list was changed to add $frame to
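For readers following along, a minimal sketch of an old-style tag hook using the later four-argument callback that includes $frame (the tag name and behaviour are illustrative):

  $wgHooks['ParserFirstCallInit'][] = 'wfExampleSetup';

  function wfExampleSetup(Parser $parser) {
      // Makes <example>...</example> in wikitext call wfExampleRender().
      $parser->setHook('example', 'wfExampleRender');
      return true;
  }

  function wfExampleRender($input, array $args, Parser $parser, PPFrame $frame) {
      return htmlspecialchars($input);
  }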
jida...@jidanni.org wrote:
OK, I'm pretty happy with
$ svn diff -r HEAD RELEASE-NOTES
However that gives a backwards view,
--- RELEASE-NOTES (revision 81238)
+++ RELEASE-NOTES (working copy)
I want
--- RELEASE-NOTES (working copy)
+++ RELEASE-NOTES (revision 81238)
But
$
Daniel Friesen wrote:
I think there's a little more difference between setHook and
setFunctionTagHook than you mention.
At the very least, extensionSubstitution outputs a function tag hook
directly, while putting a normal tag hook into the general strip state
and outputting a marker.
Daniel Friesen wrote:
An interesting idea just popped into my head, as a combination of my
explorations through the dom preprocessor and my attempt at deferring
editsection replacement till after parsing is done so that skins can
modify the markup used in an editsection link in a
Ilmari Karonen wrote:
Hmm... I don't really know what's going on inside PHP's PCRE
implementation, but you might want to try replacing that regexp with:
$parser->add( '/\\/\\*.*?\\*\\//s' );
The add()s are combined into a single big regex, so you can't set dot-all.
Doing it with (?s) may be
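For reference, (?s) can be scoped to one alternative with a group, so dot-all survives the patterns being concatenated into one big regex; a minimal sketch with illustrative patterns:

  $patterns = array(
      '(?s:\/\*.*?\*\/)', // C-style comment; dot matches newlines here only
      '\#[^\r\n]*',       // hash comment; no dot-all needed
  );
  $combined = '/' . implode('|', $patterns) . '/';
  echo preg_replace($combined, '', "a /* multi\nline */ b # tail\nc");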
MZMcBride wrote:
There was previous discussion about this, but more discussion is needed,
apparently. Is site-wide CSS the best way to do this? Would a toggle on the
file description page make more sense? User preference?
MZMcBride
It's easy to add such a preference as a gadget.
The default thread stack size for the Apache binary is 256 KB [1].
However, apr_thread_create() allows using a different stack size
(apr_threadattr_stacksize_set).
The value used is stored in the global variable ap_thread_stacksize,
which can be set with the ThreadStackSize directive in httpd.conf.
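For example, in a threaded MPM the directive looks like this (the value is illustrative and given in bytes):

  # httpd.conf: raise the per-thread stack to 512 KB
  ThreadStackSize 524288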
MZMcBride wrote:
jida...@jidanni.org wrote:
$ w3m -dump http://en.wikipedia.org/wiki/Flatworm |head
Flatworm
Simple typo in a template, fixed by OverlordQ:
http://en.wikipedia.org/w/index.php?diff=410094043&oldid=408536727
Valid HTML comments in wikitext do not appear in the page source
Jan Paul Posma wrote:
I completely missed the "You can edit the article below, by clicking on
blue elements in the page." line. I only found it after thinking that this
needs some kind of notice on how to edit, since it's not clear what to do
to change the page.
In usability testing I also found
Had LST used <section name=foo>...</section> to mark sections,
instead of <section begin=foo />content<section end=foo />, it
would be as easy as traversing the preprocessor output, which would
already have the sections split.
Alex Brollo wrote:
2011/1/25 Alex Brollo alex.bro...@gmail.com
Just to
Happy-melon wrote:
Eeeww
What's different between this and a {{#author: }} parser function apart
from the inability to access it from the wikitext? As noted, it's perfectly
possible for the data to be in a separate field on the upload form, either
by default or by per-wiki
Brion Vibber wrote:
On Mon, Jan 24, 2011 at 2:08 PM, Conrad Irwin conrad.ir...@gmail.com wrote:
Out of interest, do you know what percentage of emails in the database
don't validate under the new scheme?
That's actually a wise thing to check -- most fails will probably be
legitimately
Krinkle wrote:
Before I respond to the recent new ideas, concepts and suggestions,
I'd like to
explain a few things about the backend (at least the way it's currently
planned to be).
The mw_authors table contains unique authors by either a name or a
userid.
And optionally a custom
Aryeh Gregor wrote:
When I load their homepage, the formulas don't appear for about two
seconds of 100% CPU usage, on Firefox 4b9. And that's for two small
formulas. I'm not impressed. IMO, the correct way forward is to work
on native MathML support -- Gecko and WebKit both support it these
Jan Paul Posma wrote:
Hey all,
I've been working on the InlineEditor extension again, primarily working on a
new interface that doesn't use the different edit modes anymore, as the
usability testing showed that this was not the right approach. Luckily,
without a change in the underlying
Ashar Voultoiz wrote:
Hello,
I have made a mistake Saturday evening (around 18:30 UTC) which broke
some SUL-related functions. The issue was fixed by Apergos about 1 hour
later while I was out of home.
Don't worry more about it, Ashar.
I think the need for --wiki aawiki is fixed in
An internally handled parser function doesn't conflict with showing it
as a textbox.
We could for instance store it as a hidden page prefix.
Data stored in the text blob:
Author: [[Author:Bryan]]
License: GPL
---
{{Information| This is a nice picture I took }}
{{Deletion request|Copyvio from
Ryan Lane wrote:
For the past month or so I've been working on an extension to manage
OpenStack (Nova), for use on the Wikimedia Foundation's upcoming
virtualization cluster:
http://ryandlane.com/blog/2011/01/02/building-a-test-and-development-infrastructure-using-openstack/
I've gotten
Krinkle wrote:
So PHP would extract {{#author:4}} and {{#license:12}} from the
textblob when showing the editpage.
And show the remaining wikitext in the textarea and the author/
license as separate form elements.
And upon saving, generate {{#author:4}} {{#license:12}}\n again and
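A minimal sketch of that extract-and-reassemble step (the regex and helper name are illustrative, not actual MediaWiki code):

  function wfSplitImageMeta($blob) {
      $props = array();
      $wikitext = preg_replace_callback(
          '/\{\{#(author|license):(\d+)\}\}\s*/',
          function ($m) use (&$props) {
              $props[$m[1]] = (int)$m[2];
              return '';
          },
          $blob
      );
      return array($props, $wikitext);
  }

  list($props, $text) = wfSplitImageMeta("{{#author:4}} {{#license:12}}\nA nice picture");
  // $props = array('author' => 4, 'license' => 12); $text = 'A nice picture'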
Roan Kattouw wrote:
2011/1/21 Platonides platoni...@gmail.com:
Conceptually, revision table shouldn't link to file_props. file_props
should be linked with image instead.
Maybe, but the current image/oldimage schema resembling cur/old is
horrible. For instance, there is no way to uniquely
Roan Kattouw wrote:
2011/1/21 Platonides platoni...@gmail.com:
If we wanted to map it to a page/revision format, it seems quite
straightforward. I'm missing something, right?
You're missing that migrating a live site (esp. Commons, with 8
million image rows and ~750k oldimage rows) from
Roan Kattouw wrote:
2011/1/21 Platonides platoni...@gmail.com:
Do we agree on the target db schema?
That's the important point.
We haven't thought about it in detail. But it would be a fairly large
change and require changes throughout the software, as well as
possibly elsewhere
Bryan Tong Minh wrote:
Hello,
As you may have noticed, Roan, Krinkle and I have started to more
tightly integrate image licensing within MediaWiki. Our aim is to
create a system where it should be easy to obtain the basic copyright
information of an image in a machine readable format, as
masti wrote:
On 01/18/2011 12:30 AM, Lars Aronsson wrote:
On 01/17/2011 11:36 PM, masti wrote:
what is the reason and what it can bring to the community?
I tried to describe this. The task of finding out the
history of a part of an article is very time consuming
for long articles with a
Magnus Manske wrote:
On my usual test article [[Paris]], the slowest section (History)
parses in ~5 sec (Firefox 3.6.13, MacBook Pro). Chrome 10 takes 2
seconds. I believe these will already be acceptable to average users;
optimisation should improve that further.
Cheers,
Magnus
What
Jérémie Roquet wrote:
2011/1/11 Ilmari Karonen nos...@vyznev.net:
On 01/11/2011 11:59 AM, Jérémie Roquet wrote:
And there's a handy property to determine if you have new messages:
http://en.wikipedia.org/w/api.php?action=query&meta=userinfo&uiprop=hasmsg
Unfortunately (or fortunately),
Jeroen De Dauw wrote:
Hey,
My point is that code review is an extension, so AFAIK should use $eg, not
$wg.
Cheers
[[Coding conventions]] was wrong. See r70755
So we finally have a bugmeister. I'm sure he will find the Code
Review experience from the last few weeks very useful.
Ashar Voultoiz wrote:
Have fun Mark :-)
Yes, have fun :)
Ashar Voultoiz wrote:
On 26/12/10 01:49, Platonides wrote:
Earlier today, /a filled with binlogs on db27, which was the s3 & s7 master.
nagios had warned too early / nobody noticed. Slaves lagged, lots of
locks, and the wikis came to a halt.
Would it be possible to automatically move the binlogs from
Brion Vibber wrote:
On Wed, Jan 12, 2011 at 5:53 PM, Jeroen De Dauw jeroended...@gmail.com wrote:
Hey,
I think <display map> would be parsed as the tag hook <display> with a
parameter map="map". Would this prevent any use of the hook registered as
the tag <display map>...?
This code has been
Beebe, Mary J wrote:
GetLinksTo() seems to be returning no results even though there are image
pages with links to them. It seems to be a problem with the select statement
within the File class. I looked at the query, and if I run the query within
MySQL it works if I remove the extra
Ilmari Karonen wrote:
On 01/11/2011 11:59 AM, Jérémie Roquet wrote:
And there's a handy property to determine if you have new messages:
http://en.wikipedia.org/w/api.php?action=query&meta=userinfo&uiprop=hasmsg
Unfortunately (or fortunately), userinfo cannot be retrieved using jsonp [1].
[1]
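As a workaround, the flag can be fetched server-side instead of via JSONP; a minimal sketch (this shows the anonymous case, since session cookie handling is omitted):

  $json = file_get_contents(
      'http://en.wikipedia.org/w/api.php?action=query&meta=userinfo&uiprop=hasmsg&format=json'
  );
  $data = json_decode($json, true);
  // The 'messages' key is only present when the user has new messages.
  var_dump(isset($data['query']['userinfo']['messages']));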
Nadeesha Weerasinghe wrote:
Test plans for the following MediaWiki extensions are available at,
Cite :
http://www.mediawiki.org/wiki/Cite_Extension_Test_Plan
ConfirmEdit : http://www.mediawiki.org/wiki/ConfirmEdit_Test_Plan
Test scenarios which can be automated are
Well, I just had an issue with Validator, so I am not too sympathetic
towards your extension right now ;)
After grepping for setHook, it turns out that an extension like Maps,
which has zero matches, sets parser hooks indirectly via the Validator
extension. And not only that, but it also sets a hook for a
Chad wrote:
David Gerard wrote:
You're just saying that because pirates stole all the well-formed XML.
Real pirates use serialized PHP objects.
-Chad
Can Pirate Roberts be considered a Real Pirate or does account sharing
disqualify him?
Mark A. Hershberger wrote:
Perhaps this is where we can cooperate more with other Wiki writers to
develop a common Wiki markup. From my brief perusal of efforts, it
looks like there is a community of developers involved in
http://www.wikicreole.org/ but MediaWiki involvement is lacking
Tei wrote:
The last time I tried to search for something specific about PHP (how to
force garbage collection in old versions of PHP) there were very
few hits on Google, or none.
Maybe that was because PHP has only had garbage collection since 5.3 :)
For reference:
Alex Brollo wrote:
Thanks Roan, your statement sounds very alarming to me; I'll open a specific
thread about it on wikisource-l quoting this talk. I'm making every effort to
avoid server/history overload, since I know that I am using a free service
(I just fixed the {{loop}} template to optimize it
Billinghurst wrote:
Following on from a recent discussion here, I have been trying to
watch the WMF world from a secure login.
My first observation is that it is problematic, as so many links fail in the
interwiki space. I cannot work out why some links to other wikis work
fine and always
Aryeh Gregor wrote:
On Fri, Dec 31, 2010 at 6:25 PM, Krinkle krinklem...@gmail.com wrote:
I doubt the addition of overflow:hidden has this consequence since
that has been broadly tested
in all kinds of browsers and has been default on several wikis for a
long while.
IIRC, overflow: hidden
masti wrote:
On 12/31/2010 01:02 AM, Platonides wrote:
There's an extension to 'delete' pages by blanking. I find that approach
much more wiki.
if you like to be blocked for blanking ...
masti
If it were the right way of deleting, it would actually be the way
specified by the policy
Marc Riddell wrote:
Hello,
I have been a WP editor since 2006. I hope you can help me. For some reason
I no longer have Section Heading titles showing in the Articles. This is
true of all Headings including the one that carries the Article subject's
name. When there is a Table of Contents,
Anthony wrote:
I'll work on a list. Are these going to be hosted somewhere? It
would be nice for me to have an offsite backup. Then I'd feel more
comfortable tossing the bz2 files once I've recompressed them to xz.
I should mention that these were collected from all over the Internet,
Neil Kandalgaonkar wrote:
I have been thinking along these lines too, although in a more haphazard
way.
At some point, if we believe our community is our greatest asset, we
have to think of Wikipedia as infrastructure not only for creating high
quality articles, but also for generating
Ryan Kaldari wrote:
Actually, I would implement hot articles per WikiProject. So, for
example, you could see the 5 articles under WikiProject Arthropods that
had been edited the most in the past week. That should scale well. In
fact, I would probably redesign Wikipedia to be
Neil Kandalgaonkar wrote:
On 12/30/10 10:24 AM, Platonides wrote:
Neil Kandalgaonkar wrote:
At some point, if we believe our community is our greatest asset, we
have to think of Wikipedia as infrastructure not only for creating high
quality articles, but also for generating and sustaining
Aryeh Gregor wrote:
We could also try to work out ways to make adminship less important.
If protection, blocking, and deletion could be made less necessary and
important in day-to-day editing, that would reduce the importance of
admins and reduce the difference between established and new
Alex wrote:
One thing that I think could help, at least on the English Wikipedia,
would be to further restrict new article creation. Right now, any
registered user can create a new article, and according to some
statistics I gathered a few months ago[1], almost 25% of new users make
their
masti wrote:
That is true - "We can't do away with Wikitext" has always been the
intermediate conclusion (in between "My god, we need to do something
about this problem" and "This is hopeless, we give up again").
Between wikitext and WYSIWYG there is a simple solution of colourizing text,
like for
Billinghurst wrote:
Is it this parsing issue or a similar rendering issue that is also the
cause of the book tool not working on transcluded pages at Wikisource?
As per https://bugzilla.wikimedia.org/show_bug.cgi?id=21653
Regards, Andrew
No. It's a problem with the collection
Earlier today, /a filled with binlogs on db27, which was the s3 & s7 master.
nagios had warned too early / nobody noticed. Slaves lagged, lots of
locks, and the wikis came to a halt.
Revisions between 6:50 and 8:20 pm UTC were lost (although they can be
manually reimported from db27).
The new s3 and s7
Domas Mituzas wrote:
It looks interesting. There are some places where MediaWiki could take
that shortcut if available.
It wouldn't be a shortcut if you had to establish another database connection
besides the existing one.
I was assuming usage of pfsockopen(), of course.
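For the curious, a rough sketch of what such a persistent direct-read connection could look like with pfsockopen(); the port and tab-separated line protocol follow HandlerSocket's documented format, but treat the details as assumptions:

  $fp = pfsockopen('127.0.0.1', 9998, $errno, $errstr, 1.0);
  if (!$fp) {
      die("connect failed: $errstr ($errno)\n");
  }
  // Open index 0 on enwiki.page via the PRIMARY key, returning page_title.
  fwrite($fp, "P\t0\tenwiki\tpage\tPRIMARY\tpage_title\n");
  fgets($fp); // response to the open request
  // Fetch one row where the key equals 12345.
  fwrite($fp, "0\t=\t1\t12345\n");
  echo fgets($fp); // tab-separated result row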
Nikola Smolenski wrote:
I have recently encountered this text in which the author claims very
high MySQL speedups for simple queries (7.5 times faster than MySQL,
twice as fast as memcached) by reading the data directly from InnoDB
where possible (MySQL is still used for writing and for
Soxred93 wrote:
Before going into too much detail on the thread, consider what you actually
need out of a fancy directory iterator. Offhand, I really can't think of
many places where that even *happens* in MediaWiki... maybe when purging
thumbnails?
I count 10 instances of opendir() exactly
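For comparison, the SPL version of a typical thumbnail-purge loop (the path is a placeholder):

  foreach (new DirectoryIterator('/var/www/images/thumb/a/ab/Example.jpg') as $file) {
      if ($file->isFile()) {
          unlink($file->getPathname()); // remove one stale thumbnail
      }
  }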
Zak Greant (Foo Associates) wrote:
Greetings All,
I've been editing http://www.mediawiki.org/wiki/Unit_Testing (and am
happy for feedback and suggestions.)
Hello Zak,
Looks good overall, but there seems to be a bug with the
SeleniumFramework line :)
While editing, I took a look at the
Billinghurst wrote:
I am guessing that the search engine does not transclude pages before it
undertakes its
indexing function. Is someone able to confirm that for me?
Is there any fix that anyone can suggest, or does anyone know where such
an issue can be raised
beyond Bugzilla? Would a fix
Diederik van Liere wrote:
To continue the discussion on how to improve the performance, would it be
possible to distribute the dumps as a 7z / gz / other format archive
containing multiple smaller XML files. It's quite tricky to split a very
large XML file into smaller valid XML files, and if
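An alternative to splitting is streaming the single large file, which keeps memory flat regardless of dump size; a minimal sketch using XMLReader over PHP's bzip2 stream wrapper (the filename is an example):

  $r = new XMLReader();
  $r->open('compress.bzip2://enwiki-latest-pages-articles.xml.bz2');
  while ($r->read()) {
      if ($r->nodeType === XMLReader::ELEMENT && $r->localName === 'title') {
          echo $r->readString(), "\n"; // one page title at a time
      }
  }
  $r->close();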
Diederik van Liere wrote:
Which dump file is offered as smaller sub-files?
http://download.wikimedia.org/enwiki/20100904/
Also see http://wikitech.wikimedia.org/view/Dumps/Parallelization
Gabriel Weinberg wrote:
md5sum doesn't match. I get e74170eaaedc65e02249e1a54b1087cb (as
opposed to 7a4805475bba1599933b3acd5150bd4d
on http://download.wikimedia.org/enwiki/20101011/enwiki-20101011-md5sums.txt
).
I've downloaded it twice now and have gotten the same md5sum. Can anyone
else
Roan Kattouw wrote:
I'm not sure how hard this would be to achieve (you'd have to
correlate blob parts with revisions manually using the text table;
there might be gaps for deleted revs because ES is append-only) or how
much it would help (my impression is ES is one of the slower parts of
our
Monica shu wrote:
Hi emijrp,
Here is my dump's info:
*enwiki-latest-pages-articles.xml.bz2*
*a3a5ee062abc16a79d111273d4a1a99a*
Thanks~
I can't find that md5 for any dump.
Here are the md5s of the latest enwiki pages-articles:
a9506e8aedd3b830e059b7c8a3c0dbcd
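When checksums disagree, verifying a download locally is a one-liner; a minimal sketch (the expected value is the one quoted above):

  $expected = 'a9506e8aedd3b830e059b7c8a3c0dbcd';
  $actual = md5_file('enwiki-latest-pages-articles.xml.bz2');
  echo $actual === $expected ? "OK\n" : "mismatch: got $actual\n";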
Ben Schwartz wrote:
Hi all,
I'd like to make it easier for novice users to create Sign Language
definition pages with videos for en.wiktionary's new Sign gloss:
namespace. It's already possible to create such pages, but it requires a
large number of steps, which can deter potential
Ilmari Karonen wrote:
Technically, one could already turn a style sheet into an extension by
bundling it with a short PHP file, but that's still unnecessarily
complicated. It would be better if we could just tell wiki owners to
download the CSS file and drop it into the right (common or
Daniel Friesen wrote:
PHP -> XSL doesn't quite feel like much of an improvement in terms of
cutting down on the verbose redundant code boilerplate required to
insert something.
ie: <xsl:value-of select="title"/> doesn't look much better than <?php
$this->text(title) ?>, as opposed to
Bryan Tong Minh wrote:
On Tue, Dec 7, 2010 at 4:26 PM, Platonides platoni...@gmail.com wrote:
Daniel Friesen wrote:
PHP -> XSL doesn't quite feel like much of an improvement in terms of
cutting down on the verbose redundant code boilerplate required to
insert something.
ie: <xsl:value
sure though that this would be good for your sanity. I
wouldn't discard the idea immediately, insane as it may seem.
^_^ I was drafting a response to Platonides' comment, ie: an example of a
chunk of MonoBook code using a WikiText style template language... in
order to demonstrate the insanity
Brion Vibber wrote:
Offhand suggestion: can we pack/compress the language files in a way that
keeps them smaller on the server but leaves them usable?
-- brion
We can provide them gzipped and require them with compress.zlib://
prepended to the filename.
That will work magically™ as far as
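That is, PHP stream wrappers also work for require/include, and compress.zlib:// counts as a local stream, so no allow_url_include is needed; a minimal sketch (the path is illustrative):

  // Loads a gzipped message file, decompressing transparently.
  require 'compress.zlib://' . dirname(__FILE__) . '/languages/messages/MessagesEn.php.gz';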
Niklas Laxström wrote:
This suggestion seems to come up from time to time. I feel it is
unrealistic. First of all we can't remove them from svn, since they
have to be there. We could remove them from the tarballs, but please,
last time I checked the tarball was hardly over 12 megs. Even with
Niklas Laxström wrote:
On 6 December 2010 17:02, Platonides wrote:
Niklas Laxström wrote:
A few days ago the issue came up where I was talking with an end user
who was complaining about MediaWiki being too large (on the server, not
in the tarball) compared to other apps like WordPress.
I