Hello,
In the list=tag API query module, the tag source type named "extension" is
being renamed to "software" [1]. As of MediaWiki 1.42, "extension" still
appears alongside "software" in the tag source lists but is deprecated [2].
In future versions of MediaWiki, the "extension" entries will no longer appear.
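For example, a query like (a sketch; output shape varies by version):
api.php?action=query&list=tags&tgprop=source&format=json
will list each tag's sources, with the deprecated "extension" value still
appearing next to "software" in 1.42.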
Well, changing something in core and breaking a production extension doing
something silly can't be waved away with "it's the extension's problem" ;)
I mostly use "final" to enforce a delegation pattern, where only certain
key bits of functionality should be filled in by subclasses. It mostly
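A minimal sketch of the pattern (hypothetical names):

abstract class FooHandler {
	// the public entry point is final, so the overall flow can't be overridden
	final public function run( $input ) {
		$input = $this->normalize( $input ); // shared logic stays here
		return $this->doRun( $input ); // the one bit subclasses fill in
	}
	// subclasses only supply this step
	abstract protected function doRun( $input );
	private function normalize( $input ) {
		return trim( $input );
	}
}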
The modifySimpleRelayEvent() method was narrowly intended (and only usable)
for use with a WANObjectCache that uses EventRelayer. The latter dependency
has since been removed from WANObjectCache. It was part of an experimental
approach for relaying object cache purges across WMF datacenters, which
You can add me to the patch. I might be able to get around to looking at
it this week.
On Sat, Oct 15, 2016 at 11:47 AM, Tony Thomas <01tonytho...@gmail.com>
wrote:
> Ping again on this one, as we need review on
> https://gerrit.wikimedia.org/r/#/c/304692, which is 2/3 of the shift to
>
As of 950cf6016c, the mediawiki/core repo was updated to use DB_REPLICA
instead of DB_SLAVE, with the old constant left as an alias. This is part
of a string of commits that cleaned up the mixed use of "replica" and
"slave" by sticking to the former. Extensions have not been mass
converted. Please
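For extension authors the conversion is usually mechanical, e.g.:

$dbr = wfGetDB( DB_REPLICA ); // formerly wfGetDB( DB_SLAVE )

DB_SLAVE keeps working as an alias in the meantime.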
Have you tried setting something like:
$wgJobTypeConf['default']['claimTTL'] = 3600;
Jobs are not retried by default, only archived and deleted... maybe that
default should change.
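For example (a sketch; 'maxTries' is the retry limit parameter, if memory
serves):

$wgJobTypeConf['default'] = array(
	'class' => 'JobQueueDB',
	'claimTTL' => 3600, // recycle jobs claimed more than an hour ago
	'maxTries' => 3 // drop a job after three failed attempts
);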
Maybe pages using some of the properties at
https://commons.wikimedia.org/wiki/Commons:Wikidata have links that are
tracked in the parser output from rendering Wikidata pages. If so, then
they'd go in the imagelinks table and globalimagelinks too. This could be
useful for checking usage before
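A quick way to poke at that (a sketch; 'Example.jpg' is just a placeholder):

$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select( 'imagelinks', 'il_from',
	array( 'il_to' => 'Example.jpg' ) );
// each il_from is the page ID of a page embedding the file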
A few things to note:
* APC is not LRU, it just detects expired items on get() and clears
everything when full (https://groups.drupal.org/node/397938)
* APC has a low max keys config on production, so using key-per-item would
require that to change
* Implementing LRU groups for BagOStuff would
It seems worth looking into PEAR Mail, in my opinion. There's something to be
said for a certain minimalism in libraries.
I suppose that naming scheme is reasonable.
$contentsRevId sounds awkward, maybe $sourceRevId or $originRevId is better.
FlaggedRevs uses the NewRevisionFromEditComplete hook. Grepping for that, I
see reasonable values set in the callers at a quick glance. This covers
various null edit scenarios too. The $baseRevId in WikiPage is just one of
the cases of that value passed to the hook, and is fine there (being mostly
Yes it was for auto-reviewing new revisions. New revisions are seen as a
combination of (base revision, changes). If the base revision was reviewed
and the user is trusted, then so is the new revision. MW core had the
obvious cases of rollback and null edits, which are (base revision, no
changes).
I'd suggest a revert from the branch, yes.
What if someone -1's due to something in the summary? It's odd that fixing it
with a new commit would still show -1 on the reviewer's dashboard. I'm fine
with it for automatic rebases though.
I'd agree with the general statement on inheritance (which can have weird
coupling and diamonds of doom) and hooks (which can lead to hard-to-specify
behavior and tangle). I'm not sure the main problem with SpecialPage has
been articulated, though.
The doEdit() call needs to parse the text and reuses $wgParser, which is
already in use, so it probably breaks its state. Maybe you could use a
DeferredUpdate to actually do the edits, or do them via an api.php request,
or stash $wgParser, replace it with a new one before doing the edit, and then
swap it back.
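The stash idea would look roughly like this (a sketch; $wikiPage, $text, and
$summary are placeholders, and error handling is omitted):

global $wgParser;
$oldParser = $wgParser;
$wgParser = new Parser(); // fresh instance so the in-progress parse is untouched
$wikiPage->doEdit( $text, $summary ); // this edit's parse uses the new instance
$wgParser = $oldParser; // restore the original afterwards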
They often give the same results on smallish wikis, but I wouldn't carry that
over to test wikis unless lots of content, user, logging, and other table
data was somehow imported. For example, a tiny user table might make mysql
start INNER JOINs with that table in queries where it would never do
I'm not dead set against it, but there are some problems I see with it:
a) It's not well maintained or documented, as people don't really consider
it when changing anything. A concerned volunteer could probably manage this.
For example, dealing with HTTPS could be documented better, and the code
As the last person to maintain that code, I tend to agree with this.
Adding a method to DeferredUpdates to do that would be nice, assuming it
would batch the jobs by type when pushing them.
Until what? A timestamp? That would be more complex and prone to over/under
guessing the right delay (you don't know how long it will take to commit). I
think deferred updates are much simpler as they will just happen when the
request is nearly done, however long that takes.
Speaking of the job queue, deferred updates are useful for adding jobs that
depend on data that was not yet committed. This can easily be an issue since
we normally wrap web requests in one DB transaction and commit at the very
end. If you push() some jobs before the commit, and they get run before it
finishes, they won't see the uncommitted data.
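Something like this works (a sketch; addCallableUpdate() only exists in newer
versions):

DeferredUpdates::addCallableUpdate( function () use ( $jobs ) {
	JobQueueGroup::singleton()->push( $jobs );
} );

That way the push only happens once the main transaction has committed.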
Closure changes and traits would indeed be really nice.
Short array syntax is a plus too.
I generally agree with 2-8, and 10. I think points 2 and 10 are pretty
subjective and must be applied very pragmatically.
I agree it would be nice if our repos (or git-review setup steps) had sane
defaults instead of ones almost everyone will want to change to avoid being
annoyed.
Sounds fine by me.
To use redis as a cache you can have something like:
// requires the phpredis extension for PHP
$wgObjectCaches['pecl-redis'] = array(
	'class' => 'RedisBagOStuff',
	'servers' => array( '127.0.0.1:6379' ),
);
$wgMainCacheType = 'pecl-redis';
This would also require that the redis server be running, of course.
Version 2.2.2 of the extension works for me. I downloaded the source and
compiled it.
The redis server itself will need to be 2.6 or higher for the job queue.
Looking around, I forgot to mention that JobQueueRedis was actually removed
from 1.21 (though it's in master and will be in 1.22).
Note that if you already use memcached for the main cache, there isn't really
any reason to switch to redis unless you need replication or persistence.
Anyway, to use it for sessions, if you had $wgSessionCacheType explicitly
set to something, then you'd need to change that too (like to 'pecl-redis').
Indeed, the queue cannot use memcached. Redis will trivialize the time spent
on the actual queue operations, which could help if that is a bottleneck for
job runners. If the actual jobs themselves are slow, of course it won't help
too much.
Have you already tried setting the job run rate to 0 and
Sounds like a site config issue. All wikis that have NS_TEMPLATE in
$wgFlaggedRevsNamespaces should also have NS_MODULE in there.
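Something like (assuming Scribunto's default namespace ID of 828 for
NS_MODULE):

$wgFlaggedRevsNamespaces = array( NS_MAIN, NS_TEMPLATE, 828 ); // 828 = NS_MODULE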
Sounds like https://gerrit.wikimedia.org/r/#/c/41932/
Yes, a654a6e79adc8f4730bb69f79e0b6a960d7d3cbe should be fixed. It should add
the nullLockManager back.
nullLockManager is defined in Setup.php. The code:
LockManagerGroup::singleton()->get( 'nullLockManager' );
... works fine in eval.php and is used in production.
+1
I think putting everything into Q3 looks like a good way to proceed. There
might be an interesting division of labor in getting these things done
(Parsoid job handling, Cite extension rewrite, API batching). I'd be willing
to help in areas where I'd be useful. I think this is ambitious, but the
I'd strongly suggest considering this kind of approach.
Do you want the files or not?
The first post sounds like you don't; in that case you'd need to truncate
the image/oldimage/archive tables. This will remove all registration of the
files. Clearing memcached (or whatever cache you use) might be needed too.
You can copy the files over with
Some notes (copied from private email):
* It only creates the lock file the first time.
* The functions with different bits are not just the same thing with more
bits. Trying to abstract more just made it more confusing.
* The point is to also have something with better properties than uniqid.
I share some blame for the existence of this thread. I spotted the git author
issue after that commit was merged and was too lazy to revert and fix it. I
personally tend to dislike reverting stuff way more than I should (like a
prior schema change that was merged without the site being updated). I
RDBStore is shelved as a reference for now. The idea was to partition SQL
tables across multiple DB servers using a consistent hash of some column.
There would no longer be the convenience of autoincrement columns, so UIDs
are a way to make unique IDs without a central table or counter.
In some
I'm seconding that recommendation to be clear. More specifically, I'd suggest
that the AFT classes have two new protected methods:
* getSlaveDB() - wrapper for wfGetLBFactory()->getExternalLB(
$wgArticleFeedBackCluster )->getConnection( DB_SLAVE, array(), $wikiId )
* getMasterDB() - wrapper for
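Roughly like this (a sketch; $this->getWikiId() is a stand-in for however
AFT tracks the target wiki):

protected function getSlaveDB() {
	global $wgArticleFeedBackCluster;
	return wfGetLBFactory()->getExternalLB( $wgArticleFeedBackCluster )
		->getConnection( DB_SLAVE, array(), $this->getWikiId() );
}

protected function getMasterDB() {
	global $wgArticleFeedBackCluster;
	return wfGetLBFactory()->getExternalLB( $wgArticleFeedBackCluster )
		->getConnection( DB_MASTER, array(), $this->getWikiId() );
}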
In order to make commits that change the schema as discoverable as possible,
I'd propose the following guidelines:
* Update RELEASE-NOTES to briefly mention the change (this is not always
necessary for follow-up changes within the same release, since people
upgrading don't care about intra-release
So you have 2+ quality levels and sometimes want "quality" versions to be the
default over "checked" ones? I guess the closest thing to that would be to
restrict who can review/autoreview certain pages via Special:Stabilization.
I agree that begin()/commit() should do what they say (which they do now).
I'd like to have another construct that behaves the way those two used to
(back when there were immediate* functions). Callers would then have code
like:
$db->enterTransaction();
... atomic stuff ...
$db->exitTransaction();
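For nesting, a counter would do it, e.g. (a hypothetical sketch, not how the
Database class works today; begin()/commit() are the existing methods):

class Database {
	protected $trxLevel = 0;
	public function enterTransaction() {
		if ( $this->trxLevel++ == 0 ) {
			$this->begin(); // only the outermost call opens a transaction
		}
	}
	public function exitTransaction() {
		if ( --$this->trxLevel == 0 ) {
			$this->commit(); // only the outermost call commits
		}
	}
}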
When a page reaches X level of quality, that version becomes the default.
When creating all the interface messages for editing, viewing, and history,
this is definitely not easy to get right and keep simple for new users.
Anyway, to be clear, you can make the latest version the default for all
I'd have to see what you are doing to see if rollback is really needed.
That text should be removed from the help page. Only the current or the
latest reviewed version can be the default. You cannot have pages use the
latest quality version as the default version. This would create a very
confusing interface that takes a mouthful to explain.
Also, it's hard enough
I was just looking through those classes again.
I think ORMRow is generally OK, since it's mostly a simple CRUD wrapper to
deal with some of the busy-work of making data access objects. I don't
really get the summary (updateSummaries/inSummaryMode) stuff though. I
guess the callers/subclasses do
The counter idea kind of reminds me of what I have in
https://gerrit.wikimedia.org/r/#/c/16696/ .
I think the whole implicit commit issue is definitely pretty annoying, and I
wish there was a reasonable way to address it without breaking backwards
compatibility. rollback() is the hard case to deal
It would be uber sweet someday to kill the ocaml dependency.
By 99%, I meant that 99% of users don't care (much or at all) about the things
these crazy templates tend to offer; they just want to read the article. I
remember Domas complaining about this last hackathon when...fixing...ocwiki.
Any article that uses slow templates that take forever to render is
+1. SERIOUSLY. This always comes to mind when these issues come up. Editors
and readers (on cache miss) shouldn't have to suffer through this. We
shouldn't forget the 99% :)
My inclination is to get rid of the feature for pretty much the reason you
mentioned.
In any case, is the point to avoid notices getting added to HTML for wikis
with certain extensions? I don't know why a production site would be set to
spew notices. Either error log settings or
So as I already mentioned, this would force developers to turn off the
whole thing.
When I mentioned $wgDevelopmentWarnings, I was talking about production,
which is why I had different paragraphs. The last one was about
development.
Anyway, one can bump the version in a wfDeprecated()
I'll still be doing some work on FileBackend, like fixing jenkins, cleaning up
the streamFile() function, copying the swift backend to /trunk, and tying up
a few loose ends and doing fixes next week. I think the heavy stuff is pretty
much out of the way.
I'd agree with reducing the state within Title and narrowing down its
purpose to title sanitization/validation and such.
I've stated in CR that I don't think light green has any real cultural
issue. It shouldn't be ruled out on those grounds. Red, on the other hand,
has stronger connotations. Part of it comes from the fact that GUIs very
often have certain color standards, e.g.:
Blue: notice, fyi, please
I forgot to mention a useful link, http://www.mediawiki.org/wiki/FileBackend.
Currently, just Tim and I have been using that page to record thoughts and
design decisions.
of bit
rot if they are (maintenance would still be needed).
I'm starting to finish the initial coding for the FileBackend branch (see
https://svn.wikimedia.org/viewvc/mediawiki/branches/FileBackend). I still
have to get to thumb.php and img_auth.php though. Simple testing of uploads,
re-uploads, moves, deletes, restores, and such is working on my local
It was intended to replace selective undeletion as part of a large deletion
schema overhaul, which fell through, so there wasn't as much motivation to
get it in use. Also, I recall some enwiki admins saying that it still
needed more granularity for some hairy merge scenarios.
Some use cases:
* Dump validation (per Ariel)
* Revert detection
* Collapsing reversions in history to hide clutter
* Replacing/augmenting baseRevId hacks in FlaggedRevs
Domas Mituzas wrote:
* When reverting, do a select count(*) where md5=? and then do something
more advanced when
at 2:18 PM, Aaron Schulz aschulz4...@gmail.com wrote:
Yay for log_params. I was thinking JSON would be appropriate here, so I'm
glad to see that.
Even though the data in those fields is small enough, can
serialize()/unserialize() be used instead? It's faster and doesn't require
the mess
Yay for log_params. I was thinking JSON would be appropriate here, so I'm
glad to see that.
I'll toss these revs onto my review queue.
Bugzilla from niklas.laxst...@gmail.com wrote:
I just committed many changes to the logging code. There is more to come,
but I think this is a suitable place to
It would be nice to have standard functions that support storing associative
arrays in log_params rather than fragile ordered lists. I ended up hacking
up a quick function in FlaggedRevs for this. Newer log types could make use
of this, and existing ones could too if they had some b/c code.
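Roughly what I hacked up (a sketch; the function names here are made up):

function wfLogParamsEncode( array $params ) {
	return FormatJson::encode( $params ); // JSON instead of a fragile ordered list
}

function wfLogParamsDecode( $blob ) {
	return FormatJson::decode( $blob, true ); // back to an associative array
}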
Also, with some coordination, branches could be merged at a time when:
* Reviewers will be looking at it and testing it *right* after the merge (in
addition to any branch review)
* The author is around to make fixes as it gets final review
People should try to be available after any large changes
Congrats! Special Ops sounds cool by the way ;)
CT Woo wrote:
All,
Please join me to welcome Jeff Green to Wikimedia Foundation.
Jeff is taking up the Special Ops position in the Tech Ops department where
one of his responsibilities is to keep our Fundraising infrastructure
secured,
I second the idea of the static Linker class. It's far better than the
subclass system. Skin modification of links should focus on CSS anyway,
rather than trying to overload link generating code.
Daniel Friesen-4 wrote:
On 11-04-04 02:40 PM, Platonides wrote:
I like it. Specially the Linker
Two things:
(i) I'd really hope that subclassing would be very rare here. I don't think
this will be much of an issue though.
(ii) Also, it would be nice if developers could all have HipHop running on
their test wikis, so that code that's broken on HipHop isn't committed in
ignorance. The only
It would be nice if FlaggedRevs/archives/patch-fi_img_timestamp.sql was run
on the wikis created before the patch.
Tim Starling-2 wrote:
If there are any schema changes you want done on Wikimedia in the next
batch, let me know. I have the following patch files queued up, to be
run in the
That's much better than using editbin :)
I added the following:
<IfModule mpm_winnt_module>
ThreadStackSize 8388608
</IfModule>
The stack overflow issue is gone now. This should be in the MW.org apache
config docs and elsewhere (especially for 1.17, since nothing beforehand was
running into this
In JavaScriptDistiller, inside of the createParser() function, we have:
$parser->add( '/\\/\\*(.|[\\r\\n])*?\\*\\//' );
It took me hours to track down that this was causing Apache 2.1.11 to crash
on nearly any page view on my test wiki. This happened when a large JS
bundle is loaded, such as:
+1 to this. Let's focus more on the changes and ideas and less on the authors.
IMO, when I keep seeing things like "he did"/"he changed" and "[so and so]
made it so that" instead of things like "the change made it", it throws up red
flags. Just discuss the change, and mention the person minimally (to help
Is this your triumphant return? :)
Ashar Voultoiz-4 wrote:
Hello,
Just a quick message to reintroduce myself to people who might be
wondering who is this new committer.
I am from France and discovered Wikipedia in 2002. Getting interested
in bug fixing, I have eventually been
Clever :)
-Aaron Schulz
Date: Tue, 4 Aug 2009 09:48:28 +0100
From: dger...@gmail.com
To: wikitech-l@lists.wikimedia.org
Subject: [Wikitech-l] Cruel and unusual abuse of DNS to make a command-line
Wikipedia search
http://lifehacker.com/5329014/search-wikipedia-from-the-command-line
Some issues were waiting on the big scap, which has since happened thanks to Tim
(along with the syncing of fixes).
-Aaron Schulz
Date: Sat, 20 Jun 2009 08:47:03 +1000
From: thepmacco...@gmail.com
To: wikitech-l@lists.wikimedia.org
CC: foundatio...@lists.wikimedia.org
Subject: Re
Also note that the 'patrolled revisions' aspect is particularly messy to
implement. Some changes have been made but more are still needed in the
extension. I'm tempted to split off patrolling into its own code and
tables.
-Aaron