[Wikitech-l] Action API: deprecation of "extension" tag source in ApiQueryTags

2024-01-17 Thread Aaron Schulz
Hello,
In the list=tags API query module, the tag source type named "extension" is
being renamed to "software" [1]. As of MediaWiki 1.42, "extension" still
appears alongside "software" in the tag source lists but is deprecated [2].
In future versions of MediaWiki, the "extension" entries will no longer
appear.

The use of "extension" is misleading since it does not exclusively refer to
tags defined by MediaWiki extensions (via the onListDefinedTags hook), but
also those defined in MediaWiki core (via
ChangeTagsStore::DEFINED_SOFTWARE_TAGS). The distinction isn't really
useful to clients anyway.
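
For illustration, a minimal sketch of an extension-defined tag via that
hook, assuming the MediaWiki 1.35+ hook interface style (the class and tag
name here are made up; extension.json wiring is omitted):

class MyTagHooks implements \MediaWiki\ChangeTags\Hook\ListDefinedTagsHook {
    public function onListDefinedTags( &$tags ) {
        // Tags registered here are reported by list=tags with the
        // "software" source (and, until removal, "extension" too).
        $tags[] = 'my-extension-tag';
    }
}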

[1] https://phabricator.wikimedia.org/T247552
[2]
https://en.wikipedia.org/w/api.php?action=query&list=tags&tgprop=source&tglimit=max
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: [Wikitech-l] Declaring methods final in classes

2019-08-28 Thread Aaron Schulz
Well, changing something in core and breaking a production extension doing
something silly can't be waved away with "it's the extension's problem" ;)

I mostly use "final" to enforce a delegation pattern, where only certain
key bits of functionality should be filled in by subclasses. It mostly
comes out of years and years of bad experience with core and extension code
subclassing things in annoying ways that inevitably have to be cleaned up
as a side-task to getting some other feature/refactoring patch to pass CI.
It's a clear way to both document and enforce subclass implementation
points. The only reason not to use it is for tests, and I have removed
"final" before (placed in BagOStuff) when I couldn't come up with another
workaround. Interfaces will not work well for protected methods that need
to be overridden and called by an abstract base class.
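
To illustrate the pattern (a made-up sketch, not actual core code):

abstract class CacheStore {
    // Final entry point: subclasses cannot change the overall flow...
    final public function set( $key, $value ) {
        $this->validateKey( $key );
        $this->doSet( $key, $value );
    }

    // ...they only fill in the designated implementation point.
    abstract protected function doSet( $key, $value );

    protected function validateKey( $key ) {
        // shared validation checks
    }
}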

If no PHP/PHPUnit fix is coming soon, then, as a practical matter, I'm sure
some alternative documentation and naming convention could be standardized
so that people actually follow it and don't create annoying and fragile
dependencies.

On Wed, Aug 28, 2019 at 12:30 AM Aryeh Gregor  wrote:

> On Tue, Aug 27, 2019 at 11:53 PM Daimona  wrote:
> > Personally, I don't like these limitations in PHPUnit and the like. IMHO,
> > they should never be a reason for changing good code.
>
> I don't like these limitations either, but testing is an integral part
> of development, and we need to code in a way that facilitates testing.
> In each case we need to make a cost-benefit analysis about what's best
> for the project. The question is whether there's any benefit to using
> final that outweighs the cost to testability.
>
> > And sometimes, methods have to be final.
>
> Why do methods ever "have" to be final? Someone who installs an
> extension accepts that they get whatever behavior changes the
> extension makes. If the extension does something we don't want it to,
> it will either work or not, but that's the extension's problem.
>
> This is exactly the question: why do we ever want methods to be final?
> Is there actually any benefit that outweighs the problems for testing?
>
> > Anyway, some time ago I came across [1], which allows mocking final
> methods
> > and classes. IIRC, it does that by removing the `final` keywords from the
> > tokenized PHP code. I don't know how well it works, nor if it could
> degrade
> > performance, but if it doesn't we could bring it in via composer.
>
> That would be a nice solution if it works well. If someone wants to
> volunteer to try to get it working, then we won't need to have this
> discussion. But until someone does, the question remains.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
-Aaron
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] BagOStuff::modifySimpleRelayEvent removed from MediaWiki 1.33

2019-03-08 Thread Aaron Schulz
The modifySimpleRelayEvent() method was narrowly intended (and only usable)
for use with a WANObjectCache that uses EventRelayer. The latter dependency
has since been removed from WANObjectCache. It was part of an experimental
approach for relaying object cache purges across WMF datacenters, which
was abandoned in favor of mcrouter/dynomite.

-- 
-Aaron
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Need review on ContentHandler implementation, Newsletter extension

2016-10-17 Thread Aaron Schulz
You can add me to the patch. I might be able to get around to looking at
it this week.

On Sat, Oct 15, 2016 at 11:47 AM, Tony Thomas <01tonytho...@gmail.com>
wrote:

> Ping again on this one, as we need review on
> https://gerrit.wikimedia.org/r/#/c/304692, which is 2/3 of the shift to
> ContentHandler patchsets.
>
> Thanks to Legoktm for reviewing the first one, though. It's been a while
> since the changes were posted (Aug 14), and the super-large
> https://gerrit.wikimedia.org/r/#/c/295670/ was abandoned and split into
> three:
>
> [x] https://gerrit.wikimedia.org/r/#/c/303984
> [ ] https://gerrit.wikimedia.org/r/#/c/304692/ and
> [ ] https://gerrit.wikimedia.org/r/#/c/309849/
>
> The tracking phab task is https://phabricator.wikimedia.org/T138462, which
> was a GSoC 2015 project (it's been almost 1 year)!
>
> Thanks,
> Tony Thomas
> Home | Blog | ThinkFOSS
>
>
> On Sat, Jul 16, 2016 at 6:08 PM, Tony Thomas <01tonytho...@gmail.com>
> wrote:
>
> > Hello all,
> >
> > We have a patch https://gerrit.wikimedia.org/r/#/c/295670/, which had its
> > last review almost 28 days back, and is a major blocker for the deployment
> > of the Newsletter extension in production. The shift is tracked at T138462[1].
> >
> > The patch is a bit lengthy, and enables the extension to use ContentHandler,
> > which helps us use a lot of in-wiki features. I had pulled the change to the
> > labs wiki at http://newsletter-test.wmflabs.org/, which is broken as of
> > now, tracked in T138686[2].
> >
> > It would be great if you devs can take a look at both the labs instance
> > and the contenthandler change.
> >
> > [1] https://phabricator.wikimedia.org/T138462
> > [2] https://phabricator.wikimedia.org/T138686
> >
> > Thanks,
> > Tony Thomas
> > Home | Blog | ThinkFOSS
> >
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>



-- 
-Aaron
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] New DB_REPLICA constant; DB_SLAVE deprecated

2016-09-06 Thread Aaron Schulz
As of 950cf6016c, the mediawiki/core repo was updated to use DB_REPLICA
instead of DB_SLAVE, with the old constant left as an alias. This is part
of a string of commits that cleaned up the mixed use of "replica" and
"slave" by sticking to the former. Extensions have not been mass
converted. Please use the new constant in any new code.
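
In practice, the change is a one-word swap, e.g.:

// Old style (still works via the alias):
$dbr = wfGetDB( DB_SLAVE );
// New style:
$dbr = wfGetDB( DB_REPLICA );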

The word "replica" is a bit more indicative of a broader range of DB
setups*, is used by a range of large companies**, and is more neutral in
connotations.

Drupal and Django made similar updates (even replacing the word "master"):
* https://www.drupal.org/node/2275877
* https://github.com/django/django/pull/2692/files &
https://github.com/django/django/commit/beec05686ccc3bee8461f9a5a02c607a02352ae1

I don't plan on doing anything to DB_MASTER, since it seems fine by itself,
like "master copy", "master tape" or "master key". This is analogous to a
master RDBMS database. Even multi-master RDBMS systems tend to have
stronger consistency than classic RDBMS slave servers, and present
themselves as one logical "master" or "authoritative" copy. Even in its
personified form, a "master" database can readily be thought of as
analogous to "controller", "governor", "ruler", lead "officer", or such.***

* clusters using two-phase commit, Galera using certification-based
replication, multi-master circular replication, etc.
**
https://en.wikipedia.org/wiki/Master/slave_(technology)#Appropriateness_of_usage
***
http://www.merriam-webster.com/dictionary/master

-- 
-Aaron
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Failed jobs will never run again

2015-07-08 Thread Aaron Schulz
Have you tried setting something like:

$wgJobTypeConf['default']['claimTTL'] = 3600;

Jobs are not retried by default, only archived and deleted... maybe
that default should change.
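
For completeness, a sketch of a retrying setup; claimTTL is from above,
and maxTries is the related knob (shown with what I believe is its
default):

$wgJobTypeConf['default']['claimTTL'] = 3600; // reclaim abandoned jobs after an hour
$wgJobTypeConf['default']['maxTries'] = 3;    // attempts before a job is abandoned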



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Failed-jobs-will-never-run-again-tp5049689p5049916.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Recent growth of wikidatawiki.imagelinks table size

2014-12-13 Thread Aaron Schulz
Maybe pages using some of the properties at
https://commons.wikimedia.org/wiki/Commons:Wikidata have links that are
tracked in the parser output from rendering Wikidata pages. If so, then
they'd go in the imagelinks table and globalimagelinks too. This could be
useful for checking usage before deleting/moving Commons files.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Recent-growth-of-wikidatawiki-imagelinks-table-size-tp5041113p5041162.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Efficient caching of large data sets for wikidata

2014-11-29 Thread Aaron Schulz
A few things to note:

* APC is not LRU, it just detects expired items on get() and clears
everything when full (https://groups.drupal.org/node/397938)
* APC has a low max keys config on production, so using key-per-item would
require that to change
* Implementing LRU groups for BagOStuff would require heavy CAS use and
would definitely be bad over the wire (and not great locally either)

Just how high is the label traffic/queries? Do we profile this?

If it is super high, I'd suggest the following as a possibility:
a) Install a tiny redis instance on each app server.
b) Have a sorted set in redis containing (label key => score) and individual
redis keys for label strings (with label keys). Label keys would be like
P33-en. The sorted set and string values would use a common key prefix in
redis. The sorted-set key would mention the max size.
c) Cache get() method would use the normal redis GET method. Once every 10
times it could send a Lua command to bump the label key's score in the
sorted-set (ZSCORE) to that of the highest score +1 (find via ZRANGE key -1
-1 WITHSCORES).
d) Cache set() method would be a no-op except once every 10 times. When it
does anything, it would send a Lua command to remove the lowest scored key
if there is no room (ZREMRANGEBYRANK key 0 1) and in any case add the label
key with a score equal to the highest score + 1. It would also add the value
in the separate key for that value with a TTL (likewise deleting it on
eviction). The sorted-set TTL would be set to max(current TTL, new value
TTL).
e) Cache misses would fetch from the DB rather than text store

If high traffic causes flooding, the 10 number can be tweaked (or
eliminated) or the highest rank + 1 logic could be tweaked to insert new
labels with a score that's better than only 3/8 of the stuff rather than all
of it (borrowing from MySQL). The above method just uses O(log N) redis
operations.
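
For concreteness, a rough phpredis sketch of the get/set steps above,
without the Lua (EVAL) atomicity a production version would want; the key
names, $prefix, and the 1-in-10 sampling are illustrative:

function labelCacheGet( Redis $redis, $prefix, $labelKey ) {
    $value = $redis->get( "$prefix:val:$labelKey" );
    if ( $value !== false && mt_rand( 1, 10 ) == 1 ) {
        // Bump this label to the top rank (highest score + 1)
        $top = $redis->zRange( "$prefix:ranks", -1, -1, true );
        $score = $top ? ( current( $top ) + 1 ) : 1;
        $redis->zAdd( "$prefix:ranks", $score, $labelKey );
    }
    return $value;
}

function labelCacheSet( Redis $redis, $prefix, $labelKey, $value, $ttl, $maxSize ) {
    if ( mt_rand( 1, 10 ) != 1 ) {
        return; // no-op most of the time to limit write flooding
    }
    if ( $redis->zCard( "$prefix:ranks" ) >= $maxSize ) {
        // Evict the lowest-scored label and its value key
        $evicted = $redis->zRange( "$prefix:ranks", 0, 0 );
        $redis->zRemRangeByRank( "$prefix:ranks", 0, 0 );
        if ( $evicted ) {
            $redis->del( "$prefix:val:" . $evicted[0] );
        }
    }
    $top = $redis->zRange( "$prefix:ranks", -1, -1, true );
    $score = $top ? ( current( $top ) + 1 ) : 1;
    $redis->zAdd( "$prefix:ranks", $score, $labelKey );
    $redis->setex( "$prefix:val:$labelKey", $ttl, $value );
}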

Such a thing could probably be useful for at least a few more use cases I'd
bet.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Efficient-caching-of-large-data-sets-for-wikidata-tp5040022p5040050.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Making a plain MW core git clone not be installable

2014-06-12 Thread Aaron Schulz
It seems worth looking into PEAR mail in my opinion. There's something to be
said for a certain minimalism in libraries.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Making-a-plain-MW-core-git-clone-not-be-installable-tp5029976p5030123.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-06-06 Thread Aaron Schulz
I suppose that naming scheme is reasonable.

$contentsRevId sounds awkward, maybe $sourceRevId or $originRevId is better.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Unclear-Meaning-of-baseRevId-in-WikiPage-doEditContent-tp5028661p5029674.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-05-30 Thread Aaron Schulz
FlaggedRevs uses the NewRevisionFromEditComplete hook. Grepping for that, I
see reasonable values set in the callers at a quick glance. This covers
various null edit scenarios too. The $baseRevId in WikiPage is just one of
the cases of that value passed to the hook, and is fine there (being mostly
false). "false" indeed means "not determined", and that behavior is needed
for the hook values. The values given in that hook variable make sense and
are more or less consistent.

As I said before, if the NewRevisionFromEditComplete hook is given the same
base revision ID values for all cases, then I don't care too much what
happens to the $baseRevId value semantics in doEditContent(). As long as
everything is changed to keep that part consistent, it won't affect
anything. However, just naively changing the $baseRevId values for the
non-false cases will break the extension using it.

As a side note, FlaggedRevs doesn't just end up using $oldid. It only uses
that as the last resort after picking other values in the different scenarios
it detects.
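
For reference, a sketch of consuming that hook (signature as of ~1.23;
the handler body is made up):

$wgHooks['NewRevisionFromEditComplete'][] = function ( $article, $rev, $baseId, $user ) {
    // $baseId is the base revision ID discussed above; false means
    // "not determined" for that edit.
    if ( $baseId !== false ) {
        // e.g. decide whether the new revision can be auto-reviewed
    }
    return true;
};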



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Unclear-Meaning-of-baseRevId-in-WikiPage-doEditContent-tp5028661p5029028.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-05-29 Thread Aaron Schulz
Yes it was for auto-reviewing new revisions. New revisions are seen as a
combination of (base revision, changes). If the base revision was reviewed
and the user is trusted, then so is the new revision. MW core had the
obvious cases of rollback and null edits, which are (base revision, no
changes). There is a lot more base revision detection in FlaggedRevs for
the remaining cases, some less obvious (user-supplied baseRevId, X-top edit
undo, fall back to prior edit).

If baseRevId is always set to the revision the user started from, it would
cause problems for that extension in the cases where it was previously
false.

It would indeed be useful to have a casRevId value that was the current
revision at the time of editing, just for CAS-style conflict detection.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Unclear-Meaning-of-baseRevId-in-WikiPage-doEditContent-tp5028661p5028902.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Config class and 1.23

2014-04-18 Thread Aaron Schulz
I'd suggest a revert from the branch, yes.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Config-class-and-1-23-tp5026223p5026236.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Keeping Code-Review votes across trivial changes of patch sets?

2014-04-11 Thread Aaron Schulz
What if someone -1's due to something in the summary? It's odd that fixing it
with a new commit would still show -1 on the reviewer's dashboard. I'm fine
with it for automatic rebases though.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Keeping-Code-Review-votes-across-trivial-changes-of-patch-sets-tp5025768p5025774.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Watchlist and RC refactor, and why you will like it

2014-01-09 Thread Aaron Schulz
I'd agree with the general statement on inheritance (which can have weird
coupling and diamonds of doom) and hooks (which can lead to hard-to-specify
behavior and tangle). I haven't seen the main problem with SpecialPage
articulated, though.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Watchlist-and-RC-refactor-and-why-you-will-like-it-tp5019664p5019751.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Is WikiPage->doEdit dangerous in a parser tag callback?

2013-10-11 Thread Aaron Schulz
The doEdit() call needs to parse and reuses $wgParser, which is already in
use, so it probably breaks its state. Maybe you could use a
DeferredUpdate to actually do the edits, or do them via an api.php request,
or stash $wgParser, replace it with a new one before doing the edit, and then
swap it back.

In any case doing edits on tag parse could be kind of slow (e.g. someone
does a page preview with hundreds of tags in it). One might want to limit
that somehow.
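
For the deferred route, a sketch (assuming a MediaWiki new enough to have
DeferredUpdates::addCallableUpdate; $title and $text stand for whatever
the tag callback computed):

DeferredUpdates::addCallableUpdate( function () use ( $title, $text ) {
    // Runs near the end of the request, after the main parse is done
    $page = WikiPage::factory( $title );
    $page->doEdit( $text, 'Edit triggered by tag' );
} );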



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Is-WikiPage-doEdit-dangerous-in-a-parser-tag-callback-tp5014848p5014851.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Method of Testing DB Queries

2013-10-09 Thread Aaron Schulz
They often give the same results on smallish wikis, but I wouldn't carry that
over to test wikis unless lots of content and users, logging, and other table
data were somehow imported in. For example, a tiny user table might make MySQL
start INNER JOINs with that table in queries where it would never do that in
production. In my experience, development test wikis are often useless for
estimating what query plan will happen in production.

A smallish wiki with tens of thousands of pages, the full history, and the
table data (not just revision/page/*links stuff from dumps) would probably be
useful. I'm not sure where the threshold roughly starts, though.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Method-of-Testing-DB-Queries-tp5014676p5014679.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-03 Thread Aaron Schulz
I'm not dead set against it, but there are some problems I see with it:

a) It's not well maintained nor documented, as people don't really consider
it when changing anything. A concerned volunteer could probably manage this.
For example, dealing with HTTPS could be documented better and the code
could have some failsafe logic around it (just like with $wgShowIPinHeader).
b) It requires additional code paths and complexity everywhere there is
already CDN (squid/varnish) purge logic. Bugs fixed in one may not carry
over to the other, which makes it more vulnerable to bit rot.
c) Files are not LRU and don't even have an expiry mechanism. One could make
a script and put it on a cron I guess (I rediscovered the existence of
PruneFileCache.php, which I forgot I wrote). If one could do that, they
probably also have the rights to install varnish/squid. Hacking around the
lack of LRU requires MediaWiki to try to bound the worst case number of
cache entries; page cache is only the current version and the resource
loader cache uses a bunch of hit count and IP range uniqueness checks to
determine if a load.php cluster of modules is worth caching the response for
(you don't want to cache any combination of modules that happens to hit the
server, only often-hit ones from different sources).
d) It can only use filesystems and not object stores or anything else. This
means you need to either only have one server, or use NFS, or, if you want to
be exotic, use FUSE with some DOS, or use cephfs/gluster (though if you can
do all that, you may as well use varnish/squid). I'd imagine people would
just use NFS, which may do fine for lots of small-to-moderate-traffic
installs. Still, I'd rather someone set up a CDN than install NFS
(either one takes a little work). People would use a CDN if it was made easier
to do, I'd bet.
e) I'd rather invest time in documentation, packaging, and
core changes to make a CDN as easy to set up as possible (for people with VMs
or their own physical boxes). Bugs found by third parties and WMF could be
fixed and both sides could benefit from it since common code paths would be
used. Encouraging squid/varnish usage fits nicely with the idea of
encouraging other open source projects and libraries. Also, using tools
heavily designed and optimized for certain usage is better than everyone
inventing their own little hacky versions that do the same thing (e.g. file
cache instead of a proper CDN).
f) Time spent keeping up hacks to do the work of CDNs to make MediaWiki
faster could be spent on actually making origin requests to MediaWiki faster
and making responses more cache friendly (e.g. ESI and such). For example,
if good ESI support was added, would file cache just lag behind and not be
able to do something similar? One *could* do an analogous thing with file
cache reconstructing pages from file fragments...but that would seem like a
waste of time and code if we can just make it easy to use a CDN.

In any case, I would not want to see file cache removed until CDN support
is evaluated, documented, and cleaned up, so people have an easy
alternative in its place. For example, if a bunch of confusing VCLs are
needed to use varnish, then few will go through the effort.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/File-cache-HTTPS-question-tp5014197p5014448.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File cache + HTTPS question

2013-10-01 Thread Aaron Schulz
As the last person to maintain that code, I tend to agree with this.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/File-cache-HTTPS-question-tp5014197p5014229.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What are DeferredUpdates good for?

2013-09-18 Thread Aaron Schulz
Adding a method to do that to DeferredUpdates would be nice, assuming it would
batch the jobs by type when pushing them.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/What-are-DeferredUpdates-good-for-tp5013179p5013510.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What are DeferredUpdates good for?

2013-09-17 Thread Aaron Schulz
Until what? A timestamp? That would be more complex and prone to over- or
under-guessing the right delay (you don't know how long it will take to
commit). I think deferred updates are much simpler, as they will just happen
when the request is nearly done, however long that takes.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/What-are-DeferredUpdates-good-for-tp5013179p5013398.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What are DeferredUpdates good for?

2013-09-16 Thread Aaron Schulz
Speaking of the job queue, deferred updates are useful for adding jobs that
depend on data that was not yet committed. This can easily be an issue since
we normally wrap web requests in one DB transaction and commit at the very
end. If you push() some jobs before the commit, and they get run before
commit (which might randomly happen from time to time), and they depend on
some of those DB changes, then the jobs might break. Using deferred updates
works around this, as do the transaction callback methods in the Database
classes (if you know exactly which DBs the changes depend on).
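
A sketch of the latter ($jobs being a made-up array of Job objects):

$dbw = wfGetDB( DB_MASTER );
$dbw->onTransactionIdle( function () use ( $jobs ) {
    // Runs after the main transaction commits, so the jobs can
    // safely see the committed data.
    JobQueueGroup::singleton()->push( $jobs );
} );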



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/What-are-DeferredUpdates-good-for-tp5013179p5013294.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] PHP 5.4 (we wish)

2013-06-09 Thread Aaron Schulz
Closure changes and traits would indeed be really nice.

Short array syntax is a plus too.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/PHP-5-4-we-wish-tp5006788p5006809.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Architecture Guidelines: Writing Testable Code

2013-06-07 Thread Aaron Schulz
I generally agree with 2-8, and 10. I think points 2 and 10 are pretty
subjective and must be applied very pragmatically. 



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Architecture-Guidelines-Writing-Testable-Code-tp5006129p5006712.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New git-review lets you configure 'origin' as the gerrit remote

2013-06-06 Thread Aaron Schulz
I agree it would be nice if our repos (or git-review setup steps) had
sane defaults instead of ones almost everyone will want to change to not be
annoyed.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/New-git-review-lets-you-configure-origin-as-the-gerrit-remote-tp5006182p5006586.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Can we kill $wgPasswordSalt

2013-05-29 Thread Aaron Schulz
Sounds fine by me.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Can-we-kill-wgPasswordSalt-tp5005998p5006001.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Information on MW and Redis?

2013-05-24 Thread Aaron Schulz
To use redis as a cache you can have something like:

// requires phpredis extension for PHP
$wgObjectCaches['pecl-redis'] = array(
'class'   => 'RedisBagOStuff',
'servers' => array( '127.0.0.1:6379' ),
);
$wgMainCacheType = 'pecl-redis';

This would also require that the redis server have allkeys-lru as its
eviction policy in redis.conf.

To use redis for a jobqueue, one can have something like:

// requires phpredis extension for PHP
$wgJobTypeConf['default'] = array(
'class'       => 'JobQueueRedis',
'redisServer' => '127.0.0.1:6379',
'redisConfig' => array(),
'claimTTL'    => 3600
);

This works best if the redis server uses RDB snapshots and/or
append-only-file logging in redis.conf, so that jobs are not lost during
power outages or restarts.



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Information-on-MW-and-Redis-tp5005659p5005660.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Information on MW and Redis?

2013-05-24 Thread Aaron Schulz
2.2.2 of the extension works for me. I downloaded the source and
compiled it.

The redis server itself will need to be 2.6 or higher for the job queue.

Looking around, I forgot to mention that JobQueueRedis was actually removed
from 1.21 (though it's in master and will be in 1.22).



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Information-on-MW-and-Redis-tp5005659p5005664.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Information on MW and Redis?

2013-05-24 Thread Aaron Schulz
Note that if you already use memcached for the main cache, there isn't really
any reason to switch to redis unless you need replication or persistence.

Anyway, to use it for sessions, if you had $wgSessionCacheType explicitly
set to something, then you'd need to change that too (like to 'pecl-redis').
In any case, it doesn't hurt to be explicit. This all assumes that
$wgSessionsInObjectCache = true as well.
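
That is, building on the 'pecl-redis' cache defined earlier in this thread:

$wgSessionCacheType = 'pecl-redis';
$wgSessionsInObjectCache = true;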



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Information-on-MW-and-Redis-tp5005659p5005665.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Information on MW and Redis?

2013-05-24 Thread Aaron Schulz
Indeed, the queue cannot use memcached. Redis will trivialize the time spent
on actual queue operations, which could help if that is a bottleneck for
job runners. If the actual jobs themselves are slow, of course it won't help
too much.

Have you already tried setting the job run rate to 0 and using a background
script instead?
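
That is, something like this (the cron line and path are illustrative):

$wgJobRunRate = 0; // don't run jobs during web requests
// ...and run them from cron instead, e.g.:
// * * * * * php /path/to/mediawiki/maintenance/runJobs.php --maxjobs 100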



--
View this message in context: 
http://wikimedia.7.x6.nabble.com/Information-on-MW-and-Redis-tp5005659p5005693.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Flagged revs and Lua modules

2013-03-21 Thread Aaron Schulz
Sounds like a site config issue. All wikis that have NS_TEMPLATE in
$wgFlaggedRevsNamespaces should also have NS_MODULE in there.
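
That is, something like this in the site config (assuming Scribunto's
NS_MODULE, which is 828):

$wgFlaggedRevsNamespaces[] = 828; // NS_MODULE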



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Flagged-revs-and-Lua-modules-tp4999685p497.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaHandler Stream Headers

2013-03-04 Thread Aaron Schulz
Sounds like https://gerrit.wikimedia.org/r/#/c/41932/



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/MediaHandler-Stream-Headers-tp4998162p4998308.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] NullLockManager and the math extension

2013-02-08 Thread Aaron Schulz
Yes a654a6e79adc8f4730bb69f79e0b6a960d7d3cbe should be fixed. It should add
the nullLockManager back.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/NullLockManager-and-the-math-extension-tp4995536p4995767.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] NullLockManager and the math extension

2013-02-06 Thread Aaron Schulz
nullLockManager is defined in Setup.php. The code:
LockManagerGroup::singleton()->get( 'nullLockManager' );
... works fine in eval.php and is used in production.
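
Roughly, Setup.php registers it like this (check your version for the
exact form):

$wgLockManagers[] = array(
    'name'  => 'nullLockManager',
    'class' => 'NullLockManager',
);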



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/NullLockManager-and-the-math-extension-tp4995536p4995539.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Parsoid roadmap

2013-01-29 Thread Aaron Schulz
+1

I think everything into Q3 looks like a good way to proceed forward. There
might be an interesting division of labor on getting these things done
(Parsoid job handling, Cite extension rewrite, API batching). I'd be willing
to help in areas I'd be useful in. I think this is ambitious, but the steps
laid out look manageable by themselves. We will see how the target dates
collide with reality, which may also depend on the level of interest.

I'd really like to see a reduction of CPU spent on refreshLinks jobs, so
anything to help in that area is welcome. We currently rely on throwing more
processes and hardware at the problem and using de-duplication to at least
stop jobs from piling up (such as when heavily used templates keep getting
edited before the previous jobs finish). De-duplication has it's own costs,
and will make sense to move the queue of the main clusters. Managing these
jobs is getting more difficult. In fact, it's the editing of a few templates
that can account for a majority of the queue, where tens of thousands of
entire pages are parsed because of some modest template change. I like the
idea of storing dependency information in (or alongside) the HTML as
metadata and using it to recompute only affected parts of the DOM. 

There is certainly discussion to be had about the cleanest way to handle the
trade-offs of when to store updated HTML for a revision (when a
template/file changes or a magic word or DPL list should be re-calculated).
It probably will not make sense for old revisions of pages. If we are
storing new versions of HTML, it may make sense to purge the old ones from
external storage if updates are frequent, though that interface has no
deletion support and that is slightly against the philosophy of the external
storage classes. It's probably not a big deal to change it though. I've also
been told that the HTML tends to compress well, so we should not be looking
at an order-of-magnitude text storage requirement increase (though maybe 4X
or so from some quick tests). I'd like to see some documented statistics on
this though, with samples.

I think the Visual Editor + HTML only method for third parties is
interesting and could probably make use of ContentHandler well. I'm curious
about the exact nature of HTML validation needed server-side for this setup,
but from what I understand it would not be too complicated and the metadata
could be handled in a way that does not require blind trust of the client.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/RFC-Parsoid-roadmap-tp4994503p4994870.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Limiting storage/generation of thumbnails without loss of functionality

2013-01-23 Thread Aaron Schulz
I'd strongly suggest considering this kind of approach.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Limiting-storage-generation-of-thumbnails-without-loss-of-functionality-tp4994447p4994493.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] some issues with missing Files

2013-01-16 Thread Aaron Schulz
Do you want the files or not?

The first post sounds like you don't; in that case you'd need to truncate
the image/oldimage/filearchive tables. This will remove all registration of
the files. Clearing memcached (or whatever cache you use) might be needed too.

You can copy the files over with copyFileBackend.php from the old backend to
the new one. The "src" backend would be the default upload backend name
(just dump $wgFileBackends in eval.php to find it) unless you configured it
otherwise, and the "dst" backend would have to be added to $wgFileBackends
and point to the new server somehow (such as via NFS or removable media).



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/some-issues-with-missing-Files-tp4992140p4993969.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
Some notes (copied from private email):
* It only creates the lock file the first time.
* The functions with different bits are not just the same thing with more
bits. Trying to abstract more just made it more confusing.
* The point is to also have something with better properties than uniqid().
Also, I ran large for loops calling those functions and timed it on my laptop
back when I was working on that, and found it reasonable (if you needed to
insert faster, you'd probably have DB overload anyway).
* hostid seems pretty common and is on the random wmf servers I tested a
while back. If there is some optimization there for third parties that don't
have it, of course it would be welcomed.

At any rate, I changed the revert summary though Timo beat me to actually
merging the revert. My main issue is the authorship breakage and the fact
that the split of the change wasn't +2'd by a different person. I was also
later asked to add tests (36816), which ideally should have been
required in the first patch rather than as a second one; not a big deal, but
it's a plus to consolidating the changes after a revert.

That said, the change was actually a class split off verbatim from
https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for ages), so
it's not like the change was in gerrit for a split-second and then merged. I
think the process should have been better here, though it's not as huge a
deal as it may seem at first glance.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
I share some blame for the existence of this thread. I spotted the git author
issue after that commit was merged and was too lazy to revert and fix it. I
personally tend to dislike reverting stuff way more than I should (like a
prior schema change that was merged without the site being updated). I
should have just reverted that immediately and left a new patch waiting for
+2.

Patches by person A that just split out a class or function made by person B
should still be looked at by someone other than person A. I think it's a
border case, but leaning on the side of caution is the best bet. It sucks to
have the code break due to something that was accidentally not copied. I
don't think it's worth reverting something like that just for being
self-merged (which is why I didn't), but it's good practice to avoid. If a
string of basic follow-ups are needed and people complain, it might be worth
reverting though (like what happened here). We can always add patches back
into master after giving it a second look, so reverting isn't always a huge
deal and need not be stigmatizing. I need to get used to the revert button
more; lesson learned. :)



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990923.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
RDBStore is shelved as a reference for now. The idea was to partition an SQL
table across multiple DB servers using a consistent hash of some column.
There would no longer be the convenience of autoincrement columns, so UIDs
are a way to make unique IDs without a central table or counter.

In some cases, like when the primary key is the UID column, duplicate
detection can be enforced by the DB, since duplicate values would map to the
same partition table and that table would have a unique index, causing a
duplicate key error. This could allow for slightly smaller UIDs to be used
with the comfort of knowing that, in the unlikely event of a rare collision,
it will be detected. This is why it had several UID functions. It might be
nice to add standard UUID1 and UUID4 functions, though they were not useful
for RDBStore for B-TREE reasons.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990931.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Refactor of mediawiki/extensions/ArticleFeedbackv5 backend

2012-12-05 Thread Aaron Schulz
I'm seconding that recommendation to be clear. More specifically, I'd suggest
that the AFT classes have two new protected methods:
* getSlaveDB() - wrapper for wfGetLBFactory()->getExternalLB(
$wgArticleFeedBackCluster )->getConnection( DB_SLAVE, array(), $wikiId )
* getMasterDB() - wrapper for wfGetLBFactory()->getExternalLB(
$wgArticleFeedBackCluster )->getConnection( DB_MASTER, array(), $wikiId )
The wrappers could also handle the case where the cluster is the usual wiki
cluster as well (e.g. good old wfGetDB()).
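
A sketch of those wrappers under the proposed configuration (falling back
to wfGetDB() when no cluster is configured):

protected function getSlaveDB() {
    global $wgArticleFeedBackCluster;
    return $wgArticleFeedBackCluster
        ? wfGetLBFactory()->getExternalLB( $wgArticleFeedBackCluster )
            ->getConnection( DB_SLAVE, array(), wfWikiID() )
        : wfGetDB( DB_SLAVE );
}

protected function getMasterDB() {
    global $wgArticleFeedBackCluster;
    return $wgArticleFeedBackCluster
        ? wfGetLBFactory()->getExternalLB( $wgArticleFeedBackCluster )
            ->getConnection( DB_MASTER, array(), wfWikiID() )
        : wfGetDB( DB_MASTER );
}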

You could then swap out the current wfGetDB() calls with these methods. It
might be easiest to start with the current AFT, do this, and fix up the
excessive write queries, rather than try to convert the AFT5 code
that used sharding. The name of the cluster would be an AFT configuration
variable (e.g. $wgArticleFeedBackCluster = 'external-aft').

This works by adding the new 'external-aft' cluster to the 'externalLoads'
portion of the load balancer configuration. It may make sense to give the
cluster a non-AFT-specific name though (like 'external-1'), since I assume
other extensions would use it. Maybe the clusters could be named after
philosophers to be more interesting...

One could instead use wfGetDB( index, array(), 'external-aft' ), though
this would be a bit of a hack since:
a) A wiki ID would be used as an external cluster name where there is no
wiki
b) The actual wiki IDs would have to go into table names or a column



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Refactor-of-mediawiki-extensions-ArticleFeedbackv5-backend-tp4990937p4990952.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Commits that make schema changes

2012-11-22 Thread Aaron Schulz
In order to make commits that change the schema as discoverable as possible,
I'd propose the following guidelines:
* Update RELEASE-NOTES to briefly mention the change (this is not always
necessary for follow up changes within the same release, since people
upgrading don't care about intra-release changes).
* Include [Schema] near the beginning of the first line of the commit.
* Mention the names of the tables that the schema was changed for as a
bullet point in the commit summary.

Also, changes should always allow for a temporary rollback strategy when
people upgrade from one major MediaWiki version to the next. The schema for
version X+1 should work with the version X code.
Mostly, this just involves doing additions in version X+1 and removals in
version X+2 or higher. More specifically:
* New tables are fine
* New columns are fine *as long as* they have default values (also, one
should check for code that does "SELECT *" and loops over all fields,
which should be avoided to begin with)
* New indexes are fine (note that adding a new column and a unique index on
it can cause problems even with DEFAULT NULL for some non-MySQL DBMSes)
* Table removal is fine if it already wasn't used in the previous MediaWiki
version and any data worth migrating is already first migrated in update.php
with a logged update
* Column removal is fine if it already wasn't used in the previous MediaWiki
version and any data worth migrating is already first migrated in update.php
with a logged update
* Index removal is fine if it already wasn't used in the previous MediaWiki
version (one could also remove index A and add index B which also handles
the queries that used A *provided* there are no FORCE INDEX A statements
used by MediaWiki)
* Column changes that just fix prior problems or just expand the range of
values are fine (e.g. int => bigint, NOT NULL => NULL, varchar => blob), as
long as they work with the previous MediaWiki version
* Index changes are fine as long as they work with the previous
MediaWiki version (e.g. making an index not used for sorting go from
(field_sha1) to (field_sha1(8)), or changing an index from (field_a)
to (field_a, field_b))

The reason I say "temporary rollback" is that during such a rollback it
might be OK for certain problems to exist. For example, in the past the
data stuffed in the page.page_restrictions field was moved to a new
page_restrictions table. Of course, if someone upgraded for a while and then
rolled back, newly protected pages would magically become unprotected (until
the upgrade was re-attempted or an admin re-protected the pages). The degree
to which these problems are acceptable depends on:
a) The likelihood of a rollback being needed (larger with more complex
changes)
b) Whether a rollback would be likely to only last for a brief time for a
hotfix or would be likely to last a long time (more so with more complex
changes)
c) How annoying the problem would be

I'd say that those things should be measured on a case-by-case basis. In
some cases, version X+1 should write to both the old and new style locations
if the risk is too high.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Commits-that-make-schema-changes-tp4990111.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Flagged Reviews default by quality levels

2012-09-25 Thread Aaron Schulz
So you have 2+ quality levels and sometimes want "quality" versions to be the
default over "checked" ones? I guess the closest thing to that would be to
restrict who can review/autoreview certain pages via Special:Stabilization.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Flagged-Reviews-default-by-quality-levels-tp4983993p4986082.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Can we kill DBO_TRX? It seems evil!

2012-09-25 Thread Aaron Schulz
I agree that begin()/commit() should do what they say (which they do now).
I'd like to have another construct that behaves like how those two used to
(back when there were immediate* functions). Callers would then have code
like:
$db->enterTransaction()
... atomic stuff ...
$db->exitTransaction()
This would use counters for nested begins (or perhaps SAVEPOINTs to deal
with rollback better...though that can cause RTT spam easily). If using
counters, it could be like begin()/finish() in
https://gerrit.wikimedia.org/r/#/c/16696/. The main advantage of doing this
would be that in cli mode (which defaults to using autocommit), all the code
will still start transactions when needed. It would be nice to have the
consistency/robustness. 

In any case, echoing what Tim said, most code that has begin()/commit() does
so for performance reasons. In some cases, they can be changed to use
DeferredUpdates or $db->onTransactionIdle(). I had a few patches in gerrit
to this effect. Some things may not actually need begin/commit explicitly (I
got rid of this in some preferences code ages ago). Things like
WikiPage/LocalFile are examples of classes that would have a hard time not
using begin()/commit() as they do. Perhaps some code could be restructured
in some cases so that the calls at least match, meaning the splitting of
transactions would at least be more deliberate rather than accidental.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Can-we-kill-DBO-TRX-It-seems-evil-tp4986002p4986083.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Flagged Reviews default by quality levels

2012-08-28 Thread Aaron Schulz
> When a page reaches X level of quality, that version becomes the default.

When creating all the interface messages for editing, viewing, and history,
this is definitely not easy to get right and keep simple for new users.

Anyway, to be clear, you can make the latest version the default for all
pages and manually make the latest reviewed version the default on a
per-page basis already. You just can't use quality versions as the default
version.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Flagged-Reviews-default-by-quality-levels-tp4983993p4984115.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Nested database transactions

2012-08-27 Thread Aaron Schulz
I'd have to see what you are doing to see if rollback is really needed.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Nested-database-transactions-tp4983700p4984075.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Flagged Reviews default by quality levels

2012-08-27 Thread Aaron Schulz
That text should be removed from the help page. Only the current or the
latest reviewed version can be the default. You cannot have pages use the
latest quality version as the default version. This would create a very
confusing interface that takes a mouthful to explain.

Also, it's hard enough to keep checked versions up to date, even harder for
quality ones. You don't want to end up with people having their edits
take weeks (sometimes months) to show to readers because they haven't been
highly proofed yet.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Flagged-Reviews-default-by-quality-levels-tp4983993p4984076.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Using ORM patterns in core

2012-08-27 Thread Aaron Schulz
I was just looking through those classes again.

I think ORMRow is generally OK, since it's mostly a simple CRUD wrapper to
deal with some of the busy-work of making data access objects. I don't
really get the "summary" (updateSummaries/inSummaryMode) stuff, though. I
guess the callers/subclasses do most of the summary work there or something.

I'm not really fond of ORMTable/ORMResult. A lot of functions are just
wrappers around DB calls that don't really abstract much. Also, singleton()
has one table instance per table, making foreign wiki access trickier than
with the regular LBFactory/DatabaseBase classes. This kind of stuff makes me
hesitant to use the classes (since ORMRow depends on the table class). I
guess what I'd really like out of those table classes is the support for
base API and Pager classes and the minimum needed for ORMRow (fields/types),
with foreign wiki support. I like the idea of getAPIParams() and an API base
class for making quick API classes.

The idea of some base classes for CRUD and API/Pager table listings is fine.
It can obviously avoid inconsistency among the DAOs. If these classes are
called ORM*, I guess that's OK too, as long as they don't scope-creep into
a complex system that is coupled to everything and hard to change.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Using-ORM-patterns-in-core-tp4984036p4984074.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Nested database transactions

2012-08-23 Thread Aaron Schulz
The counter idea kind of reminds of what I have in
https://gerrit.wikimedia.org/r/#/c/16696/ .

I think the whole implicit commit issue is definitely pretty annoying, and I
wish there was a reasonable way to address it without breaking backwards
compatibility. rollback() is the hard case to deal with (I ended up not even
having it in that gerrit patch).

In general callers should avoid using rollback() for detecting problems or
race conditions. They should be checked up front. I put some comments about
this in the tiny IDBAccessObject interface a while ago. This avoids the
"what if someone rolls back" complexity. It also avoids MySQL undo segment
usage (though rollback is faster in PG).

SAVEPOINTs are useful if we really need to support people rolling back
transactions *and* we need nested transaction support. I think they could be
made to work, but I'm not sold on their necessity for any use cases we have.




--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Nested-database-transactions-tp4983700p4983732.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MathJax scalable math rendering update for 1.19

2012-03-07 Thread Aaron Schulz
It would be uber-sweet someday to kill the OCaml dependency.

--
View this message in context: 
http://wikimedia.7.n6.nabble.com/MathJax-scalable-math-rendering-update-for-1-19-tp4556544p4557134.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Some questions about #switch

2012-02-01 Thread Aaron Schulz
By "99%", I meant that 99% of users don't care (much or at all) about the
things these crazy templates tend to offer; they just want to read the
article. I remember Domas complaining about this last hackathon
when...fixing...ocwiki.

Any article that uses slow templates that take forever to render is hard to
edit (since there is always a fresh parse). Making such pages hard to edit
or slow to view for users who are logged in and might have a few custom
preferences or otherwise have a generic cache miss is pretty disappointing.
It also is discouraging to new editors trying to change the page. Maybe it's
a result of the "don't care about performance" policy taken to extremes.

I've been pushing for Lua for months (rather than delay on JS vs Lua), and
I'm glad it's gotten steam again. Hopefully it will make these problems
moot. I'm also glad to see the increased focus on new editors in general
(from the features team).



Re: [Wikitech-l] Some questions about #switch

2012-01-31 Thread Aaron Schulz
+1. SERIOUSLY. This always comes to mind when these issues come up. Editors
and readers (on cache miss) shouldn't have to suffer through this. We
shouldn't forget the 99% :)



Re: [Wikitech-l] should we keep $wgDeprecationWhitelist

2012-01-11 Thread Aaron Schulz
My inclination is to get rid of the feature for pretty much the reason you
mentioned.

In any case, is the point to avoid notices getting added to HTML for wikis
with certain extensions? I don't know why a production site would be set to
spew notices. Either error log settings or $wgDevelopmentWarnings can handle
this. If it's to avoid them in the log, again, $wgDevelopmentWarnings works.

IMO, the notices are most useful for core & extension developers testing
their code (who deliberately let all warnings get spewed out). If the dev
has time to work on other extensions, and has the affected one enabled on the
same test wikis that have other extensions being worked on, *then* it might
be useful to hide certain warnings. However, it seems better to just delay
the deprecation in core by a cycle. The use case for the new global just seems
too marginal, and it seems pretty awkward and hacky.
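
For reference, the knobs in question look something like this (a sketch of
typical values, assuming the $wgDevelopmentWarnings behavior of the time):

// Production LocalSettings.php: keep notices out of the HTML and the logs.
$wgDevelopmentWarnings = false; // the default; deprecation notices stay quiet
error_reporting( E_ALL & ~E_NOTICE & ~E_DEPRECATED );

// Developer test wiki: deliberately let all warnings get spewed out.
$wgDevelopmentWarnings = true;
error_reporting( E_ALL );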



Re: [Wikitech-l] should we keep $wgDeprecationWhitelist

2012-01-11 Thread Aaron Schulz
"So as I already mentioned, this would force developers from turning off the
whole thing."

When I mentioned $wgDevelopmentWarnings, I was talking about production,
which is why I had different paragraphs. The last one was about
development.

Anyway, one can bump the version in a wfDeprecated() call to delay the
cycle (see the sketch after this list). AFAIK, the only problem in general
with delaying it is the following:
a) function deprecated for version X, used by extensions A & B
b) author of A checks and complains because he can't handle the change this
cycle
c) deprecation bumped to version X+1 in response
d) author B checks and sees no warnings (since it was bumped)
e) next release comes
f) author of B checks and sees warnings, and ALSO happens to be unable to
handle the change this cycle for whatever reason. It would have been nice if
he had known last cycle, so the time to fix it could have been fit in by now.
g) we either leave author B with notices or delay the deprecation version a
second time
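
To make the version bump concrete (a sketch, assuming the usual
wfDeprecated( __METHOD__, $version ) convention):

function doSomethingTheOldWay() {
	// was: wfDeprecated( __METHOD__, '1.19' );
	wfDeprecated( __METHOD__, '1.20' ); // bumped a cycle; warnings start one release later
	// ...old implementation kept around for back-compat...
}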

I still can't see a strong enough use case for the feature. It's not that it
doesn't exist, it just seems too marginal. As with Chad, I don't think it's
revert-war worthy.



Re: [Wikitech-l] Rolling towards 1.19... scheduling a code freeze

2012-01-06 Thread Aaron Schulz
I'll still be doing some work on FileBackend, like fixing Jenkins, cleaning up
the streamFile() function, copying the Swift backend to /trunk, and tying up
a few loose ends and doing fixes next week. I think the heavy stuff is pretty
much out of the way.





Re: [Wikitech-l] Title objects as value objects; split with WikiPage/Article family?

2012-01-05 Thread Aaron Schulz
I'd agree with reducing the state within Title and narrowing down its
purpose to title sanitization/validation and such.



Re: [Wikitech-l] Diff colors - a disaster waiting to happen

2011-12-22 Thread Aaron Schulz
I've stated in CR that I don't think light green has any real cultural
issue. It shouldn't be ruled out on those grounds. Red, on the other hand,
has stronger connotations. Part of it comes from the fact that GUIs very
often have certain color standards, e.g.:
Blue: notice, fyi, please read me, more info
Red: stop, ERROR!, not allowed, invalid, failed
Green: OK, good, approved, success

...think of all the GUI text and icons (red exclamations/hands, blue
question marks, green checks) you've seen and what the colors indicated. I
think medium greens and pretty much all reds (maybe you could get away with
faint pink) have hefty enough connotations that we should think twice about
using them for diffs.

So I wouldn't say the color-connotation issue is just BS, but I don't think
we should be overly cautious in trying to avoid colors that someone,
somewhere, might interpret in some way, somehow, as good/bad. I think light
green falls into the neutral enough bin. Just my two cents :)



Re: [Wikitech-l] FileBackend branch

2011-12-01 Thread Aaron Schulz

I forgot to mention a useful link, http://www.mediawiki.org/wiki/FileBackend.
Currently, just Tim and I have been using that page to record thoughts and
design decisions.


Aaron Schulz wrote:
 
 I'm starting to finish the initial coding for the FileBackend branch (see
 https://svn.wikimedia.org/viewvc/mediawiki/branches/FileBackend). I still
 have to get to thumb.php and img_auth.php though. Simple testing of
 uploads, re-uploads, moves, deletes, restores, and such are working on my
 local testwiki.
 
 At some point, I'll need to merge all this into /trunk of course. I'd
 appreciate any help in:
 * Downloading the code and playing around with it
 * Finding extensions that will need updating (or functionality that needs
 to be added to core for them)
 * Making suggestions and pointing out broken stuff (which I'm sure there
 is lots of)
 * Writing test cases (a LOT of these will be needed)
 * i18n improvements/docs for messages
 



Re: [Wikitech-l] FileBackend branch

2011-12-01 Thread Aaron Schulz

I didn't copy them over so that I could avoid having them fall out of sync. I
don't have a super IDE; I just had the base class open in the editor to use
as a reference for the function documentation. I'd prefer not to copy them
anywhere, though I suppose when things settle down there is less risk of bit
rot if they are (maintenance would still be needed).

Russell Nelson-3 wrote:
 
 On Wed, Nov 30, 2011 at 10:20 PM, Rob Lanphier ro...@wikimedia.org
 wrote:
 
 On Wed, Nov 30, 2011 at 5:51 PM, Aaron Schulz aschulz4...@gmail.com
 wrote:
  I'm starting to finish the initial coding for the FileBackend branch
 (see
  https://svn.wikimedia.org/viewvc/mediawiki/branches/FileBackend). I
 still
  have to get to thumb.php and img_auth.php though. Simple testing of
 uploads,
  re-uploads, moves, deletes, restores, and such are working on my local
  testwiki.

 
 Did you test it using smtest.py? It's a lot more persistent about testing
 because it's a lot less distractable than any hu... SQUIRREL!
 
 
 Hi folks,

 A few more details on this.  Aaron is trying to get some important
 refactoring work done in service to this project:
 http://www.mediawiki.org/wiki/SwiftMedia

 We'd like to land this code in trunk as soon as we can, shake out the
 inevitable bugs, and get this rolled out to the cluster as part of
 1.19.


 I'm guessing that Aaron is coding using an IDE that displays abstract
 class
 comments in line with the implementation, because the implementation class
 has no comments on any of the methods. For those of us using dumber
 editors, may/should I copy the comments over to the implementation class?
 I
 was planning to take FSFileBackend and copy it to SwiftBackend, and start
 changing the calls into Swift calls. So should I insert the comments into
 FSFileBackend before I do the copying, or after?



[Wikitech-l] FileBackend branch

2011-11-30 Thread Aaron Schulz

I'm starting to finish the initial coding for the FileBackend branch (see
https://svn.wikimedia.org/viewvc/mediawiki/branches/FileBackend). I still
have to get to thumb.php and img_auth.php though. Simple testing of uploads,
re-uploads, moves, deletes, restores, and such are working on my local
testwiki.

At some point, I'll need to merge all this into /trunk of course. I'd
appreciate any help in:
* Downloading the code and playing around with it
* Finding extensions that will need updating (or functionality that needs to
be added to core for them)
* Making suggestions and pointing out broken stuff (which I'm sure there is
lots of)
* Writing test cases (a LOT of these will be needed)
* i18n improvements/docs for messages


Re: [Wikitech-l] History of mergehistory

2011-11-01 Thread Aaron Schulz

It was intended to replace selective undeletion as part of a large deletion
schema overhaul, which fell through, so there wasn't as much motivation to
get it in use.  Also, I recall some enwiki admins saying that it still
needed more granularity for some hairy merge scenarios.


Brion Vibber wrote:
 
 On Tue, Nov 1, 2011 at 11:28 AM, Niklas Laxström
 niklas.laxst...@gmail.com wrote:
 
 Why is mergehistory right not enabled by default? I only found this
 commit, http://www.mediawiki.org/wiki/Special:Code/MediaWiki/27823, which
 says "disabled by default for now".

 
 IIRC Aaron created that special page, left disabled by default as an
 experimental feature, and I probably went eh that sounds scary and
 we
 never got back to making sure it was production-ready.
 
 Probably should get picked back up and polished off at some point...
 
 -- brion


Re: [Wikitech-l] Adding MD5 / SHA1 column to revision table

2011-09-20 Thread Aaron Schulz

Some use cases:
* Dump validation (per Ariel)
* Revert detection
* Collapsing reversions in history to hide clutter
* Replacing/augmenting baseRevId hacks in FlaggedRevs
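
For the revert-detection case, the lookup is cheap given an index covering
the page and hash (a sketch; the column and index names are hypothetical,
since the schema was still being discussed):

// Check whether the new text is byte-identical to some prior revision
// of the same page; if so, the edit can be flagged as a revert.
$dbr = wfGetDB( DB_SLAVE );
$priorRevId = $dbr->selectField(
	'revision',
	'rev_id',
	array( 'rev_page' => $pageId, 'rev_sha1' => $newTextSha1 ),
	__METHOD__
);
if ( $priorRevId !== false ) {
	// Treat as a revert (or do something more advanced on multiple matches).
}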


Domas Mituzas wrote:
 
 
 
 * When reverting, do a select count(*) where md5=? and then do something 
 more advanced when more than one match is found
 
 finally "we don't need an index on it" becomes "we need an index on it",
 and storage efficiency becomes much more interesting (binary packing yay
 ;-)
 
 so, what are the use cases and how does one index for them? is it global
 hash check, per page? etc
 
 Domas



Re: [Wikitech-l] Requests about log system

2011-09-09 Thread Aaron Schulz

I'd still prefer JSON for offline/non-PHP use. I'm not sure it's a huge deal
though.
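
The offline/non-PHP point is simply that JSON-encoded log_params can be
decoded from any language, while serialize() output is PHP-specific (a
sketch, with made-up parameter names):

$params = array( 'target' => 'Main Page', 'noredir' => true );

// PHP-only: nothing but PHP (or a reimplementation of its serialization
// format) can read this back, which hurts offline dump-analysis tools.
$blob = serialize( $params );

// Portable: any language with a JSON parser can consume the logging table.
$blob = json_encode( $params );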


Bugzilla from niklas.laxst...@gmail.com wrote:
 
 Big thank you for everyone who already looked and tested the code,
 especially to Aaron. I have fixed the few issues that have come up.
 
 Have we reached to an agreement to serialize the parameters instead of
 formatting them with JSON? I am going commit code that actually
 creates log entries using this new system, so I'd rather be sure we
 are comfortable with what we have chosen, to avoid unnecessary mix of
 different formats in the database.
 
   -Niklas
 
 On 8 September 2011 20:00, Niklas Laxström niklas.laxst...@gmail.com
 wrote:
 On 8 September 2011 17:57, Daniel Friesen li...@nadir-seen-fire.com
 wrote:
 On 11-09-08 04:25 AM, Niklas Laxström wrote:
 On 8 September 2011 13:36, Max Semenik maxsem.w...@gmail.com wrote:
 On Thu, Sep 8, 2011 at 2:18 PM, Aaron Schulz aschulz4...@gmail.com
 wrote:

 Yay for log_params. I was thinking JSON would be appropriate here, so
 I'm
 glat to see that.


 Even though data in those fields is small enough, can
 serialize()/unserialize() be used instead? It's faster and doesn't
 require
 the mess of ServicesJSON to work correctly.
 Do those cause actual problems or is it just matter of preference? In
 my opinion JSON is much better for anyone who wants to dig the logs
 without using PHP. Also, is (un)serialize guaranteed to be stable
 across PHP versions?

   -Niklas
 We already use serialize in HistoryBlob/Revision, the job queue,
 caching, file metadata, the localization cache, ...

 So if you add any new fields to the db you should really stick to
 (un)serialize.
 We're already using serialize everywhere and we even use binary storage
 which is troublesome for anyone trying to stare at the database with
 most phpmyadmin installs. People being minorly inconvenienced when
 reading the database raw is the last of our issues.
 If you want to argue the irrelevant minority that would be slightly
 inconvenienced reading the database raw I'll argue the irrelevant
 minority that would be slightly inconvenienced trying to do db queries
 to mw code externally and have to parse json which isn't as simple as
 (un)serialize.
 ;) I'll also wager that HipHop makes the gap in speed between
 (un)serialize and json farther.

 Very well, r96585.
 
 
 
 -- 
 Niklas Laxström
 


Re: [Wikitech-l] Requests about log system

2011-09-08 Thread Aaron Schulz

Yay for log_params. I was thinking JSON would be appropriate here, so I'm
glad to see that.

I'll toss these revs onto my review queue.


Bugzilla from niklas.laxst...@gmail.com wrote:
 
 I just commited many changes to logging code. There is more to come,
 but I think this is suitable place to write in more detail what is
 going on. I'd also like to request code review and testing :)
 
 Thus far I have committed new formatting code and small cleanups. Both
 LogEventsList and RecentChanges are using the formatters now.
 I haven't committed my last patch, which changes Title.php to generate
 log entries using my new code. That will also fix page histories and
 IRC feed, which use static version of log action text, which is
 generated together when the new log item is inserted into the
 database.
 
 There are two major parts in the new logging system: LogEntry and
 LogFormatter.
 LogEntry is a model around one log entry. It has multiple subclasses.
 For constructing new log entries, you will create a new ManualLogEntry
 and fill necessary info, after which you can call insert() and
 publish(). If you are loading entries from database, you can simply
 call DatabaseLogEntry::newFromRow( $row ). It supports rows both from
 logging and recentchanges table. Usually you want to go directly to
 LogFormatter and call newFromEntry or the handy newFromRow shortcut.
 LogFormatter provides getActionText() method, which formats the log
 action for you, taking those pesky LogPage::DELETED_FOO restrictions
 into account. The action text includes the username, to support
 different word orders. There is also getPlainActionText(), which
 formats the log entry so that it is suitable for page histories and
 IRC feeds.
 
 LogEntries can have parameters. Parameters should be an associative
 array. When saved to database, it is encoded to JSON. If you can pass
 parameters directly to the message which is used to format the action
 text, you can name the keys like #:foobar, where # is a number and
 should start from 4, because parameters 1, 2 and 3 are already
 reserved and should be common to all log entries. Those are user name
 link, username for gender and target page link respectively.
 
 If they key is not in #:foobar format, it is not automatically
 available for the action text message. By subclassing LogFormatter you
 can do whatever you want with the parameters. Be aware of
 $this->plaintext value though, it indicates whether we can use any
 markup or just plaintext. This is how the MoveLogFormatter is
 registered. I've added a type/* shortcut to avoid some repetition. If
 the value is an existing class, it will be used. Otherwise the old
 behavior of calling the function is used through LegacyLogFormatter.
 
 $wgLogActionsHandlers = array(
   // move, move_redir
   'move/*' => 'MoveLogFormatter',
 );
 
 So what does this all bring to us?
 * Flexible word order
 * The most complex piece of log formatting is done only once, and it
 also takes care of hiding any restricted items
 * Gender is supported
 * Ability to store parameters as an associative array
 * New message naming conventions to reduce boilerplate
 * Anonymous users can make log entries, that are actually shown
 * Global logs should be easier to implement now, but it is not
 directly supported by the current code.
 * Two simple methods: getActionText and getPlainActionText, instead of
 the mess of making log entries all over the place
 * All code for one log type is now in single place, instead of lots of
 switch $type in different places.
 
 So once more, please text, review and comment. I still have lots to
 do, all the log types need to be converted one by one to the new
 system, to take the full benefit of improved i18n. Easiest way to find
 the commits is probably this page:
 http://www.mediawiki.org/wiki/Special:Code/MediaWiki/author/nikerabbit
 
   -Niklas
 
 -- 
 Niklas Laxström
 


Re: [Wikitech-l] Requests about log system

2011-09-04 Thread Aaron Schulz

It would be nice to have standard functions that support storing associative
arrays in log_params rather than fragile ordered lists. I ended up hacking
up a quick function in FlaggedRevs for this. Newer log types could make use
of this, and existing ones could if they had some b/c code.
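
A sketch of the difference (with hypothetical parameter names):

// Fragile ordered list: meaning depends purely on position, so adding
// or removing a parameter silently shifts everything after it.
$params = array( $oldTitle, $newTitle, $reason );

// Associative array: self-describing and order-independent, and old
// rows can be read back with sensible defaults for missing keys.
$params = array(
	'oldtitle' => $oldTitle,
	'newtitle' => $newTitle,
	'reason'   => $reason,
);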


Bugzilla from niklas.laxst...@gmail.com wrote:
 
 Hello, I'm currently partially rewriting the log system, because the
 current one doesn't support i18n well enough.
 
 I'm trying to avoid any radical changes like changes to the database
 schema. My changes mostly touch
 handling log entries and formatting them.
 
 So, if you know any defects in the current log system, or have an wish
 what the new should do, or know someplace where these kind of wishes
 exist, please tell me.
 I have scanned the list of bugs in bugzilla quickly, but it is a bit
 hard to find relevant bugs when there is no logging component.
 
 I'm aiming to solve at least these bugs:
 https://bugzilla.wikimedia.org/30737 User names should be moved into
 log messages.
 https://bugzilla.wikimedia.org/24156 Messages of log entries should
 support GENDER
 https://bugzilla.wikimedia.org/24620 Log entries are difficult to
 localize; rewrite logs system
 https://bugzilla.wikimedia.org/21716 Log entries assume sentence starts
 with username
 
   -Niklas
 -- 
 Niklas Laxström
 


Re: [Wikitech-l] Aaron Schulz now full-time at Wikimedia Foundation

2011-08-31 Thread Aaron Schulz

Thanks. Very much appreciated.


Platonides wrote:
 
 Those are good news (tm) both for Aaron, WMF and MediaWiki.
 Congratulations, Aaron
 
 



Re: [Wikitech-l] On refactorings and huge changes

2011-07-26 Thread Aaron Schulz

Also, with some coordination, branches could be merged at a time when:
* Reviewers will be looking at it and testing it *right* after the merge (in
addition to any branch review)
* The author is around to make fixes as it gets final review

People should try to be available after any large changes (from a branch or
from their local code). However, it may be the case that no one has the time
to specifically focus on reviewing a certain change set right after commit.
Maybe, for example, changes will only get seriously looked at a month after
the commit. If the author was available in the weeks after the commit, but
not when it gets serious review, then we still have a problem. This is also
where some advance notification and coordination could help (especially
looking at the examples Chad gave).


^demon wrote:
 
 All,
 
 While spending the past few days/weeks in CodeReview, it has become
 abundantly clear to me that we absolutely must get away from this idea
 of doing huge refactorings in our working copies and landing them in trunk
 without any warning. The examples I'm going to use here are the
 RequestContext, Action and Blocking refactors.
 
 We've gotten into a very bad habit recently of doing a whole lot of work
 in
 secret in our respective working copies, and then pushing to trunk without
 first talking to the community to discuss our plans. This is a bad idea
 for a
 bunch of reasons.
 
 Firstly, it skips the community feedback process until after your code is
 already in trunk. By skipping this process--whether it's a formal RfC, or
 just
 chatting with your peers on IRC--you miss out on the chance to get
 valuable
 feedback on your architectural decisions before they land in trunk. Once
 code
 has landed in trunk it is almost always easier to follow up and continue
 to
 fix the code that should've been fully spec'd out before checkin.
 
 Also, the community *must* have the chance to call you crazy and say
 don't
 check that in, please. Going back to my examples of Actions, had the
 community
 been consulted first I would've raised objections about the decisions made
 with
 Actions (I think they should be moved to special pages and the old action
 urls
 made to redirect for back-compat...rather than solidifying the old and
 crappy
 action interface with a new coat of paint). Looking at RequestContexts,
 had we
 talked about this in an RfC first...we probably could've skipped the whole
 member variable vs. accessor debate and the several months of minor
 cleanups
 that's entailed (__get() magic is evil, IMHO)
 
 Secondly, this increases the load on reviewers, myself included. When you
 land
 a huge commit in trunk (like the Block rewrite), it takes *forever* to
 review
 the original commit + half a dozen or more followups. This drains reviewer
 time
 and leads to longer release cycles. I think I speak for everyone when I
 say this
 is bad. Small incremental changes are infinitely easier to review than
 large
 batch changes.
 
 If you need to make huge changes: do them in a branch. It's what I did
 with the
 installer and maintenance rewrites, what Roan and Trevor did with
 ResourceLoader
 and what Brian Wolff did with his metadata improvements. Of course after
 landing
 your branches in trunk there will inevitably be some cleanup required, but
 it
 keeps trunk more stable until the branch merge and makes it easier to back
 out
 if we decide to scrap the feature/rewrite.
 
 I know SVN branches suck. But the alternative is having a constantly
 unstable
 trunk due to alpha code that was committed haphazardly. Nobody wins in
 that
 scenario.
 
 So please...I beg everyone. Discuss your changes first. It doesn't have to
 be
 formal (although formal spec'ing is always useful too!), but even having a
 second set of eyes to glance over your ideas before committing never hurts
 anyone.
 
 -Chad
 



Re: [Wikitech-l] New Employee Announcement - Jeff Green

2011-06-30 Thread Aaron Schulz

Congrats! Special Ops sounds cool by the way ;)


CT Woo wrote:
 
 All,
 
 Please join me to welcome Jeff Green to Wikimedia Foundation.
 
 Jeff is taking up the Special Ops position in the Tech Ops department
 where
 one of his responsibilities is to keep our Fundraising infrastructure
 secured, in compliance with regulation, scalable and highly available.
 Jeff
 comes with strong systems operation background especially in scaling
 and building highly secured infrastructure. He hails from Craiglist where
 he
 started as their first system administrator and served as their lead
 system
 administrator as well as their Operations manager, most of his tenure
 there.
 
 When not working, Jeff likes cycling, playing music, and building stuff.
 He
 is a proud father of two young kids and a lucky husband. He and his family
 will be moving back to Massachusetts this August. Please drop by next week
 to the 3rd floor to welcome him. For those who have already met him
 earlier,
 do come by as well to see the new 'ponytailess' Jeff ;-)
 
 Thanks,
 CT



Re: [Wikitech-l] Some changes to $wgOut, $wgUser, Skin, and SpecialPage code patterns

2011-04-04 Thread Aaron Schulz

I second the idea of the static Linker class. It's far better than the
subclass system. Skin modification of links should focus on CSS anyway,
rather than trying to overload link-generating code.
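
In other words, call sites just use the static helper and skins restyle the
markup (a sketch, using the Linker::link() form mentioned below):

// No per-skin Linker subclass involved; the helper emits plain markup.
$html = Linker::link(
	Title::newFromText( 'Main Page' ),
	'the main page',
	array( 'class' => 'my-feature-link' )
);
// A skin then adjusts the appearance purely in CSS, e.g.:
//   .my-feature-link { color: #666; text-decoration: none; }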

Daniel Friesen-4 wrote:
 
 On 11-04-04 02:40 PM, Platonides wrote:
 I like it. Specially the Linker change. It really looks the way to have
 it.

 I'm considering
 making the Parser get its linker via $po->getLinker(); (either
 ParserOutput or ParserOptions, I need another look)
 The linker would be an input parameter, so it is a ParserOptions
 Yeah, I just couldn't remember which set of code that was when I wrote 
 the e-mail...
 
 It's moot now anyways, since Linker is now used statically as Linker::* 
 instead.
 
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
 
 



Re: [Wikitech-l] HipHop

2011-03-28 Thread Aaron Schulz

Two things:
(i) I'd really hope that subclassing would be very rare here. I don't think
this will be much of an issue though.
(ii) Also, it would be nice if developers could all have HipHop running on
their test wikis, so that code that's broken on HipHop isn't committed in
ignorance. The only problem is that, last time I checked, the dependency
list for HipHop is very considerable...and isn't available for Windows yet.
However, I believe Domas didn't need *too* many patches to get MW working,
which suggests that having to write code that compiles with HipHop won't be
that difficult and error-prone. If there can be a small yet complete list of
things that only work in regular PHP, then that might be an OK alternative
to each dev running/testing HipHop.

Otherwise,


Tim Starling-2 wrote:
 
 I think we should migrate MediaWiki to target HipHop [1] as its
 primary high-performance platform. I think we should continue to
 support Zend, for the benefit of small installations. But we should
 additionally support HipHop, use it on Wikimedia, and optimise our
 algorithms for it.
 
 In cases where an algorithm optimised for HipHop would be excessively
 slow when running under Zend, we can split the implementations by
 subclassing.
 
 I was skeptical about HipHop at first, since the road is littered with
 the bodies of dead PHP compilers. But it looks like Facebook is pretty
 well committed to this one, and they have the resources to maintain
 it. I waited and watched for a while, but I think the time has come to
 make a decision on this.
 
 Facebook now write their PHP code to target HipHop exclusively, so by
 trying to write code that works on both platforms, we'll be in new
 territory, to some degree. Maybe that's scary, but I think it can work.
 
 Who's with me?
 
 -- Tim Starling
 
 [1] https://github.com/facebook/hiphop-php/wiki/
 
 



Re: [Wikitech-l] Wikimedia schema changes

2011-03-01 Thread Aaron Schulz

It would be nice if FlaggedRevs/archives/patch-fi_img_timestamp.sql were run
on the wikis created before the patch.


Tim Starling-2 wrote:
 
 If there are any schema changes you want done on Wikimedia in the next
 batch, let me know. I have the following patch files queued up, to be
 run in the next few days:
 
 * patch-rd_interwiki.sql
 * patch-categorylinks-better-collation.sql
 
 -- Tim Starling
 
 



Re: [Wikitech-l] ResourceLoader + Windows + PHP bug 47689

2011-01-29 Thread Aaron Schulz

That's much better than using editbin :)

I added the following:
<IfModule mpm_winnt_module>
ThreadStackSize 8388608
</IfModule>

The stack overflow issue is gone now. This should be in the MW.org Apache
config docs and elsewhere (especially for 1.17, since nothing beforehand was
running into this limit).


Platonides wrote:
 
 The default thread stack for Apache binary is 256Kb [1]
 However, apr_thread_create() allows to use a different stack size
 (apr_threadattr_stacksize_set).
 The value used is stored in the global variable ap_thread_stacksize
 which can be set in ThreadStackSize at httpd.conf
 http://httpd.apache.org/docs/2.2/mod/mpm_common.html#threadstacksize
 
 Can you confirm that increasing it fixes your problem?
 
 It surprises me that Pierre recommended tweaking the PE header instead
 of the config option.
 
 
 [1] Here are the relevant fields for
 httpd-2.2.17-win32-x86-openssl-0.9.8o.msi
   40000 size of stack reserve
    1000 size of stack commit
  100000 size of heap reserve
    1000 size of heap commit
 
 



[Wikitech-l] ResourceLoader + Windows + PHP bug 47689

2011-01-28 Thread Aaron Schulz

In JavaScriptDistiller, inside of the createParser() function, we have:
$parser->add( '/\\/\\*(.|[\\r\\n])*?\\*\\//' );

It took me hours to track down that this was causing Apache 2.1.11 to crash
on nearly any page view on my test wiki. This happened when a large JS
bundle is loaded, such as:
load.php?debug=false&lang=en&modules=jquery.checkboxShiftClick|jquery.client|jquery.cookie|jquery.makeCollapsible|jquery.placeholder|mediawiki.action.watch.ajax|mediawiki.language|mediawiki.legacy.ajax|mediawiki.legacy.diff|mediawiki.legacy.mwsuggest|mediawiki.legacy.wikibits|mediawiki.util&skin=vector&version=20110129T005517Z

I made a simple php file to reproduce this. It crashes when viewed over
apache but not CLI. It appears to be http://bugs.php.net/bug.php?id=47689
(which uses a similar regex).
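
The reproduction boils down to running that comment-stripping pattern over a
long input: the (.|[\r\n])*? alternation makes PCRE recurse once per matched
character, which overflows the small per-thread stack under Apache on
Windows (a sketch of the kind of test file, not the exact one):

<?php
// Long comment => deep PCRE recursion => stack overflow in the Apache
// worker thread (the CLI has a much larger stack, so it survives there).
$js = "/* " . str_repeat( 'x', 100000 ) . " */ var a = 1;";
echo preg_replace( '/\/\*(.|[\r\n])*?\*\//', '', $js );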

Is this something worth adding a note about somewhere, or tweaking some code?


Re: [Wikitech-l] Fwd: Re: [Mediawiki-l] vector components many blank lines

2010-12-09 Thread Aaron Schulz

+1 to this. Let's focus more on the changes and ideas and less on the authors.

IMO, when I keep seeing things like "he did"/"he changed" and "[so and so]
made it so that" instead of things like "the change made it", it throws up red
flags. Just discuss the change, and mention the person minimally (to help
identify the changes or get the attention of person X) or not at all. I've
also seen a lot of things like "misguided" and "bad idea" by some people.
This raises red flags too. Just discuss *what* the problems are, rather than
saying "this decision sucks".

I've seen this pattern by more than one person.

Tim Starling-2 wrote:
 
 On 08/12/10 03:11, Trevor Parscal wrote:
 These blank lines should not - under any circumstances - be here. But I
 
 do know why they are...
 
 Tim Starling modified the standard distribution of JSMin[1] in some good
 and some bad ways. These blank lines are the result of one of these
 modifications which I find to be misguided. He's basically only
 compressed horizontal white-space, leaving new line characters in place.
 The blank lines you see are where the comments used to be.
 
 I have made this point before, clearly upon deaf ears - but I will make
 it again.
 
 You could have just said "because Tim thought it would make debugging
 easier", and left out all the insults: "misguided", "deaf ears", etc.
 We've each given our opinions on this issue previously, the only thing
 you've added here is a dollop of incivility.
 
 Your bullying has not changed my position. I think this is a minor
 issue, and I have better things to do than to argue about it. I don't
 intend on doing any more work on JSMin for the time being. Feel free
 to make the relevant change yourself.
 
 -- Tim Starling
 
 



Re: [Wikitech-l] reintroduce myself

2010-10-23 Thread Aaron Schulz

Is this your triumphant return? :)


Ashar Voultoiz-4 wrote:
 
 Hello,
 
 Just a quick message to reintroduce myself to people who might be 
 wondering who is this new committer.
 
 I am from France and discovered Wikipedia in 2002.  Getting interested 
 in bug fixing, I have eventually been granted commit access by Tim or 
 Brion back in 2003 or 2004.
 
 I haven't contributed a lot of code but have an overall knowledge of 
 MediaWiki.  I mostly fixed funny bugs, converted double quotes to single 
 quotes and occasionally synced stuff to live (read: blank page on live 
 site).
 
 I am back around since a few weeks and willing to contribute again to 
 MediaWiki development.  I have no aim in particular beside having fun 
 and meeting some new people.  My area of interests are in no special order
 :
   - parser (still have to understand Tim's preprocessing stuff)
   - ajax features
   - testing
   - IPv6
 
 My secret project is to migrate to git.
 
 I beg your pardon for my very basic english :^b
 
 
 I got a short user page at :
http://en.wikipedia.org/wiki/User:Hashar
 My main page is on the french wikipedia (french language only) :
http://fr.wikipedia.org/wiki/Utilisateur:Hashar
 
 
 
 -- 
 Ashar hashar Voultoiz
 
 



Re: [Wikitech-l] Cruel and unusual abuse of DNS to make a command-line Wikipedia search

2009-08-04 Thread Aaron Schulz

Clever :)

-Aaron Schulz



 Date: Tue, 4 Aug 2009 09:48:28 +0100
 From: dger...@gmail.com
 To: wikitech-l@lists.wikimedia.org
 Subject: [Wikitech-l] Cruel and unusual abuse of DNS to make a command-line   
 Wikipedia search
 
 http://lifehacker.com/5329014/search-wikipedia-from-the-command-line
 
 
 - d.
 

_
Get your vacation photos on your phone!
http://windowsliveformobile.com/en-us/photos/default.aspx?OCID=0809TL-HM
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] flagged revisions

2009-06-19 Thread Aaron Schulz

Some issues were waiting on the big scap, which has since happened thanks to Tim
(along with the syncing of fixes).

-Aaron Schulz


 
 Date: Sat, 20 Jun 2009 08:47:03 +1000
 From: thepmacco...@gmail.com
 To: wikitech-l@lists.wikimedia.org
 CC: foundatio...@lists.wikimedia.org
 Subject: Re: [Wikitech-l] flagged revisions
 
 Hi all,
 It's been 10 days since the last note on flagged revisions, which is
 sufficiently important to warrant a follow up at this point in my view. I'll
 try and focus the questions a bit in order not to pester, but with the
 intention of helping things forward;
 see https://bugzilla.wikimedia.org/show_bug.cgi?id=18244 for bug details.
 * Flagged Revisions is approved for use on the English Wikipedia, my
 understanding is that there really isn't that much technical work still to
 do on the extension - is this true?
 * Is there anything a regular editor such as myself can do to help
 prioritise this in the hearts, minds and fingers of our wonderful
 developers?
 * Personally, I believe this function to be one of the most important
 matters before the foundation currently, I further believe that this view is
 relatively widely held (and sure, widely reviled too - but this is a wiki,
 right!) - I've copied foundation-l in on this note with the intention of
 further general discussion occurring there, and bug-specific chat only on
 the wiki-tech list, I hope this is an appropriate use of resources :-)
 I've offered appreciation, a dollop of charm, and a little bit of money to
 try and keep this moving forward I'm not sure I'm above offering sex, so
 please throw me a bone for the sake of the decorum of these lists, if
 nothing else :-)
 best,
 Peter,
 PM.
 
 On Wed, Jun 10, 2009 at 12:46 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 
  Am I confused or didn't enwp approved flagged revisions, but then it
  was held up due to purely technical reasons ... what is this crap
  now?
 
  -- Forwarded message --
  From: K. Peachey p858sn...@yahoo.com.au
  Date: Tue, Jun 9, 2009 at 10:29 PM
  Subject: Re: [Wikitech-l] flagged revisions
  To: Wikimedia developers wikitech-l@lists.wikimedia.org
 
 
  On Wed, Jun 10, 2009 at 12:08 PM, private musings thepmacco...@gmail.com
  wrote:
   with apologies for re-vitalising a slightly old thread -I have a couple
  of
   follow ups, which it'd be great to try and make some progress on
   My understanding is that Aaron (whom I haven't 'met' - so hello!) has
   completed work on a test configuration of flagged revisions - I hope it's
   appropraite for me to ask directly on this list whether or not Aaron
   considers this development complete? (my understanding is that the
  extension
   is pretty much ready to go?)
   There is understandably considerable interest in the timeframe for
   installing flagged revisions, I would hope it would be a positive step to
   set some timeframes a bit tighter than 'hopefully by wikimedia' ;-) - is
   this list an appropriate context for such discusison, and if so
  (hopefullly)
   - could someone appropriately empowered flesh out the next steps a bit
  more,
   and maybe try and establish a timetable of sorts?
   My intention in posting about this every so often is to ensure that such
  an
   important development doesn't sort of slip through the cracks - I think
   communication on this matter has to date been ok, but not great - it'll
  be
   cool to improve it a bit :-)
   cheers,
   Peter,
   PM.
  The implementations depend on a per wiki basis depending on consensus,
  for example, wikinews and a few others such as the German Wikipedia
  already run it.
 
  The en.wiki is currently also looking at a slightly modified version
  nicked named Flagged Protections which is basically designed to work
  the same way protection does, articles are only covered by it when
  protected to a certain level.
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 

_
Microsoft brings you a new way to search the web.  Try  Bing™ now
http://www.bing.com?form=MFEHPGpubl=WLHMTAGcrea=TEXT_MFEHPG_Core_tagline_try 
bing_1x1
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Progress on the Flagged Revisions front

2009-03-30 Thread Aaron Schulz
Also note that the 'patrolled revisions' aspect is particularly messy to
implement. Some changes have been made, but more are still needed in the
extension. I'm tempted to split off patrolling into its own code and other
tables.

-Aaron

--
From: Brion Vibber br...@wikimedia.org
Sent: Monday, March 30, 2009 1:02 PM
To: Wikimedia developers wikitech-l@lists.wikimedia.org
Subject: Re: [Wikitech-l] Progress on the Flagged Revisions front

 On 3/29/09 3:32 AM, private musings wrote:
 Because I'm an idiot, I tried to send this before subscribing to this
 particular list. I don't have high hopes of understanding much herein,
 but hope the below is clear :-)

 It's all in the queue; further code and UI cleanup is ongoing.

 -- brion

