[Wikitech-l] Fwd: [opentranslation] OTT09: Call for Participants!

2009-04-23 Thread Brianna Laugher
Hi,

I think this event will be of great interest and relevance to a number
of MediaWiki developers. I hope we can have a more significant presence
at this year's event than at the first one in 2007
(http://www.aspirationtech.org/events/opentranslation).

cheers
Brianna


-- Forwarded message --
From: Allen Gunn gun...@aspirationtech.org
Date: 2009/4/23
Subject: [opentranslation] OTT09: Call for Participants!
To: Open Translation Tools Discussion List
opentranslat...@lists.aspirationtech.org



Hey friends,

We'll be doing broader announcements later this week, but I just wanted
to let folks know that we're ready to find out who's coming to Amsterdam
in June!

Please spread the word on all fronts! Blog it, tweet it, etc!

http://aspirationtech.org/events/opentranslation/2009

peace,
gunner

--
Allen Gunn
Executive Director, Aspiration
+1.415.216.7252
www.aspirationtech.org

Aspiration: Better Tools for a Better World




-- 
They've just been waiting in a mountain for the right moment:
http://modernthings.org/



Re: [Wikitech-l] On extension SVN revisions in Special:Version

2009-04-23 Thread Mark Clements (HappyDog)

Chad innocentkil...@gmail.com wrote in message 
news:5924f50a0904220554n32c3a4ecrd1cbc8cebcd74...@mail.gmail.com...
 On Wed, Apr 22, 2009 at 7:12 AM, Mark Clements (HappyDog)
 gm...@kennel17.co.uk wrote:

[Description of WikiDB implementation of revision handling SNIPPED]

 Perhaps we should add a GetCredits hook, to be called on Special:Version
 in order to get the credits info for the extension? If the hook is not
 found, or returns false, then the info in $wgExtensionCredits for that
 extension is used; otherwise the array returned from the function (which
 is in the same format) will be used instead. This would mean that the
 extension could use this function to include() all available files in
 order to get the revision number, but wouldn't need to include them on
 normal pages (thus avoiding the performance hit). Wouldn't solve the
 problem of non-PHP files being updated, but would solve the rest.


 Not sure it's worth it :-\ What's wrong with just giving version numbers
 that make sense, rather than relying on the revision number which isn't
 indicative of anything?

It means a lot more admin overhead, having to update a version file 
(wherever it is) whenever you make a change, and you may often forget to 
bump the version number, particularly when making minor tweaks. This is 
not so much of a problem when you are in charge of the release schedule, 
as you can build a version-number update into your release process. But 
if the code is being pulled in real time from a repository, that doesn't 
work: each individual revision needs to be considered a separate version, 
and if you forget to update the version with every single commit, the 
versioning becomes useless.  This is really something that should be 
automated - humans are rubbish at this kind of thing!

For WikiDB I have adopted a simple approach.  The version number is 
incremented when potentially incompatible changes are made (or changes 
that require an update script to be run), and the revision is always the 
latest repository revision of the code.  I am currently on v2 r177, for 
example.  The revision number is not indicative of much, but it does 
uniquely identify a release and it does tell you whether you are running 
the latest version; to be honest, these two numbers tell you pretty much 
everything you need to know about the extension's version.  The only 
problem is making sure that the revision is always up to date, which is 
solved (at least in this case) by the method I described.
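
For concreteness, a rough and untested sketch of how the GetCredits idea
above might look (no such hook exists yet; efWikiDBGetCredits() and the
$Rev$-scraping below are just one possible way to automate the revision
lookup):

<?php
# Hypothetical only: there is no GetCredits hook in core, and
# efWikiDBGetCredits() is an invented name.
$wgHooks['GetCredits'][] = 'efWikiDBGetCredits';

function efWikiDBGetCredits() {
    # Find the highest SVN $Rev$ keyword across the extension's files,
    # so the number is only computed when Special:Version is rendered.
    $maxRev = 0;
    foreach ( (array)glob( dirname( __FILE__ ) . '/*.php' ) as $file ) {
        if ( preg_match( '/\$Rev: (\d+) \$/', file_get_contents( $file ), $m ) ) {
            $maxRev = max( $maxRev, intval( $m[1] ) );
        }
    }
    # Same array format as a $wgExtensionCredits entry; returning false
    # instead would fall back to $wgExtensionCredits as now.
    return array(
        'name'    => 'WikiDB',
        'author'  => 'Mark Clements',
        'version' => '2 r' . $maxRev,
    );
}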

- Mark Clements (HappyDog)





[Wikitech-l] New preferences system

2009-04-23 Thread Andrew Garrett
I've branch-merged the new preferences system that I've spent the last
few weeks developing.

On the outside, you probably won't notice any difference except a few
bugfixes, but the internals have undergone a complete rewrite.

All of the actual preference definitions and utility functions have
been separated out into Preferences.php, which holds all business
logic for the new system. The UI and submission logic is handled in
SpecialPreferences.php, which is now only a hundred lines long and wraps
'HTMLForm', a generic class I've written to encourage separation of
business and UI logic.

The advantage of this clear separation is that writing an API module
is very simple, and it can be called internally, too!

Extensions must now hook GetPreferences instead of the existing hooks
(which were too low-level to maintain compatibility with); I've
updated all extensions used on Wikimedia. This new hook allows you to
put preferences wherever you want, and a new preference can be added
in fewer than ten lines of code, rather than the hundred-line nightmare
that was required in the previous iteration.
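
Roughly, adding a preference now looks like this (the preference name and
message keys below are placeholders; the array keys follow the HTMLForm
field format):

<?php
$wgHooks['GetPreferences'][] = 'efMyExtensionPreferences';

function efMyExtensionPreferences( $user, &$preferences ) {
    # 'myextension-fancy' and its label message are made-up examples.
    $preferences['myextension-fancy'] = array(
        'type'          => 'toggle',                  # HTMLForm checkbox
        'label-message' => 'myextension-fancy-label', # i18n message key
        'section'       => 'rendering/advanced',      # tab/section it appears in
    );
    return true;
}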

I'd like to look towards trimming some of the existing preferences
that are no longer relevant, and adding new preferences as common
sense dictates.

Feedback, praise and criticism regarding the changes are certainly welcome!

-- 
Andrew Garrett
Sent from Sydney, NSW, Australia



Re: [Wikitech-l] Google Summer of Code: accepted projects

2009-04-23 Thread Wu Zhe
Michael Dale md...@wikimedia.org writes:

 I recommended that the image daemon run semi-synchronously, since the
 changes needed to maintain multiple states and return non-cached
 place-holder images, while managing updates and page purges for when the
 updated images become available within the Wikimedia server architecture,
 probably won't be completed in the summer of code time-line. But if the
 student is up for it, the concept would be useful for other components
 like video transformation / transcoding, sequence flattening, etc. But
 it's not what I would recommend for the summer of code time-line.

I may have trouble understanding the concept of "semi-synchronous": does
it mean that when MW parses a page containing thumbnail images, the parser
sends requests to the daemon, which replies twice for each request, once
immediately with a best fit or a placeholder (synchronously), and once
later when the thumbnail is ready (asynchronously)?

 == what the image resize efforts should probably focus on ==

 (1) making the existing system more robust and (2) better taking
 advantage of multi-threaded servers.

 (1) right now the system chokes on large images; we should deploy
 support for an in-place image resize, maybe something like vips (?)
 (http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use)
 The system should intelligently call vips to transform the image to a
 reasonable size at upload time, then use those derivatives for
 just-in-time thumbs for articles. (If vips is unavailable we don't
 transform and we don't crash the apache node.)

Wow, vips sounds great; I'm still reading its documentation. How is its
performance on relatively small images (not huge, a few hundred pixels in
width/height) compared with traditional single-threaded resizing
programs?
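
For the upload-time pre-scaling, I imagine something along these lines;
$wgPrescaleCommand and prescaleOnUpload() are invented names, and the
actual vips invocation is left to configuration since I haven't checked
its flags:

<?php
function prescaleOnUpload( $srcPath, $dstPath ) {
    global $wgPrescaleCommand; # e.g. a vips command using %src% / %dst%
    if ( !$wgPrescaleCommand ) {
        # vips unavailable: don't transform, and don't crash the apache node.
        return false;
    }
    $cmd = str_replace(
        array( '%src%', '%dst%' ),
        array( wfEscapeShellArg( $srcPath ), wfEscapeShellArg( $dstPath ) ),
        $wgPrescaleCommand
    );
    $retval = 1;
    wfShellExec( $cmd, $retval );
    # Only use the derivative for later thumbnailing if the resize succeeded.
    return $retval === 0 && file_exists( $dstPath );
}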

 (2) maybe spinning out the image transform process early on in the
 parsing of the page, with a place-holder and callback, so that by the time
 all the templates and links have been looked up the image is ready for
 output. (maybe another function wfShellBackgroundExec($cmd,
 $callback_function), perhaps using pcntl_fork, then a normal wfShellExec,
 then pcntl_waitpid, then the callback function ... which sets some var
 in the parent process so that pageOutput knows it's good to go)
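
(For what it's worth, I imagine that helper would look roughly like the
untested sketch below; wfShellBackgroundExec() / wfShellBackgroundWait()
are hypothetical, and forking inside an Apache worker has its own
problems, so treat it as a sketch only.)

<?php
function wfShellBackgroundExec( $cmd ) {
    $pid = pcntl_fork();
    if ( $pid === -1 ) {
        wfShellExec( $cmd ); # fork failed: fall back to a blocking call
        return false;
    }
    if ( $pid === 0 ) {
        wfShellExec( $cmd ); # child: do the slow resize, then exit
        exit( 0 );
    }
    return $pid;             # parent: carry on resolving templates/links
}

# Called once the rest of the page is ready, so pageOutput knows it's good to go.
function wfShellBackgroundWait( $pid, $callback ) {
    $status = 0;
    pcntl_waitpid( $pid, $status );
    call_user_func( $callback, pcntl_wexitstatus( $status ) );
}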

An asynchronous daemon doesn't make much sense if the page purge happens
on the server side, but what if we push the page purge off to the browser?
It works like this:

1. the MW parser sends a request to the daemon
2. the daemon finds the work non-trivial, so it replies *immediately* with
   a best fit or just a place holder
3. the browser renders the page, finds it's not final, and sends a request
   to the daemon directly using AJAX
4. the daemon replies to the browser when the thumbnail is ready
5. the browser replaces the temporary best fit / place holder with the new
   thumb using Javascript

The daemon now has to deal with two kinds of clients: MW servers and
browsers (a rough sketch of this reply-now / poll-later protocol is at the
end of this mail).

Letting the browser wait instead of the MW server has the benefit of
reduced latency for users, who still have an acceptable page to read
before the image replacement takes place and a perfect page after that.
For most users, the replacement will likely happen as soon as page loading
ends, since transferring the page takes some time and the daemon will
probably have finished thumbnailing by then.
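
To make the "two kinds of clients" concrete, the daemon side might look
something like this (every name and path below is invented; it's only a
sketch of the reply-now / poll-later protocol, not working daemon code):

<?php
function thumbPath( $file, $width ) {
    # Invented storage scheme, just so the sketch is self-contained.
    return '/var/thumbs/' . md5( $file . '-' . $width ) . '.jpg';
}

# Steps 1-2: the MW parser asks for a thumb of $file at $width.
function handleParserRequest( $file, $width ) {
    $thumb = thumbPath( $file, $width );
    if ( file_exists( $thumb ) ) {
        return array( 'status' => 'ready', 'url' => $thumb );
    }
    queueResizeJob( $file, $width ); # assumed asynchronous job queue
    # Reply immediately with the closest existing derivative or a placeholder.
    return array( 'status' => 'pending', 'url' => '/images/placeholder.png' );
}

# Steps 3-4: the browser polls via AJAX until the thumb exists, then swaps
# it in with Javascript (step 5).
function handleBrowserPoll( $file, $width ) {
    $thumb = thumbPath( $file, $width );
    return file_exists( $thumb )
        ? array( 'status' => 'ready', 'url' => $thumb )
        : array( 'status' => 'pending' );
}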


