Re: [Wikitech-l] speed of Vector in en.wikipedia

2010-05-21 Thread Magnus Manske
On Fri, May 21, 2010 at 2:04 AM, K. Peachey p858sn...@yahoo.com.au wrote:
 For others interested, it's now been filed in bugzilla:
 https://bugzilla.wikimedia.org/show_bug.cgi?id=23612

Forgot my bugzilla login, so here:
[[The Holocaust]]
Google Chrome 5.0.375.38 beta on Windows XP
Vector : 6.4 +- 0.2 sec
Monobook : 5.5 +- 0.1 sec



Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-21 Thread Tisza Gergo
Aryeh Gregor Simetrical+wikilist at gmail.com writes:

 At the very least, newlines should be preserved, so you can get a line
 number when an error occurs.  Stripping other whitespace and comments is
 probably worth the performance gain, from what I've heard, annoying though
 it may occasionally be.  Stripping newlines is surely not worth the added
 debugging pain, on the other hand.  (Couldn't you even make up for it by
 stripping semicolons?)

Stripping semicolons is dangerous; see e.g.
http://stackoverflow.com/questions/444080/do-you-recommend-using-semicolons-after-every-statement-in-javascript/1169596#1169596
(not to mention that you would have to differentiate between statement
separators and argument separators in for(...) constructs).

I agree though that the small bandwidth gain of stripping newlines is not
worth the pain. Even if minification can be turned off manually, it would
make user reports nearly useless, and some JavaScript errors are not easy to
reproduce.

Combining files has a negative side effect too: since an error stops script
execution, once something breaks, everything breaks (without combining,
execution would have continued with the next file). This could be mitigated
somewhat by wrapping code sections in try-catch blocks (syntax errors cannot
be caught, but anything else can).
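
To make that mitigation concrete, here is a rough sketch (illustrative only,
not the actual combiner output, with hypothetical file names) of what a
combined script could look like with each original file wrapped in its own
try-catch. A runtime error in the first section no longer prevents the second
from running; a syntax error still breaks the whole batch, because it is
raised at parse time, before any try block takes effect:

try {
    // ... contents of fileA.js ...
    someUndefinedFunction();         // a runtime error somewhere in fileA.js
} catch (e) {
    if (window.console) { console.error('fileA.js failed:', e); }
}
try {
    // ... contents of fileB.js ...
    document.title = document.title; // still runs despite the error above
} catch (e) {
    if (window.console) { console.error('fileB.js failed:', e); }
}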




Re: [Wikitech-l] Selenium testing framework

2010-05-21 Thread Dan Nessett
On Mon, 17 May 2010 20:16:35 +, Dan Nessett wrote:

 On Mon, 17 May 2010 19:11:21 +, Dan Nessett wrote:
 
 During the meeting last Friday, someone (I'm sorry, I don't remember who)
 mentioned he had created a test that runs with the currently checked-in
 Selenium code. Is that test code available somewhere (it doesn't appear to
 be in the current revision)?
 
 I found the answer. On the SeleniumFramework page is a pointer to a
 worked example (see:
 http://www.mediawiki.org/wiki/SeleniumFramework#Working_example). The
 instructions for getting the tests to work aren't totally transparent.
 The test file you include is:
 
 ../phase3/extensions/PagedTiffHandler/selenium/PagedTiffHandler_tests.php
 
 (Not: ../phase3/extensions/PagedTiffHandler/tests/PagedTiffHandlerTest.php)
 
 Also, the instructions in SOURCES.txt specify getting all of the test
 images from:
 
 http://www.libtiff.org/images.html
 
 But when accessing the URL supplied on that page for the images
 (ftp://ftp.remotesensing.org/pub/libtiff/pics-3.6.1.tar.gz), a FILE NOT
 FOUND error is returned. There is a newer version of the pics file in
 ..libtiff, but it does not contain the correct images. The correct URL is
 ftp://ftp.remotesensing.org/pub/libtiff/old/pics-3.6.1.tar.gz. However,
 this tar file does not include the images required by the PagedTiffHandler
 tests.

I thought I would apprise readers of this thread that I have the 
PagedTiffHandler example working. The problem turned out to be a bug in 
the Framework that meant LocalSeleniumSettings wasn't being read at the 
correct point in the startup sequence. According to Markus, he has fixed 
this bug and is now testing it. I presume he will let us know when the 
fix is committed.

If you want to run the example before the fix is available, just add all of
your configuration settings directly to RunSeleniumTests.php.

Dan

-- 
-- Dan Nessett




Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-21 Thread Aryeh Gregor
On Thu, May 20, 2010 at 3:20 PM, Michael Dale md...@wikimedia.org wrote:
 I like the idea of a user preference. This way you don't constantly have
 to add debug to the URL, it's easy to test across pages, and it is more
 difficult for people to accidentally invoke it.

It also means that JavaScript developers will be consistently getting
served different code from normal users, and therefore will see
different sets of bugs.  I'm unenthusiastic about that prospect.



[Wikitech-l] importing wikimedia dumps using mwdumper

2010-05-21 Thread Nathan Day
Hi everyone,

I am new to the mailing list. I joined because I am seeking some help with a
project I am working on, and I hope to contribute to solving other people's
problems as well.

I have been trying to set up a mirror site of Wiktionary on my Mac and have
been running into some difficulties. Here is what I am using:

Product versions:
MediaWiki 1.15.3
PHP 5.2.12 (apache2handler)
MySQL 5.1.46

Extensions
Cite (Version r47190)
ParserFunctions (Version 1.1.1)
Winter (Wiki INTERpreter) (Version 2.2.0)

The process that I have for importing the database dumps is:

download pages-articles.xml.bz2 and the other .sql link-table files for
Wiktionary

run the mwdumper.jar file, which I built from source, on the XML file

run mysql with all the .sql files

run rebuildall.php to rebuild the link tables

Here are the issues that I am having:

1. Articles are successfully imported into the database, but are not viewable
in MediaWiki: I can find an article in the db for "dog", for example, but I
can't see that article when I enter the corresponding URL.

2. For the articles that do show up, the templates are not transcluding. For
an article that uses Template:Hat, for example, where I can see that the
template page exists in the db, MediaWiki acts as if the template doesn't
exist.

If anyone has any experience with these kinds of problems or importing
database dumps in general, your help would be much appreciated. Thank you!

Nathan Day


Re: [Wikitech-l] importing wikimedia dumps using mwdumper

2010-05-21 Thread Roan Kattouw
2010/5/21 Nathan Day nathanryan...@gmail.com:
 I have been trying to set up a mirror site of wiktionary
[snip]
 1. Articles are successfully imported into the database, but are not viewable
 in MediaWiki: I can find an article in the db for "dog", for example, but I
 can't see that article when I enter the corresponding URL.

You probably need to set $wgCapitalLinks = false;. Wiktionary uses this
setting so it can have page names with a lowercase first letter. On your
wiki, it is probably set to true (the default), causing MediaWiki to miss
the "dog" entry because it'll be looking for "Dog".

Roan Kattouw (Catrope)



Re: [Wikitech-l] importing wikimedia dumps using mwdumper

2010-05-21 Thread Nathan Day
Thanks for the solution, Roan; that fixed the issue! Hip hip, hoorah, and
three cheers for Roan.

On Fri, May 21, 2010 at 12:40 PM, Roan Kattouw roan.katt...@gmail.com wrote:

 2010/5/21 Nathan Day nathanryan...@gmail.com:
  I have been trying to set up a mirror site of wiktionary
 [snip]
  1. Articles are successfully imported into the database, but are not
  viewable in MediaWiki: I can find an article in the db for "dog", for
  example, but I can't see that article when I enter the corresponding URL.
 
 You probably need to set $wgCapitalLinks = false;. Wiktionary uses this
 setting so it can have page names with a lowercase first letter. On your
 wiki, it is probably set to true (the default), causing MediaWiki to miss
 the "dog" entry because it'll be looking for "Dog".

 Roan Kattouw (Catrope)




Re: [Wikitech-l] speed of Vector in en.wikipedia

2010-05-21 Thread William Pietri
On 05/20/2010 09:15 AM, Amir E. Aharoni wrote:
 On various Vector feedback pages as well as on OTRS many people report that
 since the switch to Vector it takes significantly more time for Wikipedia
 pages to load.[...]

 Are there any more precise measurements?




I don't know how useful it is, but recently I helped a client build some
JS-based, in-browser page-load performance monitoring. It tracks various
rendering events for a chosen percentage of pageviews. The only server-side
code processes web server logs in batches, so it is pretty low-impact, and
it works with cached pages. It has been served on a few hundred million
pageviews with no obvious problems yet.
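
(For readers wondering what such instrumentation might look like, here is a
minimal sketch under assumed names and parameters -- not William's actual
code: sample a fraction of pageviews, record a load-time milestone, and
report it via an image beacon, so the only server-side work is parsing the
web server access logs.)

(function () {
    var SAMPLE_RATE = 0.01;             // assumption: instrument 1% of pageviews
    if (Math.random() >= SAMPLE_RATE) { return; }

    var start = new Date().getTime();   // script start, a rough proxy for page start
    var previousOnload = window.onload; // keep any handler that is already set

    window.onload = function () {
        if (previousOnload) { previousOnload(); }
        var loadMs = new Date().getTime() - start;
        // hypothetical endpoint: the request only needs to appear in the access logs
        var beacon = new Image();
        beacon.src = '/timing.gif?load=' + loadMs +
                     '&page=' + encodeURIComponent(location.pathname);
    };
}());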

I think most of the server-side code is pretty particular to their 
needs, but if somebody wants it for Wikipedia, I'm sure they'd be 
willing to give up the client-side stuff and my rough-and-ready initial 
pass at the log parsing, which is in Ruby. If that's useful, let me know 
off-list and I'll ask 'em for permission.

William




Re: [Wikitech-l] speed of Vector in en.wikipedia

2010-05-21 Thread Roan Kattouw
2010/5/22 William Pietri will...@scissor.com:
 I don't know how useful it is, but recently I helped a client build some
 JS-based, in-browser page-load performance monitoring. It tracks various
 rendering events for a chosen percentage of pageviews. The only server-side
 code processes web server logs in batches, so it is pretty low-impact, and
 it works with cached pages. It has been served on a few hundred million
 pageviews with no obvious problems yet.

 I think most of the server-side code is pretty particular to their
 needs, but if somebody wants it for Wikipedia, I'm sure they'd be
 willing to give up the client-side stuff and my rough-and-ready initial
 pass at the log parsing, which is in Ruby. If that's useful, let me know
 off-list and I'll ask 'em for permission.

I'm not sure we need this. I don't see a reason why one of us
usability developers can't just load pages, find out whether they're
slower, and where the slowness is. If the slowness is present for
everyone (many different people reporting slowness seems to indicate
that, and I have no reason to believe otherwise) you don't need to
jump through hoops to gather data from random users when you can
easily reproduce it yourself.

Roan Kattouw (Catrope)



Re: [Wikitech-l] speed of Vector in en.wikipedia

2010-05-21 Thread William Pietri
On 05/21/2010 03:39 PM, Roan Kattouw wrote:
 I'm not sure we need this. I don't see a reason why one of us
 usability developers can't just load pages, find out whether they're
 slower, and where the slowness is. If the slowness is present for
 everyone (many different people reporting slowness seems to indicate
 that, and I have no reason to believe otherwise) you don't need to
 jump through hoops to gather data from random users when you can
 easily reproduce it yourself.


I'm not sure you need it either; I just thought I'd offer what I had on 
hand. But the reason we built it was to quantify the problem so we could 
narrow down and prioritize issues.

There are a lot of variables in browser performance, and we found it 
frustrating to try to simulate various user conditions (OS versions, 
browser versions, physical location, bandwidth) and get solid, 
statistically valid measurements that we thought correlated well with 
what people were actually experiencing. After enough futzing with that, 
it was a relief to get some actual numbers rolling in automatically.

Last I heard they were going to set it up to graph aggregate client-side 
performance over time, so that they could easily see if their normal 
feature changes had unexpected browser performance impact. They want to 
solve the problems before users complain. So few of them do, especially 
about something subtle like performance.

William




Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-21 Thread Michael Dale
Aryeh Gregor wrote:
 On Thu, May 20, 2010 at 3:20 PM, Michael Dale md...@wikimedia.org wrote:
   
  I like the idea of a user preference. This way you don't constantly have
  to add debug to the URL, it's easy to test across pages, and it is more
  difficult for people to accidentally invoke it.
 

 It also means that JavaScript developers will be consistently getting
 served different code from normal users, and therefore will see
 different sets of bugs.  I'm unenthusiastic about that prospect.
   
hmm ... if the minification resulted in different bugs than the raw code,
that would be a bug in the minification process, and you would want to fix
that minification bug. You would want to know where the error occurred in
the minified code.

Preserving newlines is a marginal gain in error accessibility when you're
grouping many scripts, replacing all the comments with newlines, stripping
debug lines, and potentially shortening local-scope variable names. Once you
set out to fix an issue, you will be fixing it in the actual code, not the
minified output, so you will need to recreate the bug with the non-minified
output anyway.

In terms of compression, all things being equal, if you use \n instead of
semicolons, consider:

a = b + c;
(d + e).print();

With \n instead of ';' this will be evaluated as:

a = b + c(d + e).print();

We make wide use of parenthetical modularization, i.e. all the jQuery
plugins do something like:

(function($){ /* plugin code in local function scope, using $ for jQuery */ })(jQuery);

Initialization code on the line above such a block will result in errors
once ';' is substituted with \n.


The script-debug preference is intended for user-script development that is
hosted live and developed on wiki pages on the server. Development of core
and extension JavaScript components should be tested locally in both
minified and non-minified modes.

peace,
--michael





Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-21 Thread Ilmari Karonen
On 05/22/2010 04:36 AM, Michael Dale wrote:

 Preserving newlines is a marginal gain in error accessibility when you're
 grouping many scripts, replacing all the comments with newlines, stripping
 debug lines, and potentially shortening local-scope variable names. Once you
 set out to fix an issue, you will be fixing it in the actual code, not the
 minified output, so you will need to recreate the bug with the non-minified
 output anyway.

The gain would be that most browsers only report the filename (URL) and 
line number where a JavaScript error occurs.  If you group all your 
scripts into one file with one line, you basically lose _all_ 
information about _where_ the error happens.
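
As a small illustration (a sketch, not existing MediaWiki code): the
information most browsers hand to window.onerror for an uncaught error is
essentially just a message, the script URL and a line number, so once
everything is served as a single file on a single line, that URL/line pair
stops telling you anything useful.

window.onerror = function (message, url, line) {
    // With per-file scripts, url and line point near the real problem;
    // with one combined, newline-stripped file they are always the same.
    if (window.console) {
        console.error('JS error: ' + message + ' at ' + url + ':' + line);
    }
    return false; // keep the browser's default error handling as well
};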

But I do think that the debug mode you added in r66703 is probably an
adequate substitute, since, as you note, any bug that only occurs in the
combined and minified code counts as a bug in the minifier.  At least as
long as we actually have people able and willing to promptly fix any such
bugs (including any regressions triggered by old or unusual browsers and
custom user scripts) when they're found.

-- 
Ilmari Karonen



Re: [Wikitech-l] speed of Vector in en.wikipedia

2010-05-21 Thread Mike.lifeguard

On 05/21/2010 03:59 PM, William Pietri wrote:
 They want to solve the problems before users complain. So few of them
 do, especially about something subtle like performance.

Luckily, our userbase loves complaining, and is good at it, so they do
it fairly often :)

-Mike
