Re: [Wikitech-l] Bugzilla Weekly Report

2010-05-17 Thread Roan Kattouw
2010/5/17  repor...@isidore.wikimedia.org:
 Bugs marked FIXED      :  1622
 Bugs marked REMIND     :  1
 Bugs marked INVALID    :  501
 Bugs marked DUPLICATE  :  471
 Bugs marked WONTFIX    :  234
 Bugs marked WORKSFORME :  313
 Bugs marked LATER      :  19
 Bugs marked MOVED      :  0

[snip]
 Top 5 Bug Resolvers

 roan.kattouw [AT] gmail.com         19
 jeluf [AT] gmx.de                   15
 innocentkiller [AT] gmail.com       11
 sam [AT] reedyboy.net               6
 tparscal [AT] wikimedia.org         5

That can't be right.

Roan Kattouw (Catrope)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Bugzilla Weekly Report

2010-05-17 Thread Marco Schuster
On Mon, May 17, 2010 at 10:49 AM, Roan Kattouw roan.katt...@gmail.com wrote:

 2010/5/17  repor...@isidore.wikimedia.org:
  Bugs marked FIXED  :  1622

[snip]

  Top 5 Bug Resolvers
 
  roan.kattouw [AT] gmail.com 19
  jeluf [AT] gmx.de   15
  innocentkiller [AT] gmail.com   11
  sam [AT] reedyboy.net   6
  tparscal [AT] wikimedia.org 5
 
 That can't be right.

It can, because of Chad's cleanup. Depending on how the reporter
software calculates the totals, direct DB changes might confuse the
counting algorithms.

Marco
-- 
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de

Re: [Wikitech-l] Bugzilla Weekly Report

2010-05-17 Thread Chad
On Mon, May 17, 2010 at 5:23 AM, Marco Schuster
ma...@harddisk.is-a-geek.org wrote:
 On Mon, May 17, 2010 at 10:49 AM, Roan Kattouw roan.katt...@gmail.com wrote:

 2010/5/17  repor...@isidore.wikimedia.org:
  Bugs marked FIXED      :  1622

 [snip]

  Top 5 Bug Resolvers
 
  roan.kattouw [AT] gmail.com         19
  jeluf [AT] gmx.de                   15
  innocentkiller [AT] gmail.com       11
  sam [AT] reedyboy.net               6
  tparscal [AT] wikimedia.org         5
 
 That can't be right.

 It can, because of Chad's cleanup. Depending on how the reporter
 software calculates the totals, direct DB changes might confuse the
 counting algorithms.


That's exactly what I figured had happened as soon
as I saw this. I only wish it had attributed them all to
me so it looks like I resolved 3000+ bugs in about an
hour ;-)

-Chad


Re: [Wikitech-l] image bundles' size?

2010-05-17 Thread Daniel Kinzler
James Salsman schrieb:
 How large would the projects' image bundles be uncompressed, if they
 were to exist?
 
 Also asked at:
 
 http://meta.wikimedia.org/wiki/Talk:Data_dumps#How_big_would_image_bundles_be_if_they_existed
 
 However, someone suggested I should be on wikitech-l more, so I
 thought I would try asking here.  I read here regularly, but I prefer
 the dogfood.  I promise to send the answer to the other place the
 question was asked if someone else doesn't do so first.

As I said there:

Enwiki: roughly 200 GB; Commons: roughly 6 TB.

-- daniel





Re: [Wikitech-l] View deleted edit

2010-05-17 Thread church.of.emacs.ml
Hi,

 When I tried to view the 21,764 deleted edits of the sandbox, the
 connection timed out, so I can't check the deleted edits. (I don't need
 to restore the sandbox.)
 http://ja.wikipedia.org/wiki/Wikipedia:%E8%B2%9D%E5%A1%9A/%E3%82%B5%E3%83%B3%E3%83%89%E3%83%9C%E3%83%83%E3%82%AF%E3%82%B9
 
 If I want to see them, what should I do?

Special:Undelete doesn't use a pager like in the page history, where
you can browse the revisions. Instead, all revisions are displayed – and
naturally, on pages which have too many deleted revisions, that request
fails.

You could probably use the API:
http://www.mediawiki.org/wiki/API:Query_-_Lists#deletedrevs_.2F_dr
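For reference, a minimal sketch of building such an API request (Python for illustration; the title and drprop values below are illustrative, and the module requires the deletedhistory right, so you must be logged in as an admin):

```python
from urllib.parse import urlencode

API = "http://ja.wikipedia.org/w/api.php"

def deletedrevs_url(title, limit=50, drstart=None):
    """Build a list=deletedrevs query URL that fetches deleted revisions
    in batches instead of rendering them all on one page."""
    params = {
        "action": "query",
        "list": "deletedrevs",
        "titles": title,
        "drlimit": limit,            # batch size, avoids the timeout
        "drprop": "revid|user|comment",
        "format": "json",
    }
    if drstart is not None:
        params["drstart"] = drstart  # timestamp to continue from, for paging
    return API + "?" + urlencode(params)
```

Paging through 21,764 revisions in batches of 50 keeps each request small enough to complete.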

Regards,

CoE




Re: [Wikitech-l] View deleted edit

2010-05-17 Thread Bryan Tong Minh
On Mon, May 17, 2010 at 6:41 PM, church.of.emacs.ml
church.of.emacs...@googlemail.com wrote:
 Hi,

  When I tried to view the 21,764 deleted edits of the sandbox, the
  connection timed out, so I can't check the deleted edits. (I don't need
  to restore the sandbox.)
  http://ja.wikipedia.org/wiki/Wikipedia:%E8%B2%9D%E5%A1%9A/%E3%82%B5%E3%83%B3%E3%83%89%E3%83%9C%E3%83%83%E3%82%AF%E3%82%B9
 
  If I want to see them, what should I do?

 Special:Undelete doesn't use a pager like in the page history, where
 you can browse the revisions. Instead, all revisions are displayed – and
 naturally, on pages which have too many deleted revisions, that request
 fails.

Which is, obviously, a bug.


Bryan



Re: [Wikitech-l] View deleted edit

2010-05-17 Thread Platonides
 On Mon, May 17, 2010 at 6:46 PM, Bryan Tong Minh wrote:
 On Mon, May 17, 2010 at 6:41 PM, church.of.emacs.ml
 church.of.emacs...@googlemail.com wrote:
 Special:Undelete doesn't use a pager like in the page history, where
 you can browse the revisions. Instead, all revisions are displayed – and
 naturally, on pages which have too many deleted revisions, that request
 fails.

 Which is, obviously, a bug.
 
 
 Bryan

We should resolve bug 21279 and replace the archive table with rev_deleted.

https://bugzilla.wikimedia.org/show_bug.cgi?id=21279




Re: [Wikitech-l] Visual impairment

2010-05-17 Thread Platonides
Tim Starling wrote:
 Audio CAPTCHAs, like visual CAPTCHAs, are not accessible for all
 people and do not conform to W3C accessibility guidelines. What's
 more, they're easier to crack than visual CAPTCHAs due to their
 one-dimensional nature. This is especially true if you use a public
 source dictionary of spoken phrases, against which an FFT correlation
 can be run.

Just as with image captchas, you'd need to introduce noise into it.

I have been trying flite, and didn't find the synthesized speech very
understandable even by itself. :(

http://www.speech.cs.cmu.edu/flite/
http://festvox.org/




Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
During the meeting last Friday, someone (I'm sorry, I don't remember who)
mentioned he had created a test that runs with the currently checked-in
selenium code. Is that test code available somewhere (it doesn't appear
to be in the current revision)?

-- 
-- Dan Nessett




Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Lane, Ryan
 During the meeting last Friday, someone (I'm sorry, I don't remember who)
 mentioned he had created a test that runs with the currently checked-in
 selenium code. Is that test code available somewhere (it doesn't appear
 to be in the current revision)?
 

Markus Glaser has Selenium tests checked in with his PagedTiffHandler
extension that run properly with the framework. See:

http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/PagedTiffHandler/selenium/

V/r,

Ryan Lane

Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Mon, 17 May 2010 19:11:21 +, Dan Nessett wrote:

 During the meeting last Friday, someone (I'm sorry, I don't remember who)
 mentioned he had created a test that runs with the currently checked-in
 selenium code. Is that test code available somewhere (it doesn't appear
 to be in the current revision)?

I found the answer. On the SeleniumFramework page is a pointer to a
worked example (see:
http://www.mediawiki.org/wiki/SeleniumFramework#Working_example). The
instructions for getting the tests to work aren't totally transparent.
The test file you include is:

../phase3/extensions/PagedTiffHandler/selenium/PagedTiffHandler_tests.php

(Not: ../phase3/extensions/PagedTiffHandler/tests/PagedTiffHandlerTest.php)

Also, the instructions in SOURCES.txt specify getting all of the test
images from:

http://www.libtiff.org/images.html

But when accessing the URL supplied on that page for the images
(ftp://ftp.remotesensing.org/pub/libtiff/pics-3.6.1.tar.gz), a FILE NOT
FOUND error is returned. There is a newer version of the pics file in
../libtiff, but it does not contain the correct images. The correct URL
is ftp://ftp.remotesensing.org/pub/libtiff/old/pics-3.6.1.tar.gz.
However, this tar file does not include the images required by the
PagedTiffHandler tests.

-- 
-- Dan Nessett




Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Mon, 17 May 2010 22:54:38 +0200, Markus Glaser wrote:

 Hi Dan,
 
 I will provide a working example with no need to include any extensions
 in the course of this week. In the meantime, you might want to make sure
 that $wgSeleniumTiffTestUploads = false;
 in PagedTiffHandler_tests.php. Then the test will not try to upload any
 of the pictures from libtiff. In order for the tests to succeed, you
 need to upload Multipage.tiff into the wiki. If there are any images
 missing, please let me know and I will send them to you. Actually, I
 didn't want to check a third-party archive into the svn because of
 copyright considerations. The images seem to be public domain, but to
 me it was not totally clear whether they are. Are there any policies
 regarding this case? I assume, when there are more tests, especially
 with file uploads, the issue might arise again.
 
 Cheers,
 Markus

Thanks Markus,

$wgSeleniumTiffTestUploads does indeed equal false. I was failing on the
upload of Multipage.tiff until I added 'tiff' to $wgFileExtensions. Now I
am failing because allChecksOk is false. It appears this happens on line
32 of PagedTiffHandler_tests.php on the statement:

if ($source != 'filetoc') $this->allChecksOk = false;

I'm not an image expert, so I don't know why this is happening.

Regards,

Dan

-- 
-- Dan Nessett




[Wikitech-l] Deletion schema

2010-05-17 Thread Happy-melon
There's another discussion happening at enwiki at the moment about the
stalled rollout of RevisionDelete for admins, which is blocked on a
chain of bugs that boils down to "our deletion mechanism is borked".

Reviewing the whole deletion mechanism was on the topic list for the
last dev meetup, but AFAIK, despite that event running for three times
as long as expected, it never got raised. I think this would be as good
a time as any to do so. Do we have any clear idea or overall plan for
page and revision deletion, the archive table, a page_deleted field, a
deleted_page table, etc.?

--HM 





Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-17 Thread Maciej Jaros
On 2010-05-14 05:44, Michael Dale wrote:
 If you have been following the svn commits you may have noticed a bit of
 activity on the js2 front.

 I wanted to send a quick heads up that describes what is going on and
 invite people to try things out, and give feedback.

 == Demos ==

  The js2 extension and associated extensions are running on sandbox-9. If
  you view the source of a main page you can see all the scripts and css
  grouped into associated buckets:
 [...]


So do these extensions obfuscate JS files into being non-debuggable? I
could understand that on sites like Facebook, but on an open or even Open
site like Wikipedia/MediaWiki? This just seems wrong. Simple
concatenation of files would serve the same purpose in terms of requests
to the server.

Regards,
Nux.



Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Markus Glaser
Hi Dan,

the test fails at checking the prerequisites. It tries to load the image
page and looks for a specific div element (id=filetoc) which is not
present if the image was not uploaded correctly. This might have changed
across versions of MediaWiki.

Did you install the PagedTiffHandler extension? It depends on
ImageMagick, so ImageMagick might have rejected the upload. Although then
it should have produced an error message ;) So the other question is:
which MediaWiki version do you run the tests on?

Regards,
Markus



Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-17 Thread Aryeh Gregor
On Mon, May 17, 2010 at 6:43 PM, Maciej Jaros e...@wp.pl wrote:
 So do these extensions obfuscate JS files into being non-debuggable? I
 could understand that on sites like Facebook, but on an open or even Open
 site like Wikipedia/MediaWiki? This just seems wrong. Simple
 concatenation of files would serve the same purpose in terms of requests
 to the server.

At the very least, newlines should be preserved, so you can get a line
number when an error occurs.  Stripping other whitespace and comments is
probably actually worth the performance gain, from what I've heard,
annoying though it may occasionally be.  Stripping newlines is surely
not worth the added debugging pain, on the other hand.
(Couldn't you even make up for it by stripping semicolons?)
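A newline-preserving strip pass is easy to sketch (Python here for illustration; a real JS minifier needs a proper tokenizer, since `//` and `/*` can legally appear inside string and regex literals, which this naive version ignores):

```python
import re

def strip_js(source):
    """Remove comments and trailing horizontal whitespace from JS source,
    but keep every newline so runtime error line numbers still match."""
    # Replace each block comment with the same number of newlines it spanned,
    # so nothing after it shifts to a different line.
    def keep_newlines(match):
        return "\n" * match.group(0).count("\n")
    source = re.sub(r"/\*.*?\*/", keep_newlines, source, flags=re.S)
    # Line comments run to end of line; the newline itself is kept.
    source = re.sub(r"//[^\n]*", "", source)
    # Drop trailing spaces/tabs left behind on each line.
    source = re.sub(r"[ \t]+$", "", source, flags=re.M)
    return source
```

Concatenating files processed this way still cuts request count and most of the byte weight, while a reported "error at line 57" remains meaningful.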



Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Tue, 18 May 2010 01:04:01 +0200, Markus Glaser wrote:

 Hi Dan,
 
 the test fails at checking the prerequisites. It tries to load the image
 page and looks for a specific div element (id=filetoc) which is not
 present if the image was not uploaded correctly. This might have changed
 across versions of MediaWiki.
 
 Did you install the PagedTiffHandler extension? It depends on
 ImageMagick, so ImageMagick might have rejected the upload. Although
 then it should have produced an error message ;) So the other question
 is: which MediaWiki version do you run the tests on?
 
 Regards,
 Markus

Hi Markus,

I am running on the latest version in trunk (1.17alpha r66296). There was 
no error when I uploaded the image. All of the extended details seem 
correct. I installed the extension. I don't have either exiv2 or vips 
installed, but according to the installation instructions these are 
optional.

Here are the configuration values I used:

# PagedTiffHandler extension
require_once( "$IP/extensions/PagedTiffHandler/PagedTiffHandler.php" );

$wgTiffIdentifyRejectMessages = array(
    '/^identify: Compression algorithm does not support random access/',
    '/^identify: Old-style LZW codes, convert file/',
    '/^identify: Sorry, requested compression method is not configured/',
    '/^identify: ThunderDecode: Not enough data at scanline/',
    '/^identify: .+?: Read error on strip/',
    '/^identify: .+?: Can not read TIFF directory/',
    '/^identify: Not a TIFF/',
);
$wgTiffIdentifyBypassMessages = array(
    '/^identify: .*TIFFReadDirectory/',
    '/^identify: .+?: unknown field with tag .+? encountered/'
);

$wgImageMagickIdentifyCommand = '/usr/bin/identify';
$wgTiffUseExiv = false;
$wgTiffUseVips = false;

// Maximum number of embedded files in tiff image
$wgTiffMaxEmbedFiles = 1;
// Maximum resolution of embedded images (product of width x height pixels)
$wgTiffMaxEmbedFileResolution = 2560;
// Maximum size of metadata
$wgTiffMaxMetaSize = 67108864; // 64 MB

// TTL of cache entries for errors
$wgTiffErrorCacheTTL = 84600;

Is there some way to use the wiki to look for the file property that is 
causing the problem?

Regards,

Dan

-- 
-- Dan Nessett




Re: [Wikitech-l] Mobile Skin

2010-05-17 Thread Mark Clements (HappyDog)
Jon Davis w...@konsoletek.com wrote in message 
news:aanlktimgoq9mbznnrkdupx0lsyknozjwisepyksbx...@mail.gmail.com...
 On Fri, May 14, 2010 at 16:24, June Hyeon Bae dev...@devunt.kr wrote:
 How can I configure Mobile Skin?
 Mobile Skin is only one?

 Wikitech is for WMF stuff only.  You want Mediawiki-l instead.


Not true.  Wikitech is for MediaWiki development, which includes WMF 
infrastructure talk, as well as non-WMF-specific development talk.

In terms of the original question, this might or might not be the right 
list, depending on what was actually being asked (the question isn't at all 
clear).  mediawiki-l might be more appropriate if this is a 'how to use' 
question, or possibly a site-specific list if related to a specific site 
(e.g. enwiki), but if they are trying to hack the mobile skin, or create a 
new mobile skin, then this may well be the right place.

- Mark Clements (HappyDog) 





Re: [Wikitech-l] Visual impairment

2010-05-17 Thread Conrad Irwin
On 17 May 2010 20:05, Platonides platoni...@gmail.com wrote:
 Tim Starling wrote:
 Audio CAPTCHAs, like visual CAPTCHAs, are not accessible for all
 people and do not conform to W3C accessibility guidelines. What's
 more, they're easier to crack than visual CAPTCHAs due to their
 one-dimensional nature. This is especially true if you use a public
 source dictionary of spoken phrases, against which an FFT correlation
 can be run.

 Just as with image captchas, you'd need to introduce noise into it.

If you are working from known constituents, you can use
cross-correlation to ignore noise pretty effectively (I believe it's
what humans do). The choice then is either to make the noise sound
like the captcha's numbers (Google's approach), which is very hard to
solve (at least I find it so), or to use reCAPTCHA's vast database of
unknown sound files (with noise added to obscure the phonemes). The
human brain is capable of filling in completely obscured phonemes in
order to make the sentence make sense (assuming the listener speaks the
language in question - another usability problem with these),
something that computers are not yet so good at.

It's likely to be much easier to improve the "request an account from
a human" process - which has built-in rate-limiting, a little bit of
Turing test, and a nice splash of common sense that is so hard to
instill in an automated system. (Alternatively we could just implement
an insecure audio captcha, safe in the knowledge that no one has
enough motivation to crack it - I imagine the implementation would
still take significant effort.)

 I have been trying flite, and didn't find the synthesized speech very
 understandable even by itself. :(


In which case a computer could probably solve them better than you :).

Conrad



Re: [Wikitech-l] Visual impairment

2010-05-17 Thread Tim Starling
On 18/05/10 05:05, Platonides wrote:
 Tim Starling wrote:
 Audio CAPTCHAs, like visual CAPTCHAs, are not accessible for all
 people and do not conform to W3C accessibility guidelines. What's
 more, they're easier to crack than visual CAPTCHAs due to their
 one-dimensional nature. This is especially true if you use a public
 source dictionary of spoken phrases, against which an FFT correlation
 can be run.
 
 Just as with image captchas, you'd need to introduce noise into it.
 
  I have been trying flite, and didn't find the synthesized speech very
  understandable even by itself. :(

You have to introduce enough noise into it to defeat the computer, but
not so much as to defeat the human. I've done some experiments myself,
and I've read some articles on audio CAPTCHA design, and I'm not
convinced it's possible.

I've seen an open-source audio CAPTCHA that uses speech synthesis, it
seemed to be just designed as a deterrent. Like MathCaptcha, it'll
only work until someone could be bothered to crack it.

If you use a public dictionary of spoken phrases, then the task is
correlation, which is probably easier for computers than for humans.
If you use a secret dictionary, then the task is speech recognition,
which is more difficult. But if the dictionary is too small, then it's
vulnerable to reduction to correlation, using precomputed or
human-solved phrases.
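The reduction to correlation is cheap to sketch. Assuming the attacker has the public dictionary of source clips, matching a noisy sample takes only a few lines of numpy (synthetic signals here, not real audio, and no time-stretching or pitch-shifting, which a real attack would also have to undo):

```python
import numpy as np

def best_match(clip, dictionary):
    """Return the index of the dictionary entry most correlated with clip.

    Linear cross-correlation computed via FFT:
    corr = IFFT( FFT(clip) * conj(FFT(entry)) ), zero-padded to full length.
    """
    scores = []
    for entry in dictionary:
        m = len(clip) + len(entry) - 1  # pad so correlation is linear, not circular
        corr = np.fft.irfft(
            np.fft.rfft(clip, m) * np.conj(np.fft.rfft(entry, m)), m
        )
        # Normalize by signal energies so louder entries don't win by volume.
        denom = np.linalg.norm(clip) * np.linalg.norm(entry)
        scores.append(np.max(np.abs(corr)) / denom)
    return int(np.argmax(scores))
```

Additive noise barely moves the correlation peak, which is exactly why a secret dictionary (forcing real speech recognition) is the harder target.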

-- Tim Starling




Re: [Wikitech-l] js2 extensions / Update ( add-media-wizard, uploadWizard, timed media player )

2010-05-17 Thread Michael Dale
The script-loader has a few modes of operation.

You can run it in raw file mode ( ie $wgEnableScriptLoader = false ).
This will load all your javascript files directly, only doing php
requests to get the messages. In this mode php does not touch any js or
css file you're developing.

Once ready for production you can enable the script-loader: it groups,
localizes, removes debug statements, transforms css url paths, and
minifies the set of javascript / css. It includes experimental support
for the google closure compiler, which does much more aggressive
transformations.

I think you're misunderstanding the point of the script-loader. Existing
extensions used on wikipedia already package and minify javascript code
statically.

If you want to let remote users communicate javascript debugging info,
we could add a url flag to avoid minification and/or avoid
script-grouping. That may be useful in the case of user-scripts / gadgets.

But in general it's probably better / easier for end users to just
identify their platform and what's not working, since it's all code to
them anyway. If they are a developer or are going to do something
productive with what they are seeing, they likely have the code checked
out locally and use the debug mode.


Aryeh Gregor wrote:
 On Mon, May 17, 2010 at 6:43 PM, Maciej Jaros e...@wp.pl wrote:
   
 So do these extensions obfuscate JS files into being non-debuggable? I
 could understand that on sites like Facebook, but on an open or even Open
 site like Wikipedia/MediaWiki? This just seems wrong. Simple
 concatenation of files would serve the same purpose in terms of requests
 to the server.
 

 At the very least, newlines should be preserved, so you can get a line
 number when an error occurs.  Stripping other whitespace and comments is
 probably actually worth the performance gain, from what I've heard,
 annoying though it may occasionally be.  Stripping newlines is surely
 not worth the added debugging pain, on the other hand.
 (Couldn't you even make up for it by stripping semicolons?)


