Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-18 Thread Dan Nessett
On Fri, 30 Dec 2011 20:11:30 +0000, Dan Nessett wrote:

 I have poked around a bit (using Google), but have not found
 instructions for setting up the MW regression test framework (e.g.,
 CruiseControl or Jenkins or whatever is now being used + PHPUnit tests +
 Selenium tests) on a local machine (so new code can be regression tested
 before submitting patches to Bugzilla). Do such instructions exist and
 if so, would someone provide a pointer to them?
 
 Thanks,

The regression tests are failing badly on my local development machine. I 
am trying to figure out why and have proposed one possibility 
(https://bugzilla.wikimedia.org/show_bug.cgi?id=33663). If anyone with the 
necessary understanding of the test suite would comment, it would help in 
chasing down the problem.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] postgreSQL testing

2012-01-13 Thread Dan Nessett
On Thu, 12 Jan 2012 16:17:00 +0100, Antoine Musso wrote:

 Hello,
 
 I have added a new continuous integration job to check our postgres
 support.
 This is exactly the same job as MediaWiki-phpunit, only the database
 backend
 change.
 
 The link is:
https://integration.mediawiki.org/ci/job/MediaWiki-postgres-phpunit/
 
 As of now, there are two tests failing.

I ran the JS tests for the following configuration and got 8 failures. 
How should I report the problems (e.g., should I dump the HTML into a file 
and attach it to a bug report)?

Configuration:

OS: CentOS 5.7
MW revision: 108821
PHP: 5.3.3
PHPUnit: 3.6.7
DB: Postgres 8.3.9

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] postgreSQL testing

2012-01-12 Thread Dan Nessett
On Thu, 12 Jan 2012 16:17:00 +0100, Antoine Musso wrote:

 Hello,
 
 I have added a new continuous integration job to check our postgres
 support.
 This is exactly the same job as MediaWiki-phpunit, only the database
 backend
 change.
 
 The link is:
https://integration.mediawiki.org/ci/job/MediaWiki-postgres-phpunit/
 
 As of now, there are two tests failing.

Excellent. Thank you for this.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] postgreSQL testing

2012-01-12 Thread Dan Nessett
On Thu, 12 Jan 2012 16:17:00 +0100, Antoine Musso wrote:

 Hello,
 
 I have added a new continuous integration job to check our postgres
 support.
 This is exactly the same job as MediaWiki-phpunit, only the database
 backend
 change.
 
 The link is:
https://integration.mediawiki.org/ci/job/MediaWiki-postgres-phpunit/
 
 As of now, there are two tests failing.

While I am grateful for the inclusion of a postgres backend in the 
integration tests, I just ran make safe on MW 108734 and got 1 error, 25 
failures and 12 incomplete tests. Any idea why the local run has 
different results than the automated run?

I have attached the most recent run output to bug 33663.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] postgreSQL testing

2012-01-12 Thread Dan Nessett
On Thu, 12 Jan 2012 17:59:09 +0000, Dan Nessett wrote:

 On Thu, 12 Jan 2012 16:17:00 +0100, Antoine Musso wrote:
 
 Hello,
 
 I have added a new continuous integration job to check our postgres
 support.
 This is exactly the same job as MediaWiki-phpunit, only the database
 backend
 change.
 
 The link is:
https://integration.mediawiki.org/ci/job/MediaWiki-postgres-phpunit/
 
 As of now, there are two tests failing.
 
 While I am grateful for the inclusion of a postgres backend in the
 integration tests, I just ran make safe on MW 108734 and got 1 error, 25
 failures and 12 incomplete tests. Any idea why the local run has
 different results than the automated run?
 
 I have attached the most recent run output to bug 33663.

Sorry, my mistake. I forgot to run update. After doing so, I also get 
only 2 failures.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-11 Thread Dan Nessett
On Wed, 11 Jan 2012 11:17:33 +0100, Antoine Musso wrote:

 Le Wed, 11 Jan 2012 02:31:47 +0100, Dan Nessett dness...@yahoo.com a
 écrit:
 Sure, I can post the results, but I don't think I should just dump them
 into this list (there are over 700 lines of output). How would you like
 me to go about it?
 
 You could open a bug report on https://bugzilla.wikimedia.org/ and
 attach the output.
 Or feel free to send it to Platonides and me, I can create the bug
 report for you.

I have created a bug report 
(https://bugzilla.wikimedia.org/show_bug.cgi?id=33663) and attached the 
output to it.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-10 Thread Dan Nessett
On Fri, 30 Dec 2011 20:11:30 +0000, Dan Nessett wrote:

 I have poked around a bit (using Google), but have not found
 instructions for setting up the MW regression test framework (e.g.,
 CruiseControl or Jenkins or whatever is now being used + PHPUnit tests +
 Selenium tests) on a local machine (so new code can be regression tested
 before submitting patches to Bugzilla). Do such instructions exist and
 if so, would someone provide a pointer to them?
 
 Thanks,

I gave up on Ubuntu 8.04 and moved to CentOS 5.7. After getting make safe 
to work, I get 27 failures and 14 incomplete tests. This is for revision 
108474. Is there any way to know if this is expected? For example, are 
the results of nightly runs posted anywhere?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-10 Thread Dan Nessett
On Tue, 10 Jan 2012 23:53:25 +0100, Platonides wrote:

 On 10/01/12 19:52, Dan Nessett wrote:
 I gave up on Ubuntu 8.04 and moved to Centos 5.7. After getting make
 safe to work, I get 27 failures and 14 incomplete tests. This is for
 revision 108474. Is there any way to know if this is expected? For
 example, are the results of nightly runs posted anywhere?
 
 It's not.
 
 Although it may be a configuration issue (eg. our script is not setting
 $wgEnableUploads = true, but is assuming it is)
 
 Can you post the results?

Sure, I can post the results, but I don't think I should just dump them 
into this list (there are over 700 lines of output). How would you like 
me to go about it?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-09 Thread Dan Nessett
On Mon, 09 Jan 2012 09:26:24 +0100, Krinkle wrote:

 On Fri, Jan 6, 2012 at 8:06 PM, OQ overlo...@gmail.com wrote:
 
 On Fri, Jan 6, 2012 at 12:56 PM, Dan Nessett dness...@yahoo.com
 wrote:
  On Thu, 05 Jan 2012 14:03:14 -0600, OQ wrote:
  uninstall the pear version and do a make install.
 
  Didn't work.
 
  # make install
  ./install-phpunit.sh
  Installing phpunit with pear
  Channel pear.phpunit.de is already initialized
  Adding Channel components.ez.no succeeded
  Discovery of channel components.ez.no succeeded
  Channel pear.symfony-project.com is already initialized
  Did not download optional dependencies: pear/Image_GraphViz, pear/Log,
  symfony/YAML, use --alldeps to download automatically
  phpunit/PHPUnit can optionally use package pear/Image_GraphViz (version >= 1.2.1)
  phpunit/PHPUnit can optionally use package pear/Log
  phpunit/PHPUnit can optionally use package symfony/YAML (version >= 1.0.2)
  downloading PHPUnit-3.4.15.tgz ...
  Starting to download PHPUnit-3.4.15.tgz (255,036 bytes)
  .done: 255,036 bytes
  install ok: channel://pear.phpunit.de/PHPUnit-3.4.15

 Dunno then, it installed 3.6.3 for me. Hopefully somebody here knows a
 bit more about pear :)


 I remember having a similar issue when I first installed phpunit on my
 Mac. Although I don't know the exact command, I remember having to
 update some central installer proces by PEAR (channel ?) to the latest
 version, which still had an old
 version in it's (local?) database.
 
 Krinkle

So that this thread is complete: Ubuntu 8.04 is pinned to PHP 5.2.4 through 
the standard distribution channels. PHPUnit 3.5 requires at least PHP 
5.2.7. I looked around and could find no simple way to upgrade to PHP 
5.2.x, x >= 7, on Ubuntu 8.04. I could have downloaded and tried to 
upgrade manually, but frankly I didn't want to handle all of the version 
incompatibility insanity that would entail.

For the record, here is an explanation of how to update the pear channels 
for PHPUnit 3.5: http://sebastian-bergmann.de/archives/897-PHPUnit-3.5.html

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-06 Thread Dan Nessett
On Thu, 05 Jan 2012 14:03:14 -0600, OQ wrote:

 On Thu, Jan 5, 2012 at 1:34 PM, Dan Nessett dness...@yahoo.com wrote:
 So, I upgraded PHPUnit (using pear upgrade phpunit/PHPUnit). This
 installed 3.4.15 (not 3.5).

 I am running on Ubuntu 8.04. Anyone have an idea how to get PHPUnit 3.5
 installed?


 uninstall the pear version and do a make install.

Didn't work.

# make install
./install-phpunit.sh
Installing phpunit with pear
Channel pear.phpunit.de is already initialized
Adding Channel components.ez.no succeeded
Discovery of channel components.ez.no succeeded
Channel pear.symfony-project.com is already initialized
Did not download optional dependencies: pear/Image_GraphViz, pear/Log, 
symfony/YAML, use --alldeps to download automatically
phpunit/PHPUnit can optionally use package pear/Image_GraphViz (version >= 1.2.1)
phpunit/PHPUnit can optionally use package pear/Log
phpunit/PHPUnit can optionally use package symfony/YAML (version >= 1.0.2)
downloading PHPUnit-3.4.15.tgz ...
Starting to download PHPUnit-3.4.15.tgz (255,036 bytes)
.done: 255,036 bytes
install ok: channel://pear.phpunit.de/PHPUnit-3.4.15


-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-05 Thread Dan Nessett
On Fri, 30 Dec 2011 21:29:53 +0100, Roan Kattouw wrote:

 On Fri, Dec 30, 2011 at 9:11 PM, Dan Nessett dness...@yahoo.com wrote:
 I have poked around a bit (using Google), but have not found
 instructions for setting up the MW regression test framework (e.g.,
 CruiseControl or Jenkins or whatever is now being used + PHPUnit tests
 + Selenium tests) on a local machine (so new code can be regression
 tested before submitting patches to Bugzilla). Do such instructions
 exist and if so, would someone provide a pointer to them?

 Jenkins is only really used to run the tests automatically when someone
 commits. You ran run the PHPUnit tests locally without Jenkins.
 Instructions on installing PHPUnit and running the tests is at
 https://www.mediawiki.org/wiki/Manual:PHP_unit_testing .
 
 I don't have URLs for you offhand, but QUnit and Selenium are probably
 also documented on mediawiki.org .
 
 Roan

OK, I downloaded the latest trunk and tried to run the PHPUnit tests. I 
executed make safe and got the following result:

php phpunit.php --configuration /usr/local/src/mediawiki/latest_trunk/trunk/phase3/tests/phpunit/suite.xml --exclude-group Broken,Destructive,Stub
PHPUnit 3.5 or later required, you have 3.4.12

So, I upgraded PHPUnit (using pear upgrade phpunit/PHPUnit). This 
installed 3.4.15 (not 3.5).

I am running on Ubuntu 8.04. Anyone have an idea how to get PHPUnit 3.5 
installed?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Instructions for setting up regression tests on local machine?

2011-12-30 Thread Dan Nessett
I have poked around a bit (using Google), but have not found instructions 
for setting up the MW regression test framework (e.g., CruiseControl or 
Jenkins or whatever is now being used + PHPUnit tests + Selenium tests) 
on a local machine (so new code can be regression tested before 
submitting patches to Bugzilla). Do such instructions exist and if so, 
would someone provide a pointer to them?

Thanks,

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Mediawiki 2.0

2011-12-07 Thread Dan Nessett
On Wed, 07 Dec 2011 12:54:22 +1100, Tim Starling wrote:

 On 07/12/11 12:34, Dan Nessett wrote:
 On Wed, 07 Dec 2011 12:15:41 +1100, Tim Starling wrote:
 How many servers do you have?
 
 3. It would help to get it down to 2.
 
 I assume my comments apply to many other small wikis that use MW as
 well. Most operate on a shoe string budget.
 
 You should try running MediaWiki on HipHop. See
 
 http://www.mediawiki.org/wiki/HipHop
 
 It's not possible to pay developers to rewrite MediaWiki for less than
 what it would cost to buy a server. But maybe getting a particular MW
 installation to run on HipHop with a reduced feature set would be in the
 same order of magnitude of cost.
 
 -- Tim Starling

Are there any production wikis running MW over HipHop?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Mediawiki 2.0

2011-12-06 Thread Dan Nessett
This is an (admittedly long and elaborate) question, not a proposal. I ask 
it in order to learn whether anyone has given it, or something like it, 
some thought.

Has anyone thought of creating MW 2.0? I mean by this, completely 
rewriting the application in a way that may make it incompatible with MW 
1.x.y.

Pros


* Improving the application architecture
* Utilizing more client side resources, thereby reducing the server side 
resource requirements.
* Clean up and improve existing services.

Cons


* Rewrites of major applications normally fail because they become overly 
ambitious.

Some possible ways MW 2.0 might improve MW 1.x.y

Change the parser
-

* Get rid of MediaWiki markup and move to HTML with embedded macros that 
are processed client side.
* Move extension processing client side.
* Replace templates with a cleaner macro-based language (but KISS).

Pros


* Reduce server side resource requirements, thereby reducing server side 
costs. Server side becomes mostly database manipulation.
* Make use of the far larger aggregate resources available on client side 
(many more client machines than server machines).
* With macro processing client side, debates about enhancements to parser 
extensions that require more processing shift to looking at client side.
* Allows development of a parser driven by a well-defined grammar.

Cons


* Unclear whether it is possible to move all or most parser processing to 
client side.
* Would need a (probably large and complex) transition application that 
translates MediaWiki markup into the new grammar.
* Since not all clients may have the resources to expand macros and do 
other client side processing in a timely manner, we may need to provide 
server side surrogate processing based on either a user-selectable (e.g., 
preferences) choice or automatic discovery.
* Need to select a client side processing engine (e.g., JavaScript, Java), 
which may lead to major developer fighting.

Clean up security architecture
--

* Support per page permissions, a la the Unix file system model.
* Integrate authentication with emerging global services (e.g., OpenID) 
without the use of extensions.
* Move the group membership definition out of LocalSettings into a 
database table.

Pros


* Chance to think through security requirements and craft a clean solution.
* Offload most authentication processing and login data protection to 
service providers that focus more sharply on those requirements.
* Some customers have expressed interest in per page permissions.

Cons


* Changing security architectures is a notoriously difficult objective. 
Most attempts lead to bloated solutions that never work in practice.
* Some developers oppose per page permissions.
* Would need to develop WMF standards that authentication providers must 
meet before accepting them for WMF project login.

This is sufficient to illustrate the direction of my curiosity, but there 
are other things that MW 2.0 could do that might be discussed, such as:

* Change the page history model. When a page is flagged stable, subsequent 
page changes go to a new draft page. Provide a link to the draft page on 
the stable page.
* Think through how to support multiple db backends so application 
development doesn't continually break this support.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Mediawiki 2.0

2011-12-06 Thread Dan Nessett
On Wed, 07 Dec 2011 07:59:26 +1000, K. Peachey wrote:

 http://www.mediawiki.org/wiki/Project:2.0

Thanks. I have moved my comments to that page's discussion.



-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Mediawiki 2.0

2011-12-06 Thread Dan Nessett
On Wed, 07 Dec 2011 09:26:50 +1100, Tim Starling wrote:

 On 07/12/11 08:55, Dan Nessett wrote:
 This is a (admittedly long and elaborate) question, not a proposal. I
 ask it in order to learn whether anyone has given it or something like
 it some thought.
 
 Has anyone thought of creating MW 2.0? I mean by this, completely
 rewriting the application in a way that may make it incompatible with
 MW 1.x.y.
 [...]
 
 * Get rid of mediawiki markup and move to html with embedded macros
 that are processed client side.
 * Move extension processing client side. * Replace templates with a
 cleaner macro-based language (but, KISS).
 
 Keeping the same name (MediaWiki) implies some level of compatibility
 with older versions of the same software. If you abandon existing
 installations and their needs altogether, then it makes sense to choose
 a new project name, so that the 1.x code can continue to be maintained
 and improved without causing user confusion.
 
 I think MediaWiki 2.0 should just be a renumbering, like Linux 2.6 -
 3.0, rather than any kind of backwards compatibility break.
 
 -- Tim Starling

OK. Call it something else. The motivation for my question is getting 
server costs under control. Moving as much processing as possible client 
side is one way to achieve this. Cleaning up the security architecture 
may be overly ambitious, but a rewrite would provide the opportunity to 
take a rational look at MW's vulnerabilities and security services.

I don't know where WMF is on the cost question, but we would benefit from 
reducing our hosting costs.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Mediawiki 2.0

2011-12-06 Thread dan nessett
I assume the CC to me on the message below was a courtesy. However, I would 
like to comment on one point in Neil's message. I was not thinking about an 
application rewrite that allows projects to jettison all existing wiki data. 
That would make no sense for us. We would need a way to convert the wiki data 
from the old format (i.e., mediawiki markup) to the new.

Dan Nessett



 From: Neil Kandalgaonkar ne...@wikimedia.org
To: Wikimedia developers wikitech-l@lists.wikimedia.org 
Cc: Dan Nessett dness...@yahoo.com 
Sent: Tuesday, December 6, 2011 4:18 PM
Subject: Re: [Wikitech-l] Mediawiki 2.0
 
On 12/6/11 1:55 PM, Dan Nessett wrote:
 This is a (admittedly long and elaborate) question, not a proposal. I ask
 it in order to learn whether anyone has given it or something like it
 some thought.
 
 Has anyone thought of creating MW 2.0? I mean by this, completely
 rewriting the application in a way that may make it incompatible with MW
 1.x.y.

In that case why not use some other wiki software? There are quite a few. See 
http://www.wikimatrix.org/.

If you're willing to jettison all existing wiki data I'm not sure I would 
recommend MediaWiki for a fresh start. I would in many cases, but not all.

In any case, the WMF already have people working on a new parser, and a new GUI 
editor. If both of those projects reach fruition I would call that 2.0-worthy 
right there. Also, the new parser should provide us with some means to 
transition to a different wiki syntax if we think it's a good idea.

If we wanted to *really* get radical, then we'd think about changing the 
storage model too, but you might as well rename it by that point.

-- Neil Kandalgaonkar  |) ne...@wikimedia.org
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Mediawiki 2.0

2011-12-06 Thread Dan Nessett
On Wed, 07 Dec 2011 12:15:41 +1100, Tim Starling wrote:

 On 07/12/11 09:50, Dan Nessett wrote:
 OK. Call it something else. The motivation for my question is getting
 server costs under control. Moving as much processing as possible
 client side is one way to achieve this. Cleaning up the security
 architecture may be overly ambitious, but a rewrite would provide the
 opportunity to take a rational look at MW's vulnerabilities and
 security services.
 
 I don't know where WMF is on the cost question, but we would benefit
 from reducing our hosting costs.
 
 WMF's hosting costs are pretty well under control. We have two parser
 CPU reduction features on our roadmap, but the main justification for
 doing them is to reduce latency, rather than cost, thus providing a
 better user experience. If both of them are fully utilised, we can
 probably reduce average parse time by a factor of 10.
 
 By "we" do you mean Citizendium?

Yes.

 How many servers do you have?

3. It would help to get it down to 2.

I assume my comments apply to many other small wikis that use MW as well. 
Most operate on a shoestring budget.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Rules for text in languages/messages/Messages***.php

2011-04-05 Thread Dan Nessett
I recently fixed a bug (14901) and attached some patches to the bug 
ticket. Included were some changes to languages/messages/MessagesEn.php. 
Following the directions given at 
http://www.mediawiki.org/wiki/Localisation#Changing_existing_messages, I 
contacted #mediawiki-i18n and asked for instructions on how to proceed 
with the internationalization part of the work.

After looking over my changes, they pointed out that my new message text 
violated some requirements, specifically:

+ I had used \n rather than literal newlines in the text
+ I had put newlines in front of some messages
+ I had put trailing whitespace in some messages

Since I was (and am) unfamiliar with MW internationalization workflow and 
since this text is destined for email messages, not display on a wiki 
page, I was puzzled by these requirements. However, it was explained to 
me that internationalization is done by displaying the text on 
translatewiki.net, which means the text must conform to web page display 
rules. Consequently, the problems specified above interfere with the 
translation work.
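
To make these constraints concrete, here is an illustrative sketch of an 
entry shaped like those in languages/messages/MessagesEn.php (the message 
key and the $1/$2/$3 parameters are invented, not the ones from bug 14901):

// Illustrative only: literal newlines instead of \n, no leading newline,
// and no trailing whitespace, per the constraints listed above.
$messages = array(
	'examplemail-body' => 'The page $1 was changed by $2.

You can view the new revision at:
$3',
);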

I understand I need to fix these problems. However, before proceeding I 
wonder if there are other constraints that I must observe. I have read 
the material at http://www.mediawiki.org/wiki/Localisation, but did not 
find the above constraints mentioned there. Is there another place where 
they are specified?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Skin specific logos

2011-02-04 Thread Dan Nessett
On Fri, 04 Feb 2011 15:01:20 +0100, Krinkle wrote:

 Op 3 feb 2011, om 22:42 heeft Dan Nessett het volgende geschreven:
 
 On Thu, 03 Feb 2011 21:19:58 +0000, Dan Nessett wrote:

 Our site has 4 skins that display the logo - 3 standard and 1 site-
 specific. The site-specific skin uses rounded edges for the individual
 page area frames, while the standard skins use square edges. This
 means
 a logo with square edges looks fine for the standard skins, but not
 for
 the site-specific skin. A logo with rounded edges has the opposite
 characteristic.

 The programmer who designed the site-specific skin solved this problem
 with a hack. The absolute url to a different logo with rounded edges
 is
 hardwired into the skin code. Therefore, if we want to reorganize
 where
 we keep the site logos (which we have done once already), we have to
 modify the site-specific skin code.

 While it is possible that no one else has this problem, I would
 imagine
 there are skins out there that would look better if they were able to
 use a skin specific logo (e.g., using a different color scheme or a
 different font).

 My question is: has this issue been addressed before? If so, and there
 is a good solution, I would appreciate hearing of it.

 Regards,

 I need to correct a mistake I made in this post (sorry for replying to
 my
 own question). The site-specific skin keeps its skin specific logo in
 the
 skin directory and the skin code uses <?php $this->text('stylepath') ?>/<?php
 $this->text('stylename') ?> to get to that directory. So, the url is
 not hardwired.

 However, we would like to keep all of the logos in one place, so I
 think
 the question is still pertinent.

 --
 -- Dan Nessett


 You could upload a file to your wiki (say, at [[File:Wiki.png]]) and
 then put the
 full path to that in LocalSettings.php in $wgLogo.
 
 i.e.
 $wgLogo = "/w/images/b/bc/Wiki.png";
 
 And then use that variable in the skins (The default skins use it
 already)

That's an interesting idea. But, probably the definition should be (e.g.):

$wgLogo = "$wgUploadPath/b/bc/Wiki.png";

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Skin specific logos

2011-02-03 Thread Dan Nessett
Our site has 4 skins that display the logo - 3 standard and 1 site-
specific. The site-specific skin uses rounded edges for the individual 
page area frames, while the standard skins use square edges. This means a 
logo with square edges looks fine for the standard skins, but not for the 
site-specific skin. A logo with rounded edges has the opposite 
characteristic.

The programmer who designed the site-specific skin solved this problem 
with a hack. The absolute url to a different logo with rounded edges is 
hardwired into the skin code. Therefore, if we want to reorganize where 
we keep the site logos (which we have done once already), we have to 
modify the site-specific skin code.

While it is possible that no one else has this problem, I would imagine 
there are skins out there that would look better if they were able to use 
a skin specific logo (e.g., using a different color scheme or a different 
font).

My question is: has this issue been addressed before? If so, and there is 
a good solution, I would appreciate hearing of it.

Regards,

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Skin specific logos

2011-02-03 Thread Dan Nessett
On Thu, 03 Feb 2011 21:19:58 +0000, Dan Nessett wrote:

 Our site has 4 skins that display the logo - 3 standard and 1 site-
 specific. The site-specific skin uses rounded edges for the individual
 page area frames, while the standard skins use square edges. This means
 a logo with square edges looks fine for the standard skins, but not for
 the site-specific skin. A logo with rounded edges has the opposite
 characteristic.
 
 The programmer who designed the site-specific skin solved this problem
 with a hack. The absolute url to a different logo with rounded edges is
 hardwired into the skin code. Therefore, if we want to reorganize where
 we keep the site logos (which we have done once already), we have to
 modify the site-specific skin code.
 
 While it is possible that no one else has this problem, I would imagine
 there are skins out there that would look better if they were able to
 use a skin specific logo (e.g., using a different color scheme or a
 different font).
 
 My question is: has this issue been addressed before? If so, and there
 is a good solution, I would appreciate hearing of it.
 
 Regards,

I need to correct a mistake I made in this post (sorry for replying to my 
own question). The site-specific skin keeps its skin specific logo in the 
skin directory and the skin code uses <?php $this->text('stylepath') ?>/<?php 
$this->text('stylename') ?> to get to that directory. So, the url is 
not hardwired.

However, we would like to keep all of the logos in one place, so I think 
the question is still pertinent.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Skin specific logos

2011-02-03 Thread Dan Nessett
On Thu, 03 Feb 2011 13:52:30 -0800, Brion Vibber wrote:

 On Thu, Feb 3, 2011 at 1:19 PM, Dan Nessett dness...@yahoo.com wrote:
 
 Our site has 4 skins that display the logo - 3 standard and 1 site-
 specific. The site-specific skin uses rounded edges for the individual
 page area frames, while the standard skins use square edges. This means
 a logo with square edges looks fine for the standard skins, but not for
 the site-specific skin. A logo with rounded edges has the opposite
 characteristic.

 The programmer who designed the site-specific skin solved this problem
 with a hack. The absolute url to a different logo with rounded edges is
 hardwired into the skin code. Therefore, if we want to reorganize where
 we keep the site logos (which we have done once already), we have to
 modify the site-specific skin code.

 While it is possible that no one else has this problem, I would imagine
 there are skins out there that would look better if they were able to
 use a skin specific logo (e.g., using a different color scheme or a
 different font).

 My question is: has this issue been addressed before? If so, and there
 is a good solution, I would appreciate hearing of it.


 A couple ideas off the top of my head:
 
 * You could use CSS to apply rounded corners with border-radius and its
 -vendor-* variants. (May not work on all browsers, but requires no
 upkeep other than double-checking that the rounded variant still looks
 good. Doesn't help with related issues like an alternate color scheme
 for the logo in different skins.)
 * Your custom skin could use a custom configuration variable, say
 $wgAwesomeSkinLogo. Have it use this instead of the default logo, and
 make sure both settings get updated together.
 * You could use a fixed alternate path which can be determined by
 modifying the string in $wgLogo. Be sure to always store and update the
 second logo image correctly.
 * You could create a script that applies rounded corners or changes
 colors in an existing image file and saves a new one, then find some way
 to help automate your process of creating alternate logo images in the
 above.
 
 -- brion

Thanks. I think the second idea works best for us. It also suggests the 
use of a global $wgSkinLogos that points to a directory where all of the 
skin logos are kept. Any reason why this is a bad idea?
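
For what it is worth, here is a minimal LocalSettings.php sketch of that 
idea. $wgSkinLogos and the file names are invented for illustration (as is 
Brion's $wgAwesomeSkinLogo); they are not existing MediaWiki settings:

# Hypothetical: keep every skin's logo under one directory and let each
# skin pick the variant it needs instead of hardwiring a URL.
$wgSkinLogos = "/w/images/skin-logos";            # single home for all skin logos
$wgLogo = "$wgSkinLogos/square.png";              # default, used by the standard skins
$wgAwesomeSkinLogo = "$wgSkinLogos/rounded.png";  # read only by the site-specific skin

The site-specific skin would then read $wgAwesomeSkinLogo (or build the 
path from $wgSkinLogos and its own name), so relocating the logos again 
would only mean changing one setting.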

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Wed, 22 Sep 2010 12:30:35 -0700, Brion Vibber wrote:

 On Wed, Sep 22, 2010 at 11:09 AM, Dan Nessett dness...@yahoo.com
 wrote:
 
 Some have mentioned the possibility of using the wiki family logic to
 help achieve these objectives. Do you have any thoughts on this? If you
 think it is a good idea, how do we find out more about it?


 I'd just treat it same as any other wiki. Whether you're running one or
 multiple wiki instances out of one copy of the code base, it just
 doesn't make a difference here.
 
 -- brion

Not to hammer this point too hard, but there is another reason for 
supporting switching in of temporary resources.

We have one worked example of a Selenium test - PagedTiffHandler. During 
the course of its execution, it uploads some tiff files to test some of 
its functionality. Currently, once an image is uploaded, there is no way 
to completely delete it (short of drastic administrative action involving 
raw database manipulation). So, running PagedTiffHandler twice doesn't 
work unless you switch in temporary resources on each run that are then 
deleted at the end of the run.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 09:24:18 -0700, Brion Vibber wrote:

 Given a test matrix with multiple OSes, this ain't something individual
 devs will be running in full over and over as they work. Assume
 automated batch runs, which can be distributed over as many databases
 and clients as you like.
 
 For small test subsets that are being used during testing the equation
 still doesn't change much: reset the wiki to known state, run the tests.
 Keep it simple!
 
 -- brion
 
 
 On Thursday, September 23, 2010, Dan Nessett dness...@yahoo.com wrote:
 On Wed, 22 Sep 2010 12:30:35 -0700, Brion Vibber wrote:

 On Wed, Sep 22, 2010 at 11:09 AM, Dan Nessett dness...@yahoo.com
 wrote:

 Some have mentioned the possibility of using the wiki family logic to
 help achieve these objectives. Do you have any thoughts on this? If
 you think it is a good idea, how do we find out more about it?


 I'd just treat it same as any other wiki. Whether you're running one
 or multiple wiki instances out of one copy of the code base, it just
 doesn't make a difference here.

 -- brion

 Not to hammer this point too hard, but there is another reason for
 supporting switching in of temporary resources.

 We have one worked example of a Selenium test - PagedTiffHandler.
 During the course of its execution, it uploads some tiff files to test
 some of its functionality. Currently, once an image is uploaded, there
 is no way to completely delete it (short of drastic administrative
 action involving raw database manipulation). So, running
 PagedTiffHandler twice doesn't work unless you switch in temporary
 resources on each run that are then deleted at the end of the run.

 --
 -- Dan Nessett


 ___ Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


I am very much in favor of keeping it simple. I think the issue is 
whether we will support more than one regression test (or individual test 
associated with a regression test) running concurrently on the same test 
wiki. If not, then I agree, no switching logic is necessary.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 10:29:58 -0700, Brion Vibber wrote:

 On Thu, Sep 23, 2010 at 9:46 AM, Dan Nessett dness...@yahoo.com wrote:
 
 I am very much in favor of keeping it simple. I think the issue is
 whether we will support more than one regression test (or individual
 test associated with a regression test) running concurrently on the
 same test wiki. If not, then I agree, no switching logic is necessary.


 *nod*
 
 It might be a good idea to divide the test set up into, say 'channels'
 or 'bundles' which are independent of each other, but whose individual
 steps must run in sequence. If the tests are designed well, you should
 be able to run tests from multiple 'channels' on the same wiki
 simultaneously -- just as in the real world, multiple users are doing
 multiple things on your wiki at the same time.
 
 So one test set might be:
 * create page A-{{unique-id}} as user One-{{unique-id}}
 * open editing page as user One-{{unique-id}}
 * open and save the page as user Two-{{unique-id}}
 * save the page as user One-{{unique-id}}
 * confirm edit conflict / merging behavior was as expected
 
 And another might be:
 * register a new user account User-{{unique-id}}
 * change skin option in preferences
 * confirm that the skin changed as expected
 
 These tests don't interfere with each other -- indeed if they did, that
 would be information you'd need to know about a serious bug!
 
 Most test sets should be fairly separate like this; only some that
 change global state (say, a site administrator using a global
 configuration panel to change the default skin) would need to be run
 separately.
 
 -- brion

After thinking about this some more I think you are right. We should at 
least start with something simple and only make it more complex (e.g., 
wiki resource switching) if the simple approach has significant problems.

There is already a way to 'bundle' individual tests together. It is the 
selenium test suite. We could use that. We could then break up a 
regression test into separate test suites that could run concurrently.

Summarizing, here is my understanding of your proposal:

+ A regression test run comprises a set of test suites, each of which may 
run concurrently.

+ If you want to run multiple regression tests concurrently, use 
different test wikis (which can run over the same code base, but which 
are identified by different URLs - i.e., rely on httpd to multiplex 
multiple concurrent regression tests).

+ If you want to run parts of a regression test concurrently, the unit of 
concurrency is the test suite.

+ A regression test begins by establishing a fresh wiki. Each test suite 
starts by establishing the wiki state it requires (e.g., for 
PagedTiffHandler, loading Multipage.tiff).

+ It is an open question whether test suite or total regression test 
cleanup is necessary. It may be possible to elide this step and simply 
rely on regression test initialization to cleanup any wiki state left 
around by a previous test run.

There are still some open questions:

+ How do you establish a fresh wiki for a URL used previously for a test 
run?

+ URLs identify test wikis. Only one regression test can run at a time on 
any one of these. How do you synchronize regression test initiation so 
there is some sort of lock on a test wiki currently running a regression 
test?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 14:10:24 -0700, Brion Vibber wrote:
 
 + URLs identify test wikis. Only one regression test can run at a time on
 any one of these. How do you synchronize regression test initiation so
 there is some sort of lock on a test wiki currently running a
 regression test?


 Simplest way would be to have one wiki (database + URL) for each
 regression test (test1234wiki), or even for each run of each
 regression test (test1234run432wiki).
 
 These could be created/removed as needed through simple shell scripting.
 
 -- brion

Not sure I get this. Here is what I understand would happen when a 
developer checks in a revision:

+ A script runs that manages the various regression tests run on the 
revision (e.g., parserTests, PHPUnit tests, the Selenium-based regression 
test).

+ The Selenium regression test needs a URL to work with. There is a 
fixed set of these defined in httpd.conf.

+ Before assigning one of these to the regression test run, there is a 
requirement that it isn't currently busy running a regression test for a 
different revision. So, you need resource access control on the URLs.

+ Once you have an idle URL, you can initialize the wiki per your 
previous comments, including loading the revision into the directory 
associated with the URL.

How does this fit into the idea of using a wiki per regression test or 
regression test run?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 14:41:32 -0700, Brion Vibber wrote:

 On Thu, Sep 23, 2010 at 2:31 PM, Dan Nessett dness...@yahoo.com wrote:
 
 Not sure I get this. Here is what I understand would happen when a
 developer checks in a revision:

 + A script runs that manages the various regression tests run on the
 revision (e.g., parserTests, PHPUnit tests, the Selenium-based
 regression test).

 + The Selenium regression test needs a URL to work with. There are a
 fixed set of these defined in httpd.conf.


 There's no need to have a fixed set of URLs; just as with Wikimedia's
 public-hosted sites you can add individually-addressable wikis
 dynamically at whim without touching any Apache configuration. URL
 rewriting, or wildcard hostnames, or whatever lets you make as many
 distinct URLs as you like, funnel them through a single web server and a
 single codebase, but have them running with different databases.
 -- brion

Are there instructions somewhere that describe how to do this?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 15:50:48 -0700, Brion Vibber wrote:

 On Thu, Sep 23, 2010 at 2:54 PM, Dan Nessett dness...@yahoo.com wrote:
 
 On Thu, 23 Sep 2010 14:41:32 -0700, Brion Vibber wrote:
  There's no need to have a fixed set of URLs; just as with Wikimedia's
  public-hosted sites you can add individually-addressable wikis
  dynamically at whim without touching any Apache configuration. URL
  rewriting, or wildcard hostnames, or whatever lets you make as many
  distinct URLs as you like, funnel them through a single web server
  and a single codebase, but have them running with different
  databases. -- brion

 Are there instructions somewhere that describe how to do this?


 http://www.mediawiki.org/wiki/Manual:Wiki_family#Wikimedia_Method
 
 -- brion

Thinking about this a bit, we seem to have come full circle. If we use a 
URL per regression test run, then we need to multiplex wiki resources. 
When you set up a wiki family, the resources are permanent. But, for a 
test run, you need to set them up, use them and then reclaim them. The 
resources are the db, the images directory, cache data, etc.

So, either we must use the fixed URL scheme I mentioned previously, or we 
are back to resource switching (admittedly using the approach already in 
place for the wikipedia family).

The test run can set up the resources and reclaim them, but we also need 
to handle test runs that fail in the middle (due to e.g. system OS 
crashes, power outages, etc.). The resources allocated for such runs will 
become orphans and some sort of garbage collection is required.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-23 Thread Dan Nessett
On Thu, 23 Sep 2010 20:13:23 -0700, Brion Vibber wrote:

 On Thu, Sep 23, 2010 at 7:19 PM, Dan Nessett dness...@yahoo.com wrote:
 
 I appreciate your recent help, so I am going to ignore the tone of your
 last message and focus on issues. While a test run can set up, use and
 then delete the temporary resources it needs (i.e., db, images
 directory, etc.), you really haven't answered the question I posed. If
 the test run ends abnormally, then it will not delete those resources.
 There has to be a way to garbage collect orphaned dbs, images
 directories and cache entries.


 Any introductory Unix sysadmin handbook will include examples of shell
 scripts to find old directories and remove them, etc. For that matter
 you could simply delete *all* the databases and files on the test
 machine every day before test runs start, and not spend even a second of
 effort worrying about cleaning up individual runs.
 
 Since each test database is a fresh slate, there is no shared state
 between runs -- there is *no* need to clean up immediately between runs
 or between test sets.
 
 
 My personal view is we should start out simple (as you originally
 suggested) with a set of fixed URLs that are used serially by test
 runs. Implementing this is probably the easiest option and would allow
 us to get something up and running quickly. This approach doesn't
 require significant development, although it does require a way to
 control access to the URLs so test runs don't step on each other.
 
 
 What you suggest is more difficult and harder to implement than creating
 a fresh database for each test run, and gives no clear benefit in
 exchange.
 
 Keep it simple by *not* implementing this idea of a fixed set of URLs
 which must be locked and multiplexed. Creating a fresh database and
 directory for each run does not require any additional development. It
 does not require devising any access control. It does not require
 devising a special way to clean up resources or restore state.
 
 -- brion

I am authentically sorry that you feel obliged to couch your last 2 
replies in an offensive manner. You have done some very good work on the 
Mediawiki code and deserve a great deal of credit for it.

It is clear you do not understand what I am proposing (that is, what I 
proposed after I accepted your suggestion to keep things simple):

+ Every test run creates a fresh database (and a fresh images directory). 
It does this by first dropping the database associated with the last run, 
recursively deleting the phase3 directory holding the code from the 
previous run, checking out the revision for the current run (or, if this 
is judged too expensive, we could hold the revisions in tar files and 
untar them into the directory), adjusting things so that the wiki will 
work (e.g., recursively chmoding the images directory so it is writable), 
and installing a LocalSettings file so things like imagemagick, texvc, 
etc. are locatable and global variables are set appropriately. All of this 
is done in the directory associated with the fixed URL (see the sketch 
after this list).

+ Before each test suite of a regression test runs it prepares the wiki. 
For example, if a prerequisite for the suite is the availability of an 
image, it uploads it before starting.

+ The regression test can be guarded by writing a lock file in the images 
directory (which protects all code and data directories). When the 
regression test completes, the lock file can be removed and the next 
regression test started. If there is only one test driver application 
running, the lock file is unnecessary. If the test driver application 
crashes for some reason, a simple utility can sweep the directory 
structures associated with the URLs and remove everything.
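
For concreteness, the per-run reset could be a small command-line PHP 
script along the following lines. Every name in it (paths, the database, 
the credentials, the repository URL) is an illustrative assumption, not an 
existing project script:

<?php
// Sketch: reset the wiki behind one fixed test URL before a regression run.
$runDir   = '/srv/selenium/testwiki1';      // directory behind the fixed URL
$dbName   = 'selenium_testwiki1';
$revision = isset( $argv[1] ) ? $argv[1] : 'HEAD';
$lock     = "$runDir/phase3/images/.regression-test.lock";

// Refuse to start while another regression test owns this URL.
if ( file_exists( $lock ) ) {
	fwrite( STDERR, "Test wiki is busy; try again later\n" );
	exit( 1 );
}

// Drop the previous run's database and create a fresh, empty one.
// (Loading the schema -- installer or base dump -- is omitted here.)
$db = new mysqli( 'localhost', 'testuser', 'testpass' );
$db->query( "DROP DATABASE IF EXISTS `$dbName`" );
$db->query( "CREATE DATABASE `$dbName`" );

// Replace the previous run's code tree with the revision under test.
exec( 'rm -rf ' . escapeshellarg( "$runDir/phase3" ) );
exec( "svn checkout -r $revision http://svn.example.org/mediawiki/trunk/phase3 " .
	escapeshellarg( "$runDir/phase3" ) );

// Make the images directory writable and drop in a canned LocalSettings.
exec( 'chmod -R a+w ' . escapeshellarg( "$runDir/phase3/images" ) );
copy( '/srv/selenium/LocalSettings.testwiki1.php', "$runDir/phase3/LocalSettings.php" );

// Claim the wiki for this run; the test driver removes the lock when done.
touch( $lock );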

While this is only a sketch, it is obviously simpler than attempting to 
set up a test run using the Wikipedia family scheme. The previous test 
run's db, images directory, etc. are deleted at the beginning of the next 
run that uses the fixed URL and its associated directory space. There is 
no need to "Configure your DNS with a wildcard A record, and apache with 
a server alias (like ServerAlias *.yourdomain.net)", something that may 
not be possible for some sites. There is no need to dynamically edit 
LocalSettings to fix up the upload directory global variable.

As for cleaning up between runs, this simply ensures the database server 
doesn't become clogged with extraneous databases, that directory space is 
used efficiently and that memcached doesn't hold useless data. While it 
may be possible to get by with cleaning up every 24 hours, the fact that 
the fresh wiki installation process cleans up these resources by default 
means such a global sweep is completely unnecessary.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-22 Thread Dan Nessett
 directory as well as other 
resources copied. The result could then be tar'd and uploaded as the base 
for the PagedTiffHandler test suite. This would relieve the test setup 
code of uploading multipage.tiff on each test run.

We may even consider pre-installing the base db so that the switch-in 
code can use db functions to clone it, rather than creating it from the 
base dump for each test run that uses it. The latter could take a 
significant amount of time, thereby severely slowing testing.

 After the test run completes, the testing application cleans up the
 test run by requesting the deletion of the temporary resources and the
 objectcache table entry associated with the test run.
 In some cases, tests will not change any data, e.g. testing dynamic skin
 elements in vector skin. Would it make sense not to tear down the
 testing environment in that case in order to save some time when testing
 repeatedly? I think, there is a conflict between performance and amount
 of data, but who wins?

There may be ways to make things more efficient by not immediately 
deleting the temporary resources. However, eventually we have to delete 
them. So, there is a question of where the temporary resources identifier 
is stored so it can be used later for a clean-up request. I was assuming 
the switch-in request occurs in test suite start-up and the clean-up 
request occurs in the test suite finishing code. But, we should consider 
alternative implementation strategies.

 In general, it seems to me that we have some similarity with what is
 called wiki family on mediawiki.org. One could see multiple testing
 environments as a set of multiple wikis that share a common codebase
 [1]. Does anybody have experience with wiki families and the object
 cache rsp. memcached?

I agree. (In fact, I mentioned this previously). No need to reinvent the 
wheel.

 I am not sure whether we can use the same codebase as parsertests. I'd
 rather think, parser tests are a special case of what we are sketching
 here. On the other hand, I don't think it is a good idea to have two
 separate approaches for very similar tasks in the code. Do you think it
 would be feasible to separate the preparation part from both parser
 tests and selenium tests and build both of them on a common ground?
 

The parserTests code creates some temporary tables in an existing wiki 
database. Currently, these tables are empty and prefixed with a static 
identifier. The requirements for parserTests and selenium tests are 
significantly different. While we may be able to learn from the 
parserTests code, we would have to change the parserTest code 
significantly in order to use it. I think it would actually be less work 
to start from scratch. Of course, as you mention, there may be code used 
to support wiki families that we could use without much modification.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-22 Thread Dan Nessett
On Wed, 22 Sep 2010 18:57:12 +0200, Bryan Tong Minh wrote:

 On Wed, Sep 22, 2010 at 6:47 PM, Dan Nessett dness...@yahoo.com wrote:

 I think the object cache and memcached are alternative ways of storing
 persistent data. (I also am not an expert in this, so I could be
 wrong). My understanding is memcached uses the memcached daemon
 (http:// memcached.org/), while the object cache uses the underlying
 database. If so, then memcached data disappears after a system crash or
 power outage, whereas object cache data should survive.


 There is no ObjectCache class. Basically, there is a common base class
 called BagOStuff and from that various classes for various backends are
 defined such as SqlBagOStuff and MemcacheBagOStuff. To the code outside
 that class, there is no visible difference.
 
 
 Bryan

Thanks for the clarification. I just looked at ObjectCache.php and it 
appears to provide a set of functions for accessing cache data of any 
type. Is this correct?

How does memcached fit into this? When I looked at BagOStuff, I didn't 
find a MemcacheBagOStuff class. Is it defined elsewhere?
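
For what it is worth, the calling side looks roughly the same whichever 
backend is configured. A minimal sketch, assuming a trunk checkout of that 
era and code where MediaWiki is already loaded (the key parts and the 
cached value are made up):

// Backend-agnostic use of the shared cache discussed above.
$cache = wfGetMainCache();                      // some BagOStuff subclass, chosen
                                                // according to $wgMainCacheType
$key = wfMemcKey( 'selenium', 'test-run', 42 ); // wiki-prefixed cache key
$cache->set( $key, array( 'db' => 'testwiki_42' ), 3600 );  // expires after an hour
$state = $cache->get( $key );                   // false on a miss or after expiry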

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-22 Thread Dan Nessett
 I just looked at SqlBagOStuff. It already has entry expiration logic.
 So, it seems perfect for use by the switch-in/clean-up functionality.

I spoke too soon. Expired entries are deleted automatically when any 
entry is referenced. Unfortunately, that means there is no opportunity to 
reclaim the temporary resources identified in the entry before its state 
is lost. This is a problem.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-22 Thread Dan Nessett
On Wed, 22 Sep 2010 11:00:53 -0700, Brion Vibber wrote:

 Hmm, I think you guys are overthinking the details on this; let's step
 back a level.
 
 When you're running tests, you have these tasks:
 * create a blank-slate wiki to run tests in
 * populate the empty wiki with known data
 * run tests on this wiki in a known state
 * clean up in some way
 
 The parserTests system is designed with some particular constraints:
 * run within the local MediaWiki test instance a developer already has,
 without altering its contents
 * run all tests in a single command-line program
 * expose no in-test data to the outside world
 
 It can do this very easily using temporary tables and a temporary
 directory because it only needs to work within that one single test
 process; once it's done, it's done and all the temporary data can be
 discarded. Nothing needs to be kept across processes or exposed to
 clients.
 
 For Selenium tests, you have a very different set of constraints:
 * test data must be exposed to web clients over a web server
 * test data must be retained across multiple requests
 
 The simplest way to accomplish this is to have a dedicated, web-exposed
 test wiki instance. A test run would go like this:
 * (re)initialize test wiki into known state
 * run series of tests
 
 I'd recommend not trying to use the parserTest harness/initialization
 for this; it'll be a *lot* simpler to just script creation of a fresh
 wiki. (drop database, create database, slurp tables, run update.php)
 
 Some non-destructive tests can always be run on any existing instance --
 and probably should be! -- and some 'active' tests will be freely
 runnable on existing instances that are used for development and
 testing, but if you want to work with a blank slate wiki exposed to web
 clients, keep things simple and just make a dedicated instance.
 
 -- brion
 
 
 On Wed, Sep 22, 2010 at 9:47 AM, Dan Nessett dness...@yahoo.com wrote:
 
 On Wed, 22 Sep 2010 15:49:40 +0200, Markus Glaser wrote:

  Hi,
 
  here are my thoughts about phpunit and selenium testing.
 
  The wiki under test is set up with a master database consisting of a
  single objectcache table. The entries of this table specify a test
  run identifier as primary key and temporary resource identifiers as
  dependent fields.
  If I understand this correctly, this would not allow to test any
  wikis that are running on live sites, e.g. intranet wikis. While I
  agree that regression testing on live sites is not a good idea, I
  kind of like the notion that after setting up a wiki with all the
  extensions I like to have, I could do some sort of everything up and
  running-test. With the concept of using separate testing databases
  and resources, this would be possible without interference with the
  actual data and could even be done at intervals during, say,
  maintenance periods.

 The problem with testing live sites is tests may alter wiki data
 (consequently, test run reproducibility becomes a problem). If all of
 the tests are read-only, then that isn't a problem, but it means
 developing a whole set of tests that conform to that constraint.

 Nevertheless, it wouldn't be hard to design the switching mechanism to
 allow the testing of live sites. There could be an option in test setup
 and cleanup that effectively says don't switch-in/clean-up temporary
 resources. Personally, I think use of this option is dangerous, but it
 wouldn't be hard to implement.

  Setup of a test run requires the creation of the test run temporary
  resources and a entry in the objectcache table.
  Are there already mechanisms for this? I haven't done too much work
  with the objectcache. This is where memcached data is stored, right?
  So how do I get the data that is needed? This question leads me to
  another one: How do I get the testind database and resources? As I
  see this, it should be part of the testing framework to be able to
  produce the set of data needed from a normal MW installation. The
  whole mechanism would actually be something like a backup, so we
  might look into any existing solutions for that.

 We should use the existing ObjectCache class to manage the object
 cache. However, if there exists some switching-in code, I doubt it has
 corresponding clean-up code. So, we probably need to do some
 development even if we use existing mechanisms.

 I think the object cache and memcached are alternative ways of storing
 persistent data. (I also am not an expert in this, so I could be
 wrong). My understanding is memcached uses the memcached daemon
 (http:// memcached.org/), while the object cache uses the underlying
 database. If so, then memcached data disappears after a system crash or
 power outage, whereas object cache data should survive.

 You are absolutely correct that we need to figure out how to clone a
 set of temporary resources (db, images directory, perhaps cache data)
 and set them up for use (i.e., so the switch-in logic can copy them for
 the test run

Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-22 Thread Dan Nessett
On Wed, 22 Sep 2010 12:30:35 -0700, Brion Vibber wrote:

 On Wed, Sep 22, 2010 at 11:09 AM, Dan Nessett dness...@yahoo.com
 wrote:
 
 Some have mentioned the possibility of using the wiki family logic to
 help achieve these objectives. Do you have any thoughts on this? If you
 think it is a good idea, how do we find out more about it?


 I'd just treat it same as any other wiki. Whether you're running one or
 multiple wiki instances out of one copy of the code base, it just
 doesn't make a difference here.
 
 -- brion

Oh. Well, perhaps we don't agree exactly 100%. As you suggest, let's step 
back a bit.

Once we get the selenium framework working I assume it will be used for a 
regression test. This will comprise a set of individual tests. Generally, 
these tests will write into the wiki db (some may not, but many will). To 
ensure test reproducibility, the state of the wiki should be the same 
each time one of these individual tests runs.

But, there is a problem. With parserTests, each individual test runs 
serially. That is fine for parserTests, since (I just ran this on my 
machine) while there are 610 individual tests, each takes about .08 
seconds to run (on average). So, on my machine the whole parserTests run 
takes about 48 seconds.

Selenium tests are far more heavy-weight. A rough ball-park figure is 
each takes about 10 seconds to run (this does not include the time it 
would take to set up and tear down a clean wiki). So, a selenium-based 
regression test comprising 180 individual tests would take around 30 
minutes.

Not too bad. But, things are a bit more complicated. Each individual test 
runs multiple times, once for every browser/OS combination chosen for the 
regression test. For example, right now there are 13 configured browser/
OS combinations on the WMF Selenium Grid (see http://
grid.tesla.usability.wikimedia.org/console). So even if you only test 4 
of these browser/OS configurations, the regression test (if individual 
tests run serially) would take 2 hours. If you test 8 of them, it would 
take 4 hours.

This is starting to get onerous. If an individual developer wishes to 
ensure his modifications don't break things before committing his 
changes, then waiting 4 hours for a regression test to complete is a 
pretty heavy penalty. Generally, very few will pay the price.

So, running the individual tests of a selenium-based regression test 
serially is not very attractive. This means you need to achieve some 
concurrency in the regression test. Since individual tests may interfere 
with each other, you need a way to protect them from each other. This is 
what the switching functionality is for. You switch in a base set of 
temporary resources for each test (or perhaps more likely for a 
particular test suite comprising multiple individual tests) consisting 
of a db, images directory, etc. This allows tests to run without 
interfering with each other.
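
To make the switching idea concrete, here is roughly what the client side 
could look like. Everything below is an assumption for illustration -- the 
setup/teardown entry points and the cookie name do not exist yet:

<?php
// Sketch of a test runner driving a hypothetical switch-in interface.
function callWiki( $url, $postData = array(), $cookie = '' ) {
	$ch = curl_init( $url );
	curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
	if ( $postData ) {
		curl_setopt( $ch, CURLOPT_POST, true );
		curl_setopt( $ch, CURLOPT_POSTFIELDS, $postData );
	}
	if ( $cookie ) {
		curl_setopt( $ch, CURLOPT_COOKIE, $cookie );
	}
	$out = curl_exec( $ch );
	curl_close( $ch );
	return $out;
}

// 1. Ask the wiki to clone the base resources for this suite
//    (hypothetical action name).
$base = 'http://localhost/wiki/index.php';
$runId = trim( callWiki( $base . '?action=seleniumsetup&suite=PagedTiffHandler' ) );

// 2. Every request a test makes carries the run id, so the early hook on
//    the wiki side can switch in the right db/images copy.
$cookie = 'selenium_run_id=' . urlencode( $runId );
callWiki( $base . '/Main_Page', array(), $cookie );

// 3. Tear down when the suite finishes.
callWiki( $base . '?action=seleniumteardown&run=' . urlencode( $runId ), array(), $cookie );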


-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-20 Thread Dan Nessett
On Mon, 20 Sep 2010 22:32:24 +0200, Platonides wrote:

 Dan Nessett wrote:
 On Sun, 19 Sep 2010 23:42:08 +0200, Platonides wrote:
 You load originaldb.objectcache, retrieve the specific configuration,
 and switch into it.
 For supporting many sumyltaneous configurations, the keyname could
 have the instance (whatever that cookie is set to) appended, although
 those dynamic configurations make me a bit nervous.
 
 Well, this may work, but consider the following.
 
 A nightly build environment (and even a local developer test
 environment) tests the latest revision using a suite of regression
 tests. These tests exercise the same wiki code, each parametrized by:
 
 + Browser type (e.g., Firefox, IE, Safari, Opera) + Database (e.g.,
 MySQL, Postgres, SQLite) + OS platform (e.g., Linux, BSD unix variant,
 Windows variant)
 
 A particular test environment may not support all permutations of these
 parameters (in particular a local developer environment may support
 only one OS), but the code mechanism for supporting the regression
 tests should. To ensure timely completion of these tests, they will
 almost certainly run concurrently.
 
 So, when a regression test runs, it must not only retrieve the
 configuration data associated with it, it must create a test run
 environment (e.g., a test db, a test images directory, test cache
 data). The creation of this test run environment requires an identifier
 somewhere so its resources may be reclaimed when the test run completes
 or after an abnormal end of the test run.
 
 Thus, the originaldb must not only hold configuration data with db
 keys identifying the particular test and its parameters, but also an
 identifier for the test run that can be used to reclaim resources if
 the test ends abnormally. The question is whether using a full wiki db
 for this purpose is advantageous or whether stripping out all of the
 other tables except the objectcache table is the best implementation
 strategy.
 
 Such originaldb would be empty for an instance used just for regression
 testing and could in fact only contain the objectcache table. If it's a
 developer machine he would use the originaldb for local testing, but a
 nigthly would not need to (in fact, errors trying to access those
 missing tables would be useful for detecting errors in the isolating
 system).

Sounds reasonable. Using this approach, here is how I see the logical 
flow of a test run. (NB: by a test run, I mean an execution of a test in 
the regression test set).

The wiki under test is set up with a master database consisting of a 
single objectcache table. The entries of this table specify a test run 
identifier as primary key and temporary resource identifiers as dependent 
fields.

Setup of a test run requires the creation of the test run temporary 
resources and an entry in the objectcache table. A cookie or other state 
is returned to the testing application. This state is provided by the 
testing application during the test run on each request to the wiki under 
test.

When a request is sent to the wiki under test, very early in the request 
processing (e.g., immediately after LocalSettings is processed) a hook is 
called with the provided state information as an argument that accesses 
the objectcache table. The extension function handling the hook switches 
in the temporary resources and returns.

After the test run completes, the testing application cleans up the test 
run by requesting the deletion of the temporary resources and the 
objectcache table entry associated with the test run.

In order to handle prematurely abandoned test runs, the objectcache table 
entry probably needs a field that specifies its lifetime. If this 
lifetime expires, the temporary resources associated with the entry are 
reclaimed and the entry is deleted.
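
As a rough illustration of the switch-in step, the handler for that early 
hook could look something like the following. The hook wiring, cookie name 
and entry layout are assumptions, not existing MediaWiki interfaces:

// Hypothetical handler, run immediately after LocalSettings.php has been
// processed and before any db/cache/upload access.
function wfSeleniumSwitchInTestResources() {
	global $wgDBname, $wgUploadDirectory;

	if ( !isset( $_COOKIE['selenium_run_id'] ) ) {
		return true; // normal request, leave the wiki untouched
	}
	$runId = $_COOKIE['selenium_run_id'];

	// Look the run up in the master objectcache table.
	// NB: going through wfGetDB() here already instantiates the load
	// balancer against the original $wgDBname; a real implementation
	// would have to re-point or defer that connection.
	$dbr = wfGetDB( DB_SLAVE );
	$row = $dbr->selectRow( 'objectcache', 'value',
		array( 'keyname' => 'selenium:run:' . $runId ), __METHOD__ );
	if ( !$row ) {
		return true; // unknown or expired run id; fail soft
	}
	$run = unserialize( $row->value );

	// Point this request at the per-run copies created at setup time
	// (cache isolation would need similar treatment).
	$wgDBname = $run['db'];
	$wgUploadDirectory = $run['images'];
	return true;
}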

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-19 Thread Dan Nessett
On Fri, 17 Sep 2010 19:13:33 +, Dan Nessett wrote:

 On Fri, 17 Sep 2010 18:40:53 +, Dan Nessett wrote:
 
 I have been tasked to evaluate whether we can use the parserTests db
 code for the selenium framework. I just looked it over and have serious
 reservations. I would appreciate any comments on the following
 analysis.
 
 The environment for selenium tests is different than that for
 parserTests. It is envisioned that multiple concurrent tests could run
 using the same MW code base. Consequently, each test run must:
 
 + Use a db that if written to will not destroy other test wiki
 information.
 + Switch in a new images and math directory so any writes do not
 interfere with other tests.
 + Maintain the integrity of the cache.
 
 Note that tests would *never* run on a production wiki (it may be
 possible to do so if they do no writes, but safety considerations
 suggest they should always run on a test data, not production data). In
 fact production wikis should always retain the setting
 $wgEnableSelenium = false, to ensure selenium test are disabled.
 
 Given this background, consider the following (and feel free to comment
 on it):
 
 parserTests temporary table code:
 
 A fixed set of tables are specified in the code. parserTests creates
 temporary tables with the same name, but using a different static
 prefix. These tables are used for the parserTests run.
 
 Problems using this approach for selenium tests:
 
  + Selenium tests on extensions may require use of extension specific
 tables, the names of which cannot be elaborated in the code.
 
 + Concurrent test runs of parserTests are not supported, since the
 temporary tables have fixed names and therefore concurrent writes to
 them by parallel test runs would cause interference.
 
 + Clean up from aborted runs requires dropping fossil tables. But, if a
 previous run tested an extension with extension-specific tables, there
 is no way for a test of some other functionality to figure out which
 tables to drop.
 
 For these reasons, I don't think we can reuse the parserTests code.
 However, I am open to arguments to the contrary.
 
 After reflection, here are some other problems.
 
 + Some tests assume the existence of data in the db. For example, the
 PagedTiffHandler tests assume the image Multipage.tiff is already
 loaded. However, this requires an entry in the image table. You could
 modify the test to clone the existing image table, but that means you
 have problems with:
 
 + Some tests assume certain data is *not* in the db. PagedTiffHandler
 has tests that upload images. These cannot already be in the images
 table. So, you can't simply clone the images table.
 
 All of this suggests to me that a better strategy is:
 
 + When the test run begins, clone a db associated with the test suite.
 
 + Switch the wiki to use this db and return a cookie or some other state
 information that identifies this test run configuration.
 
 + When the test suite runs, each wiki access supplies this state so the
 wiki code can switch in the correct db.
 
 + Cleanup of test runs requires removing the cloned db.
 
 + To handled aborted runs, there needs to be a mechanism to time out
 cloned dbs and the state associated with the test run.

Regardless of how we implement the persistent storage for managing test 
runs, there needs to be a way to trigger its use. To minimize the changes 
to core, we need a hook that runs after processing LocalSettings (and by 
implication DefaultSettings), but before any wiki state is accessed 
(e.g., before accessing the db, the images directory, any cached data). I 
looked at the existing hooks, but so far have not found one that appears 
suitable.

So, either we need to identify an appropriate existing hook, or we need 
to add a hook that meets the requirements.
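
If no suitable existing hook turns up, adding one looks like a very small 
core change. A sketch -- the hook name is invented here, and exactly where 
in the startup sequence it should fire would need discussion:

// In core, immediately after DefaultSettings.php and LocalSettings.php
// have been processed, but before any db, cache or upload-directory access:
wfRunHooks( 'SetupAfterLocalSettings', array() );

// In the test-support extension's setup file:
$wgHooks['SetupAfterLocalSettings'][] = 'wfSeleniumSwitchInTestResources';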

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-19 Thread Dan Nessett
On Sun, 19 Sep 2010 02:47:00 +0200, Platonides wrote:

 Dan Nessett wrote:
 What about memcached?
 (that would be a key based on the original db name)
 
 The storage has to be persistent to accommodate wiki crashes (e.g.,
 httpd crash, server OS crash, power outage). It might be possible to
 use memcachedb, but as far as I am aware that requires installing
 Berkeley DB, which complicated deployment.
 
 Why not employ the already installed DB software used by the wiki? That
 provides persistent storage and requires no additional software.
 
 My original idea was to use whatever ObjectCache the wiki used, but it
 could be forced to use the db as backend (that's the objectcache table).

My familiarity with the ObjectCache is casual. I presume it holds data 
that is set on particular wiki access requests and that data is then used 
on subsequent requests to make them more efficient. If so, then using a 
common ObjectCache for all concurrent test runs would cause interference 
between them. To ensure such interference doesn't exist, we would need to 
switch in a per-test-run ObjectCache (which takes us back to the idea of 
using a per-test-run db, since the ObjectCache is implemented using the 
objectcache table).
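
For the bookkeeping part, the existing db-backed BagOStuff interface would 
probably do, with one key per test run; the per-run content cache would 
then live with the rest of the cloned resources. A sketch (key layout and 
field names are just assumptions):

// Bookkeeping kept in the master wiki's objectcache table.
$cache = wfGetCache( CACHE_DB );

// At setup time:
$runId = dechex( mt_rand( 0, 0x7fffffff ) ) . dechex( time() );
$cache->set( 'selenium:run:' . $runId,
	array(
		'db' => 'testwiki_' . $runId,
		'images' => "/tmp/selenium-$runId/images",
	),
	60 * 60 // lifetime, so abandoned runs can eventually be reclaimed
);

// On each request, the switch-in code reads it back:
$run = $cache->get( 'selenium:run:' . $runId );

// At cleanup time:
$cache->delete( 'selenium:run:' . $runId );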

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-19 Thread Dan Nessett
On Sun, 19 Sep 2010 23:42:08 +0200, Platonides wrote:

 Dan Nessett wrote:
 Platonides wrote:
 Dan Nessett wrote:
 What about memcached?
 (that would be a key based on the original db name)

 The storage has to be persistent to accommodate wiki crashes (e.g.,
 httpd crash, server OS crash, power outage). It might be possible to
 use memcachedb, but as far as I am aware that requires installing
 Berkeley DB, which complicated deployment.

 Why not employ the already installed DB software used by the wiki?
 That provides persistent storage and requires no additional software.

 My original idea was to use whatever ObjectCache the wiki used, but it
 could be forced to use the db as backend (that's the objectcache
 table).
 
 My familiarity with the ObjectCache is casual. I presume it holds data
 that is set on particular wiki access requests and that data is then
 used on subsequent requests to make them more efficient. If so, then
 using a common ObjectCache for all concurrent test runs would cause
 interference between them. To ensure such interference doesn't exist,
 we would need to switch in a per-test-run ObjectCache (which takes us
 back to the idea of using a per-test-run db, since the ObjectCache is
 implemented using the objectcache table).
 
 You load originaldb.objectcache, retrieve the specific configuration,
 and switch into it.
 For supporting many sumyltaneous configurations, the keyname could have
 the instance (whatever that cookie is set to) appended, although those
 dynamic configurations make me a bit nervous.

Well, this may work, but consider the following.

A nightly build environment (and even a local developer test environment) 
tests the latest revision using a suite of regression tests. These tests 
exercise the same wiki code, each parametrized by:

+ Browser type (e.g., Firefox, IE, Safari, Opera)
+ Database (e.g., MySQL, Postgres, SQLite)
+ OS platform (e.g., Linux, BSD unix variant, Windows variant)

A particular test environment may not support all permutations of these 
parameters (in particular a local developer environment may support only 
one OS), but the code mechanism for supporting the regression tests 
should. To ensure timely completion of these tests, they will almost 
certainly run concurrently.

So, when a regression test runs, it must not only retrieve the 
configuration data associated with it, it must create a test run 
environment (e.g., a test db, a test images directory, test cache data). 
The creation of this test run environment requires an identifier 
somewhere so its resources may be reclaimed when the test run completes 
or after an abnormal end of the test run.

Thus, the originaldb must not only hold configuration data with db keys 
identifying the particular test and its parameters, but also an 
identifier for the test run that can be used to reclaim resources if the 
test ends abnormally. The question is whether using a full wiki db for 
this purpose is advantageous or whether stripping out all of the other 
tables except the objectcache table is the best implementation strategy.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-18 Thread Dan Nessett
On Sun, 19 Sep 2010 00:28:42 +0200, Platonides wrote:

 Where to store the data is an open question, one that requires
 consultation with others. However, here are some thoughts:
 
 + The data must be persistent. If the wiki crashes for some reason,
 there may be cloned dbs and test-specific copies of images and
 images/math hanging around. (Depending how we handle the cache
 information, there may also be fossil cache data). This requires
 cleanup after a wiki crash.
 
 + It would be possible to store the data in a file or in a master db
 table. Which is best (or if something else is better) is a subject for
 discussion.
 
 What about memcached?
 (that would be a key based on the original db name)

The storage has to be persistent to accommodate wiki crashes (e.g., httpd 
crash, server OS crash, power outage). It might be possible to use 
memcachedb, but as far as I am aware that requires installing Berkeley 
DB, which complicates deployment.

Why not employ the already installed DB software used by the wiki? That 
provides persistent storage and requires no additional software.


-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] using parserTests code for selenium test framework

2010-09-17 Thread Dan Nessett
I have been tasked to evaluate whether we can use the parserTests db code 
for the selenium framework. I just looked it over and have serious 
reservations. I would appreciate any comments on the following analysis.

The environment for selenium tests is different than that for 
parserTests. It is envisioned that multiple concurrent tests could run 
using the same MW code base. Consequently, each test run must:

+ Use a db that if written to will not destroy other test wiki 
information.
+ Switch in a new images and math directory so any writes do not 
interfere with other tests.
+ Maintain the integrity of the cache.

Note that tests would *never* run on a production wiki (it may be 
possible to do so if they do no writes, but safety considerations suggest 
they should always run on test data, not production data). In fact 
production wikis should always retain the setting $wgEnableSelenium = 
false, to ensure selenium tests are disabled.

Given this background, consider the following (and feel free to comment 
on it):

parserTests temporary table code:

A fixed set of tables are specified in the code. parserTests creates 
temporary tables with the same name, but using a different static prefix. 
These tables are used for the parserTests run.

Problems using this approach for selenium tests:

 + Selenium tests on extensions may require use of extension-specific 
tables, the names of which cannot be enumerated in the code.

+ Concurrent test runs of parserTests are not supported, since the 
temporary tables have fixed names and therefore concurrent writes to them 
by parallel test runs would cause interference.

+ Clean up from aborted runs requires dropping fossil tables. But, if a 
previous run tested an extension with extension-specific tables, there is 
no way for a test of some other functionality to figure out which tables 
to drop.

For these reasons, I don't think we can reuse the parserTests code. 
However, I am open to arguments to the contrary.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-17 Thread Dan Nessett
On Fri, 17 Sep 2010 18:40:53 +, Dan Nessett wrote:

 I have been tasked to evaluate whether we can use the parserTests db
 code for the selenium framework. I just looked it over and have serious
 reservations. I would appreciate any comments on the following analysis.
 
 The environment for selenium tests is different than that for
 parserTests. It is envisioned that multiple concurrent tests could run
 using the same MW code base. Consequently, each test run must:
 
 + Use a db that if written to will not destroy other test wiki
 information.
 + Switch in a new images and math directory so any writes do not
 interfere with other tests.
 + Maintain the integrity of the cache.
 
 Note that tests would *never* run on a production wiki (it may be
 possible to do so if they do no writes, but safety considerations
 suggest they should always run on a test data, not production data). In
 fact production wikis should always retain the setting $wgEnableSelenium
 = false, to ensure selenium test are disabled.
 
 Given this background, consider the following (and feel free to comment
 on it):
 
 parserTests temporary table code:
 
 A fixed set of tables are specified in the code. parserTests creates
 temporary tables with the same name, but using a different static
 prefix. These tables are used for the parserTests run.
 
 Problems using this approach for selenium tests:
 
  + Selenium tests on extensions may require use of extension specific
 tables, the names of which cannot be elaborated in the code.
 
 + Concurrent test runs of parserTests are not supported, since the
 temporary tables have fixed names and therefore concurrent writes to
 them by parallel test runs would cause interference.
 
 + Clean up from aborted runs requires dropping fossil tables. But, if a
 previous run tested an extension with extension-specific tables, there
 is no way for a test of some other functionality to figure out which
 tables to drop.
 
 For these reasons, I don't think we can reuse the parserTests code.
 However, I am open to arguments to the contrary.

After reflection, here are some other problems.

+ Some tests assume the existence of data in the db. For example, the 
PagedTiffHandler tests assume the image Multipage.tiff is already loaded. 
However, this requires an entry in the image table. You could modify the 
test to clone the existing image table, but that means you have problems 
with:

+ Some tests assume certain data is *not* in the db. PagedTiffHandler has 
tests that upload images. These cannot already be in the images table. 
So, you can't simply clone the images table.

All of this suggests to me that a better strategy is:

+ When the test run begins, clone a db associated with the test suite.

+ Switch the wiki to use this db and return a cookie or some other state 
information that identifies this test run configuration.

+ When the test suite runs, each wiki access supplies this state so the 
wiki code can switch in the correct db.

+ Cleanup of test runs requires removing the cloned db.

+ To handle aborted runs, there needs to be a mechanism to time out 
cloned dbs and the state associated with the test run.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-17 Thread Dan Nessett
On Fri, 17 Sep 2010 21:05:12 +0200, Platonides wrote:

 Dan Nessett wrote:
 Given this background, consider the following (and feel free to comment
 on it):
 
 parserTests temporary table code:
 
 A fixed set of tables are specified in the code. parserTests creates
 temporary tables with the same name, but using a different static
 prefix. These tables are used for the parserTests run.
 
 Problems using this approach for selenium tests:
 
  + Selenium tests on extensions may require use of extension specific
 tables, the names of which cannot be elaborated in the code.
 
 The extensions could list their table names. No problem there.
 
 
 + Concurrent test runs of parserTests are not supported, since the
 temporary tables have fixed names and therefore concurrent writes to
 them by parallel test runs would cause interference.
 
 So it gets changed to a random name with a fixed prefix... What concerns
 me is
 
 + Clean up from aborted runs requires dropping fossil tables. But, if a
 previous run tested an extension with extension-specific tables, there
 is no way for a test of some other functionality to figure out which
 tables to drop.
 
 Run a script dropping all tables with a fixed prefix (the shared part of
 the tests) when you have no tests running.
 
 For these reasons, I don't think we can reuse the parserTests code.
 However, I am open to arguments to the contrary.
 
 There may be other issues with that code, and using a separate db would
 be preferable if you have enough permissions, but this doesn't seem like
 real problems.
 
 What concerns me is that Oracle is using (r58669) a different prefix for
 the parsertests table. If it has some restriction on [medium-large]
 table names, there may not be possible to run the tests there using the
 long table names that we could produce.

The strategy you suggest is reasonable. But, I think it requires 
significant changes to the parserTests code. The question then is: is it 
simpler to modify this code or just write something new?


-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] using parserTests code for selenium test framework

2010-09-17 Thread Dan Nessett
On Sat, 18 Sep 2010 00:53:04 +0200, Platonides wrote:

 
 + Switch the wiki to use this db and return a cookie or some other
 state information that identifies this test run configuration.
 
 I think you mean for remote petitions, not just for internal queries,
 where do you expect to store that data?

Not sure what you mean by remote petitions.

Selenium requests always come through the web portal from the selenium 
server. So, no internal queries are involved.

Where to store the data is an open question, one that requires 
consultation with others. However, here are some thoughts:

+ The data must be persistent. If the wiki crashes for some reason, there 
may be cloned dbs and test-specific copies of images and images/math 
hanging around. (Depending how we handle the cache information, there may 
also be fossil cache data). This requires cleanup after a wiki crash.

+ It would be possible to store the data in a file or in a master db 
table. Which is best (or if something else is better) is a subject for 
discussion.

We may be able to use the mechanisms in the code that supports access to 
different language versions of a wiki (e.g., Wikipedia) using the same 
code. I am not familiar with these mechanisms, so this approach requires 
help from someone who is.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Keeping record of imported licensed text

2010-09-17 Thread Dan Nessett
On Fri, 10 Sep 2010 23:11:27 +, Dan Nessett wrote:

 We are currently attempting to refactor some specific modifications to
 the standard MW code we use (1.13.2) into an extension so we can upgrade
 to a more recent maintained version. One modification we have keeps a
 flag in the revisions table specifying that article text was imported
 from WP. This flag generates an attribution statement at the bottom of
 the article that acknowledges the import.
 
 I don't want to start a discussion about the various legal issues
 surrounding text licensing. However, assuming we must acknowledge use of
 licensed text, a legitimate technical issue is how to associate state
 with an article in a way that records the import of licensed text. I
 bring this up here because I assume we are not the only site that faces
 this issue.
 
 Some of our users want to encode the attribution information in a
 template. The problem with this approach is anyone can come along and
 remove it. That would mean the organization legally responsible for the
 site would entrust the integrity of site content to any arbitrary
 author. We may go this route, but for the sake of this discussion I
 assume such a strategy is not viable. So, the remainder of this post
 assumes we need to keep such licensing state in the db.
 
 After asking around, one suggestion was to keep the licensing state in
 the page_props table. This seems very reasonable and I would be
 interested in comments by this community on the idea. Of course, there
 has to be a way to get this state set, but it seems likely that could be
 achieved using an extension triggered when an article is edited.
 
 Since this post is already getting long, let me close by asking whether
 support for associating licensing information with articles might be
 useful to a large number of sites. If so, the perhaps it belongs in the
 core.

One thing I haven't seen so far (probably because it doesn't belong on 
Wikitech) is a discussion of the policy requirements. In open source 
software development, you have to carry forward licenses even if you 
substantially change the code content. The only way around this is a 
clean room implementation (e.g., how BSD Unix got around ATT's 
original licensing for Unix).

Is this also true for textual content? If so, then once you import such 
content into an article you are obliged to carry forward any licensing 
conditions on that import for all subsequent revisions.

Where is the proper place to discuss these kinds of questions?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Keeping record of imported licensed text

2010-09-14 Thread Dan Nessett
On Fri, 10 Sep 2010 23:11:27 +, Dan Nessett wrote:

 We are currently attempting to refactor some specific modifications to
 the standard MW code we use (1.13.2) into an extension so we can upgrade
 to a more recent maintained version. One modification we have keeps a
 flag in the revisions table specifying that article text was imported
 from WP. This flag generates an attribution statement at the bottom of
 the article that acknowledges the import.
 
 I don't want to start a discussion about the various legal issues
 surrounding text licensing. However, assuming we must acknowledge use of
 licensed text, a legitimate technical issue is how to associate state
 with an article in a way that records the import of licensed text. I
 bring this up here because I assume we are not the only site that faces
 this issue.
 
 Some of our users want to encode the attribution information in a
 template. The problem with this approach is anyone can come along and
 remove it. That would mean the organization legally responsible for the
 site would entrust the integrity of site content to any arbitrary
 author. We may go this route, but for the sake of this discussion I
 assume such a strategy is not viable. So, the remainder of this post
 assumes we need to keep such licensing state in the db.
 
 After asking around, one suggestion was to keep the licensing state in
 the page_props table. This seems very reasonable and I would be
 interested in comments by this community on the idea. Of course, there
 has to be a way to get this state set, but it seems likely that could be
 achieved using an extension triggered when an article is edited.
 
 Since this post is already getting long, let me close by asking whether
 support for associating licensing information with articles might be
 useful to a large number of sites. If so, the perhaps it belongs in the
 core.

The discussion about whether to support license data in the database has 
settled down. There seems to be some support. So, I think the next step 
is to determine the best technical approach. Below I provide a strawman 
proposal. Note that this is only to foster discussion on technical 
requirements and approaches. I have nothing invested in the strawman.

Implementation location: In an extension

Permissions: include two new permissions - 1) addlicensedata, and 2) 
modifylicensedata. These are pretty self-explanatory. Sites that wish to 
give all users the ability to provide and modify licensing data would 
assign these permissions to everyone. Sites that wish to allow all users 
to add licensing data, but restrict those who are allowed to modify it, 
would give the first permission to everyone and the second to a limited 
group.

Database schema: Add a licensing table to the db with the following 
columns - 1) revision_or_image, 2) revision_id, 3) image_id, 4) 
content_source, 5) license_id, 6) user_id.

The first three columns identify the revision or image to which the 
licensing data is associated. I am not particularly adept with SQL, so 
there may be a better way to do this. The content_source column is a 
string that is a URL or other reference that specifies the source of the 
content under license. The license_id identifies the specific license for 
the content. The user_id identifies the user that added the licensing 
information. The user_id may be useful if a site wishes to allow someone 
who added the licensing information to delete or modify it. However, 
there are complications with this. Since IP addresses are easily spoofed, 
it would mean this entry should only be valid for logged in users.

Add a license table with the following columns - 1) license_id, 2) 
license_text, 3) license_name and 4) license_version. The license_id in 
the licensing table references rows in this table.

One complication is that, when a page or image is reverted, the licensing table 
must be modified to reflect the current state.

Data manipulation: The extension would use suitable hooks to insert, 
modify and render licensing data. Insertion and modification would 
probably use a relevant Edit Page or Article Management hook. Rendering 
would probably use a Page Rendering Hook.

Page rendering: You probably don't want to dump licensing data directly 
onto a page. Instead, it is preferable to output a short licensing 
statement like:

"Content on this page uses licensed content. For details, see licensing 
data."

The phrase "licensing data" would be a link to a special page that 
accesses the licensing table and displays the license data associated 
with the page.
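
To make the strawman slightly more concrete, the extension setup file might 
look roughly like this. The hook choices and all function/class names are 
only illustrative, and the schema installation is left out:

// Licensing.php -- illustrative setup only; every name is a placeholder.
$wgAvailableRights[] = 'addlicensedata';
$wgAvailableRights[] = 'modifylicensedata';

// Example policy: everyone may add license data, only sysops may modify it.
$wgGroupPermissions['user']['addlicensedata'] = true;
$wgGroupPermissions['sysop']['modifylicensedata'] = true;

// Insert/update rows in the licensing table when an article is saved, and
// emit the short notice when a page that has such rows is rendered.
$wgHooks['ArticleSaveComplete'][] = 'efLicensingOnSave';
$wgHooks['SkinAfterContent'][] = 'efLicensingNotice';

// Special page that lists the license data associated with a page.
$wgAutoloadClasses['SpecialLicensingData'] =
	dirname( __FILE__ ) . '/SpecialLicensingData.php';
$wgSpecialPages['LicensingData'] = 'SpecialLicensingData';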

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Selenium Framework - standard directory for selenium tests

2010-09-13 Thread Dan Nessett
Are there any standards for where to put selenium tests? Right now the 
Simple Selenium test is in phase3/maintenance/tests/selenium and the 
PagedTiffHandler selenium tests are in PagedTiffHandler/selenium. This 
suggests a convention of putting extension selenium test files in a sub-
directory of the top-level directory named 'selenium'. Is that an 
official convention?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Keeping record of imported licensed text

2010-09-10 Thread Dan Nessett
We are currently attempting to refactor some specific modifications to 
the standard MW code we use (1.13.2) into an extension so we can upgrade 
to a more recent maintained version. One modification we have keeps a 
flag in the revisions table specifying that article text was imported 
from WP. This flag generates an attribution statement at the bottom of 
the article that acknowledges the import.

I don't want to start a discussion about the various legal issues 
surrounding text licensing. However, assuming we must acknowledge use of 
licensed text, a legitimate technical issue is how to associate state 
with an article in a way that records the import of licensed text. I 
bring this up here because I assume we are not the only site that faces 
this issue.

Some of our users want to encode the attribution information in a 
template. The problem with this approach is anyone can come along and 
remove it. That would mean the organization legally responsible for the 
site would entrust the integrity of site content to any arbitrary author. 
We may go this route, but for the sake of this discussion I assume such a 
strategy is not viable. So, the remainder of this post assumes we need to 
keep such licensing state in the db.

After asking around, one suggestion was to keep the licensing state in 
the page_props table. This seems very reasonable and I would be 
interested in comments by this community on the idea. Of course, there 
has to be a way to get this state set, but it seems likely that could be 
achieved using an extension triggered when an article is edited.
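
For what it's worth, the read side of the page_props route is trivial; a 
sketch, with an invented property name:

// 'wp_import' is an invented page_props property name. The hard part is
// the write side: page_props is normally rebuilt from ParserOutput
// properties by LinksUpdate, so whatever sets the flag on edit has to
// survive that -- which is exactly the design work an extension would need.
function efGetWpImportSource( Title $title ) {
	$dbr = wfGetDB( DB_SLAVE );
	return $dbr->selectField(
		'page_props',
		'pp_value',
		array(
			'pp_page' => $title->getArticleID(),
			'pp_propname' => 'wp_import',
		),
		__METHOD__
	); // false if the page carries no import flag
}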

Since this post is already getting long, let me close by asking whether 
support for associating licensing information with articles might be 
useful to a large number of sites. If so, then perhaps it belongs in the 
core.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium Framework - test run configuration data

2010-09-07 Thread Dan Nessett
On Mon, 06 Sep 2010 23:15:06 -0400, Mark A. Hershberger wrote:

 Dan Nessett dness...@yahoo.com writes:
 
 Last Friday, mah ripped out the globals and put the configuration
 information into the execute method of RunSeleniumTests.php with the
 comment @todo Add an alternative where settings are read from an INI
 file. So, it seems we have dueling developers with contrary ideas
 about what is the best way to configure selenium framework tests.
 
 I'm opposed to increasing global variables and I think I understand
 Tim's concern about configuring via a PHP file.
 
 I plan to start work on reading the configuration from an INI file
 (*not* a PHP file).
 
 Either approach works. But, by going back and forth, it makes
 development of functionality for the Framework difficult.
 
 I agree.
 
 The idea I was pursuing is to encapsulate configuration in a Selenium
 object that (right now) RunSeleniumTests.php will set up.
 
 Platonides suggestion of a hook to provide configuration is also doable.
 
 Mark.

I am pretty much agreeable to any solution that remains stable. One thing 
that may not be obvious is there may be configuration data over and above 
that specified on the RunSeleniumTests command line.

For example, it is inconvenient to have to start up the selenium server 
before running RunSeleniumTests. (In the past I have frequently executed 
RunSeleniumTests only to get an error because I forgot to start the 
server). I supplied a patch to Markus recently that adds two options to 
the command line, one to start the server and the other to stop it (the 
patch supplies functionality only for *nix systems, which is probably why 
Markus has not committed it - there needs to be similar support for 
Windows). This functionality requires a directory path to the selenium 
server jar file. This is not something that a tester would normally 
supply on a command line. It is a system, not a test run parameter.

So, I hope the INI file processing you are working on will allow the 
specification of such data.
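
For example, something along these lines would keep test-run and system 
parameters together; parse_ini_file() is plain PHP, and all of the 
section/key names here are only suggestions:

// selenium_settings.ini might contain:
//
//   [testrun]
//   browser = "*firefox"
//   wikiUrl = "http://localhost/wiki/index.php"
//
//   [system]
//   seleniumServerPort = 4444
//   seleniumJarPath = "/usr/local/lib/selenium/selenium-server.jar"

$settings = parse_ini_file( 'selenium_settings.ini', true /* keep sections */ );

$port = isset( $settings['system']['seleniumServerPort'] )
	? (int)$settings['system']['seleniumServerPort'] : 4444;
$jar = isset( $settings['system']['seleniumJarPath'] )
	? $settings['system']['seleniumJarPath'] : null;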

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium Framework - test run configuration data

2010-09-07 Thread dan nessett
I would be happy to coordinate with you. Up to this point I have been working 
with Markus Glaser and that has gone pretty well. But, I would welcome more 
discussion on issues, architecture and implementation strategy for the 
Framework as well as making sure anything I do fits in with what others are 
doing.

Regards,

Dan

--- On Tue, 9/7/10, Mark A. Hershberger m...@everybody.org wrote:

 From: Mark A. Hershberger m...@everybody.org
 Subject: Re: Selenium Framework - test run configuration data
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Cc: Dan Nessett dness...@yahoo.com
 Date: Tuesday, September 7, 2010, 8:04 AM
 Dan Nessett dness...@yahoo.com
 writes:
 
  Either approach works. But, by going back and forth,
 it makes development
  of functionality for the Framework difficult.
 
 I should also point out that what I was planning to do was
 not hidden.
 I wrote about these changes in my weekly report the Monday
 before I
 committed them (http://bit.ly/cqAcqz) and pointed to the
 weekly report
 from my Ohloh, Twitter, Identi.ca and Facebook accounts.
 
 Granted, this is not the same as posting to the mailing
 list, and for
 that I apologize.
 
 Looking back in the archives on gmane, it looks like you
 are very
 interested in MW testing.  Since this is a large part
 of my focus
 currently as well, perhaps we should coordinate our work?
 
 Mark.
 
 -- 
 http://hexmode.com/
 
 Embrace Ignorance.  Just don't get too attached.
 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium Framework - test run configuration data

2010-09-07 Thread Dan Nessett
On Tue, 07 Sep 2010 17:39:19 +0200, Markus Glaser wrote:

 Hi all,
 
 I suggest we have one static class variable Selenium::$settings which is
 set up as an array. This array would be filled in some init function
 from whatever sources we decide on. Then, internally, the configuration
 mechanisms would not change anymore, and we could use the init method to
 fill the settings from globals (as is) or ini files (as Mark propses).
 Those who use the framework, however, would not have to rewrite their
 code.
 
 Regards,
 Markus
 
 Markus Glaser
 Social Web Technologies
 Head of Software Development
 Hallo Welt! - Medienwerkstatt GmbH
 __
 
 Untere Bachgasse 15
 93047 Regensburg
 
 Tel.   +49 (0) 941 - 56 95 94 92
 Fax.  +49 (0) 941 - 50 27 58 13
 
 
 www.hallowelt.biz
 gla...@hallowelt.biz
 
 Registered office: Regensburg
 Commercial register: HRB 10467
 VAT ID: DE 253050833
 Managing directors:
 Anja Ebersbach, Markus Glaser,
 Dr. Richard Heigl, Radovan Kubani
 
 
 -Original Message-
 From: wikitech-l-boun...@lists.wikimedia.org
 [mailto:wikitech-l-boun...@lists.wikimedia.org] On behalf of dan
 nessett Sent: Tuesday, September 7, 2010 17:20 To: Wikimedia
 developers; Mark A. Hershberger Subject: Re: [Wikitech-l] Selenium
 Framework - test run configuration data
 
 I would be happy to coordinate with you. Up to this point I have been
 working with Markus Glaser and that has gone pretty well. But, I would
 welcome more discussion on issues, architecture and implementation
 strategy for the Framework as well as making sure anything I do fits in
 with what others are doing.
 
 Regards,
 
 Dan
 
 --- On Tue, 9/7/10, Mark A. Hershberger m...@everybody.org wrote:
 
 From: Mark A. Hershberger m...@everybody.org Subject: Re: Selenium
 Framework - test run configuration data To: Wikimedia developers
 wikitech-l@lists.wikimedia.org Cc: Dan Nessett dness...@yahoo.com
 Date: Tuesday, September 7, 2010, 8:04 AM Dan Nessett
 dness...@yahoo.com
 writes:
 
  Either approach works. But, by going back and forth,
 it makes development
  of functionality for the Framework difficult.
 
 I should also point out that what I was planning to do was not hidden.
 I wrote about these changes in my weekly report the Monday before I
 committed them (http://bit.ly/cqAcqz) and pointed to the weekly report
 from my Ohloh, Twitter, Identi.ca and Facebook accounts.
 
 Granted, this is not the same as posting to the mailing list, and for
 that I apologize.
 
 Looking back in the archives on gmane, it looks like you are very
 interested in MW testing.  Since this is a large part of my focus
 currently as well, perhaps we should coordinate our work?
 
 Mark.
 
 --
 http://hexmode.com/
 
 Embrace Ignorance.  Just don't get too attached.
 
 
 
 
   
 ___ Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

This seems like a reasonable strategy to me.
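
For instance, a minimal sketch of that init step, assuming the existing 
Selenium class simply gains a static $settings array (key names and 
defaults are placeholders):

class Selenium {
	public static $settings = array();

	// Fill once, from whatever source we settle on; the rest of the
	// framework only ever reads Selenium::$settings.
	public static function init( array $from = array() ) {
		$defaults = array(
			'host'    => 'localhost',
			'port'    => 4444,
			'browser' => '*firefox',
		);
		self::$settings = $from + $defaults;
	}
}

// Today, from globals/LocalSettings:
Selenium::init( array( 'port' => 4444 ) );
// Later, from an INI file, without callers changing:
Selenium::init( parse_ini_file( 'selenium_settings.ini' ) );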

Dan


-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Testing Framework

2010-08-09 Thread Dan Nessett
On Sat, 07 Aug 2010 23:30:16 -0400, Mark A. Hershberger wrote:

 Dan Nessett dness...@yahoo.com writes:
 
 I don't think walking through all the extensions looking for test
 subdirectories and then running all tests therein is a good idea.
 First, in a large installation with many extensions, this takes time
 and delays the test execution.
 
 Globbing for extensions/*/tests/TestSettings.php doesn't take long at
 all.
 
 However I am looking at a way to test extensions independently of
 installation.
 
 This means I can't depend on hooks or global variables, so I need
 another way to find out if an extension has tests available.
 
 Making the developer specify the extension or core tests to run on the
 RunSeleniumTests command line is irritating (at least, it would
 irritate me)
 
 No doubt.  So why not allow per-user files to set this instead of using
 LocalSettings.php?
 
 Mark.

Testing uninstalled extensions may make sense for unit tests, but not for 
selenium tests. The latter exercise the code through a browser, so the 
extension must be installed for selenium testing.

I'm not sure what the advantages of per-user configuration files are. 
For unit tests the tester directly accesses the code and so has direct 
access to LocalSettings. For selenium testing, we originally had a 
configuration file called SeleniumLocalSettings.php, but that was 
abandoned in favor of putting the configuration information in 
DefaultSettings and LocalSettings.

As stated previously, selenium tests exercise MW code by accessing the 
wiki through a browser. I don't see how a 'per-user' configuration file 
would be integrated without introducing some serious security issues.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Testing Framework (was Selenium Framework - Question on coding conventions)

2010-08-06 Thread Dan Nessett
On Thu, 05 Aug 2010 18:47:58 -0400, Mark A. Hershberger wrote:

 Markus Glaser gla...@hallowelt.biz writes:
 
 1) Where are the tests located? I suggest for core to put them into
 maintenance/tests/selenium. That is where they are now. For extensions
 I propse a similar structure, that is extensiondir/tests/selenium.
 
 Sounds fine.
 
 In the same way, since maintenance/tests contains tests that should be
 run using PHPUnit, we can say that extensiondir/tests will contain
 tests that should be run using PHPUnit.
 
 Alternatively, we could use the word Selenium somewhere in there in
 order to be able to divide between unit and selenium tests.
 
 I think putting them in the selenium directory (or the “Se” directory)
 is sufficient.
 
 3) How does the framework know there are tests?
 
 Can I suggest that the framework can see that an extension has tests
 simply by the presence of the extensiondir/tests directory containing
 a Extension*TestSuite.php file?
 
 The extensiondir/tests/ExtensionTestSuite.php file should define a
 class using the name ExtensionTestSuite which has a static method
 suite().  See the PHPUnit documentation at http://bit.ly/b9L50r for how
 this is set up.
 
 This static suite() method should take care of letting the autoloader
 know about any test classes so the test classes are only available
 during testing.
 
 So, for your example using PagedTiffHandler, there would be the files:
 
 PagedTiffHandler/tests/PagedTiffHandlerTestSuite.php
 PagedTiffHandler/tests/PagedTiffHandlerUploadsTestSuite.php
 
 4) Which tests should be executed?
 
 By default all the test suites in extensiondir/tests should be run.
 
 It is should be possible to specify which particular test to run by
 using whatever command line arguments to the CLI.
 
 This seems better to me than defining a new global.  If some tests
 should only be run rarely, that information can be put in the TestSuite
 class for te extension.
 
 In this way, I think it is possible to remove all the $wgSelenium*
 variables from the DefaultSettings.php file.
 
 (I plan to do something similar with the $wgParserTest* variables as
 well — these sorts of things don't seem like they belong in Core.)
 
 Mark.

I don't think walking through all the extensions looking for test 
subdirectories and then running all tests therein is a good idea. First, 
in a large installation with many extensions, this takes time and delays 
the test execution. Second, if a developer is working on a particular 
extension or part of the core, it will be common to run the tests 
associated with that for regression purposes. Making the developer 
specify the extension or core tests to run on the RunSeleniumTests 
command line is irritating (at least, it would irritate me). Specifying 
the test suite(s) to be run in LocalSettings.php is a set and forget 
approach that allows the developer to get on with the work.

However, I do agree that the number of global variables associated with 
the selenium framework is getting large and has the potential of growing 
over time. One solution is to use a multi-dimensional associative array 
(much like $wgGroupPermissions). We could use a global variable 
$wgSelenium and move all selenium framework values into it. For example:

$wgSelenium['wiki']['host'] = 'localhost';
$wgSelenium['wiki']['wikiurl'] = false;
$wgSelenium['wiki']['loginname'] = 'Wikiuser';
$wgSelenium['wiki']['password'] = '';
$wgSelenium['server']['port'] = 4444;

etc.

The only global we may wish to keep separate is $wgEnableSelenium, since 
it specifies whether $wgSelenium is used.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Testing Framework (was Selenium Framework - Question on coding conventions)

2010-08-06 Thread Dan Nessett
On Fri, 06 Aug 2010 12:38:39 -0400, Aryeh Gregor wrote:

 On Thu, Aug 5, 2010 at 6:47 PM, Mark A. Hershberger m...@everybody.org
 wrote:
 Can I suggest that the framework can see that an extension has tests
 simply by the presence of the extensiondir/tests directory containing
 a Extension*TestSuite.php file?
 
 IMO, the way parser tests do it is smarter.  When you install the
 extension, it adds the location of the test files to $wgParserTestFiles.
  That way, only the tests associated with installed extensions will run.
  If you want to force only particular tests to run all the time, you can
 also modify the variable in LocalSettings.php as usual.

We are doing something similar. In the extension require() file, the test 
suite is added to $wgAutoloadClasses. Right now the entry in 
$wgSeleniumTestSuites is pushed in LocalSettings. However, we could 
establish the convention that it is pushed in the extension require() 
file as well. Then all extensions with test suites would automatically 
load them. To tailor this, the entries in $wgSeleniumTestSuites could be
removed in LocalSettings.
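
Something like the following in the extension's setup file would do it; the 
suite class name here is just an example:

// In e.g. PagedTiffHandler.php (the file included from LocalSettings.php):
$wgAutoloadClasses['PagedTiffHandlerSeleniumTestSuite'] =
	dirname( __FILE__ ) . '/selenium/PagedTiffHandlerSeleniumTestSuite.php';

$wgSeleniumTestSuites[] = 'PagedTiffHandlerSeleniumTestSuite';

// A site that does not want this suite run by default can still unset the
// entry from $wgSeleniumTestSuites in LocalSettings.php.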

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] revision of Nuke that works with postgres

2010-05-31 Thread Dan Nessett
The Nuke extension doesn't work with postgres (https://
bugzilla.wikimedia.org/show_bug.cgi?id=23600). Is there a revision that 
contains a version that does? Right now (for 1.13.2) the snapshot 
returned by the Nuke extension page is: r37906. This produces the error 
given in the bug ticket.

Regards,

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework- Firefox browsers compatible

2010-05-27 Thread Dan Nessett
On Wed, 26 May 2010 17:11:33 -0700, Michelle Knight wrote:

 Hi Dan,
 
 There is a list of browsers compatible with Selenium (See
 http://seleniumhq.org/about/platforms.html#browsers ). The page states
 that Selenium works with Firefox 2+ when a Linux OS is used (I think
 Ubuntu would fall under this category ).
 
  I am using Firefox 3.5.9 on Ubuntu 9.10 . I have been finishing another
 project (my grandfather visited me in Oregon from Ohio) and have not
 played with the at the Selenium Framework since May 14th. I will let you
 know if I see the error messages.
 
 Michelle Knight
 
 
 
 Message: 5
 Date: Tue, 18 May 2010 17:44:03 + (UTC) From: Dan Nessett
 dness...@yahoo.com Subject: Re: [Wikitech-l] Selenium testing
 framework To: wikitech-l@lists.wikimedia.org
 Message-ID: hsujl3$v7...@dough.gmane.org Content-Type: text/plain;
 charset=UTF-8
 
 On Tue, 18 May 2010 19:27:38 +0200, Markus Glaser wrote:
 
 Hi Dan,

 I had these error messages once when I used Firefox 3.6 for testing.
 Until recently, Selenium did not support this browser. Apparently now
 they do, but I did not have a chance to test this yet. So the solution
 for me was to point Selenium to a Firefox 3.5.

 Cheers,
 Markus
 
 My OS is Ubuntu 8.04. The version of Firefox is 3.0.19. Since Ubuntu
 automatically updates versions of its software, I assume this is the
 most up-to-date.
 
 Is there a list of browser versions compatible with selenium?

Thanks for the pointer to the list, Michelle. As it turned out, there was 
a bug in RunSeleniumTests that accessed global data before 
LocalSeleniumSettings was included. Markus has fixed this problem and is 
testing it before checking it in to the repository. Until this fix is 
available, you should put all your local configuration data in 
RunSeleniumTests.

Regards,

Dan

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-24 Thread Dan Nessett
On Sun, 02 May 2010 02:20:06 +0200, Markus Glaser wrote:

 Hi everybody,
 
 at the Wkimedia Developers' Workshop, I introduced a Selenium testing
 framework for MediaWiki. Since it has now been promoted to
 maintenance/tests,  I have provided some initial information it on
 http://www.mediawiki.org/wiki/SeleniumFramework . I would be very happy
 about comments and ideas for further improvement. Also, if you intend to
 use the framework for your tests, please let me know. I will be happy to
 assist.
 
 Regards,
 Markus Glaser
 
 __
 
 Social Web Technologies
 Head of Software Development
 Hallo Welt! - Medienwerkstatt GmbH
 
 __
 
 Untere Bachgasse 15
 93047 Regensburg
 
 Tel.  +49 (0) 941 - 56 95 94 - 92
 
 www.hallowelt.bizhttp://www.hallowelt.biz/
 gla...@hallowelt.bizmailto:gla...@hallowelt.biz
 
 Sitz: Regensburg
 Handelsregister: HRB 10467
 E.USt.Nr.: DE 253050833
 Geschäftsführer:
 Anja Ebersbach, Markus Glaser,
 Dr. Richard Heigl, Radovan Kubani
 
 __

Hi Markus,

Despite my initial problems getting the Selenium Framework to run, I 
think it is a great start. Now that I have the PagedTiffHandler working, 
here is some feedback on the current framework:

+ When I svn up ../tests (or any ancestor directory), the local changes I 
make to RunSeleniumTests cause a local conflict error. Eventually, many 
of the configuration changes I made should appear in 
LocalSeleniumSettings, but it isn't clear that is possible for all of 
them. For example, I change the commented out set_include_path to include 
my local PHP/PEAR directory. Can this be set in LocalSeleniumSettings? 
Another difference is the include_once() for each test suite. Is it 
possible to move these into LocalSeleniumSettings?

+ It appears there is no way to tell RunSeleniumTests to use a selenium 
server port other than the default. It would be useful to have a -port parameter 
on RunSeleniumServer for this. For example, if there are multiple 
developers working on the same machine, they probably need to use 
different selenium servers differentiated by different port numbers.
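
To illustrate, a minimal sketch of what the port handling could look like in
the test runner; the --port spelling and the $wgSeleniumServerPort global are
hypothetical:

// Hypothetical sketch: let each developer pick a Selenium server port.
$wgSeleniumServerPort = 4444; // Selenium RC's usual default
foreach ( array_slice( $argv, 1 ) as $arg ) {
    if ( preg_match( '/^--port=(\d+)$/', $arg, $matches ) ) {
        $wgSeleniumServerPort = intval( $matches[1] );
    }
}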

I don't mind working on both of these issues, but since you are the 
original architect of the framework, it is probably best for you to 
comment on them first and perhaps suggest what you consider to be the 
best approach to their resolution.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Selenium testing framework

2010-05-21 Thread Dan Nessett
On Mon, 17 May 2010 20:16:35 +, Dan Nessett wrote:

 On Mon, 17 May 2010 19:11:21 +, Dan Nessett wrote:
 
 During the meeting last Friday, someone (I'm sorry, I don't remember who)
 mentioned he had created a test that runs with the currently checked in
 selenium code. Is that test code available somewhere (it doesn't appear
 to be in the current revision)?
 
 I found the answer. On the SeleniumFramework page is a pointer to a
 worked example (see: http://www.mediawiki.org/wiki/
 SeleniumFramework#Working_example). The instructions for getting the
 tests to work aren't totally transparent. The test file you include is:
 
 ../phase3/extensions/PagedTiffHandler/selenium/
PagedTiffHandler_tests.php
 
 (Not: ../phase3/extensions/PagedTiffHandler/tests/
 PagedTiffHandlerTest.php)
 
 Also, the instructions in SOURCES.txt specify getting all of the test
 images from:
 
 http://www.libtiff.org/images.html
 
 But, when accessing the URL supplied on that page for the images (ftp://
 ftp.remotesensing.org/pub/libtiff/pics-3.6.1.tar.gz) a FILE NOT FOUND
 error is returned. There is a new version of the pics file in ..libtiff,
 but it does not contain the correct images. The correct URL is: ftp://
 ftp.remotesensing.org/pub/libtiff/old/pics-3.6.1.tar.gz. However, this
 tar file does not include the images required by the PagedTiffHandler
 tests.

I thought I would apprise readers of this thread that I have the 
PagedTiffHandler example working. The problem turned out to be a bug in 
the Framework that meant LocalSeleniumSettings wasn't being read at the 
correct point in the startup sequence. According to Markus, he has fixed 
this bug and is now testing it. I presume he will let us know when the 
fix is committed.

If you want to run the example before the fix is available, just make all 
your configuration settings to RunSeleniumTests.php.

Dan

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-18 Thread Dan Nessett
On Tue, 18 May 2010 01:04:01 +0200, Markus Glaser wrote:

 Hi Dan,
 
 the test fails at checking the prerequisites. It tries to load the image
 page and looks for a specific div element which is not present if the
 image was not uploaded correctly (id=filetoc). This might have changed
 across the versions of MediaWiki.
 
 Did you install the PagedTiffHandler extension? It depends on
 ImageMagick, so it might have rejected the upload. Although then it
 should have produced an error message ;) So the other question is, which
 MediaWiki version do you run the tests on?
 
 Regards,
 Markus

Hi Markus,

Some further information. I originally uploaded Multipage.tiff before the 
extension was installed. I thought this might be the problem, so I 
uploaded it again after the extension was available. However, this did 
not solve the problem.

Also, I am getting the following error from the selenium-server:

08:41:10.344 INFO - Got result: ERROR Server Exception: sessionId led to 
start new browser session: 
org.openqa.selenium.server.browserlaunchers.InvalidBrowserExecutableException: 
The specified path to the browser executable is invalid. doesn't exist; 
perhaps this session was already stopped? on session led to start new 
browser session: 
org.openqa.selenium.server.browserlaunchers.InvalidBrowserExecutableException: 
The specified path to the browser executable is invalid.
08:41:10.347 INFO - Command request: testComplete[, ] on session led to 
start new browser session: 
org.openqa.selenium.server.browserlaunchers.InvalidBrowserExecutableException: 
The specified path to the browser executable is invalid.
08:41:10.347 INFO - Got result: OK on session led to start new browser 
session: 
org.openqa.selenium.server.browserlaunchers.InvalidBrowserExecutableException: 
The specified path to the browser executable is invalid.

I am using a LocalSeleniumSettings.php with the following parameters:

// Hostname of selenium server
$wgSeleniumTestsSeleniumHost = 'http://localhost';
 
// URL of the wiki to be tested.
$wgSeleniumTestsWikiUrl = 'http://localhost';
 
// Wiki login. Used by Selenium to log onto the wiki
$wgSeleniumTestsWikiUser  = 'Wikiadmin';
$wgSeleniumTestsWikiPassword  = 'Wikiadminpw';

// Use the *chrome handler in order to be able to test file uploads
$wgSeleniumTestsBrowsers['firefox']   = '*firefox /usr/bin/firefox';
$wgSeleniumTestsBrowsers['ff-chrome']   = '*chrome /usr/bin/firefox';
 
// Actually, use this browser
$wgSeleniumTestsUseBrowser = 'ff-chrome';

Regards,

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-18 Thread Dan Nessett
On Tue, 18 May 2010 19:27:38 +0200, Markus Glaser wrote:

 Hi Dan,
 
 I had these error messages once when I used Firefox 3.6 for testing.
 Until recently, Selenium did not support this browser. Apparently now
 they do, but I did not have a chance to test this yet. So the solution
 for me was to point Selenium to a Firefox 3.5.
 
 Cheers,
 Markus

My OS is Ubuntu 8.04. The version of Firefox is 3.0.19. Since Ubuntu 
automatically updates versions of its software, I assume this is the most 
up-to-date.

Is there a list of browser versions compatible with selenium?



-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-18 Thread Dan Nessett
On Tue, 18 May 2010 19:27:38 +0200, Markus Glaser wrote:

 Hi Dan,
 
 I had these error messages once when I used Firefox 3.6 for testing.
 Until recently, Selenium did not support this browser. Apparently now
 they do, but I did not have a chance to test this yet. So the solution
 for me was to point Selenium to a Firefox 3.5.
 
 Cheers,
 Markus

Hi Markus,

I thought it might be best to move this discussion off-line for a bit 
until we get the problems sorted out and then post the solution(s) to 
wikitech-l. This thread is getting fairly long and is getting into fairly 
complex issues.

I tried emailing you at the address shown in your posts, but the email 
was returned as undeliverable. My email address is dness...@yahoo.com. If 
you think taking the issue off-line while we sort it out is a good thing 
to do, then email me from some address that I can use and I will update 
you on the status of my attempts to get PagedTiffHandler_tests.php to 
work. As a teaser, it appears there is a problem with the sequence of 
processing vis-a-vis LocalSettings and LocalSeleniumSettings

Cheers,

Dan

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
During the meeting last Friday, someone (I'm sorry, I don't remember who) 
mentioned he had created a test that runs with the currently checked in 
selenium code. Is that test code available somewhere (it doesn't appear 
to be in the current revision)?

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Mon, 17 May 2010 19:11:21 +, Dan Nessett wrote:

 During the meeting last Friday, someone (I'm sorry, I don't remember who)
 mentioned he had created a test that runs with the currently checked in
 selenium code. Is that test code available somewhere (it doesn't appear
 to be in the current revision)?

I found the answer. On the SeleniumFramework page is a pointer to a 
worked example (see: http://www.mediawiki.org/wiki/
SeleniumFramework#Working_example). The instructions for getting the 
tests to work aren't totally transparent. The test file you include is:

../phase3/extensions/PagedTiffHandler/selenium/PagedTiffHandler_tests.php

(Not: ../phase3/extensions/PagedTiffHandler/tests/
PagedTiffHandlerTest.php)

Also, the instructions in SOURCES.txt specify getting all of the test 
images from:

http://www.libtiff.org/images.html

But, when accessing the URL supplied on that page for the images (ftp://
ftp.remotesensing.org/pub/libtiff/pics-3.6.1.tar.gz) a FILE NOT FOUND 
error is returned. There is a new version of the pics file in ..libtiff, 
but it does not contain the correct images. The correct URL is: ftp://
ftp.remotesensing.org/pub/libtiff/old/pics-3.6.1.tar.gz. However, this 
tar file does not include the images required by the PagedTiffHandler 
tests.

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Mon, 17 May 2010 22:54:38 +0200, Markus Glaser wrote:

 Hi Dan,
 
 will provide a working example with no need to include any extensions in
 the course of this week. In the meantime, you might want to make sure
 that $wgSeleniumTiffTestUploads = false;
 in PagedTiffHandler_tests.php. Then, the test will not try to upload any
 of the pictures from libtiff. In order for the tests to succeed, you
 need to upload Multipage.tiff into the wiki. If there are any images
 missing, please let me know and I will send them to you. Actually, I
 didn't want to check in a third-party archive into the svn because of
 copyright considerations. The images seem to be public domain, but to
 me, it was not totally clear, whether they are. Are there any policies
 regarding this case? I assume, when there are more tests, especially
 with file uploads, the issue might arise again.
 
 Cheers,
 Markus

Thanks Markus,

$wgSeleniumTiffTestUploads does indeed equal false. I was failing on the 
upload of Multipage.tiff until I added 'tiff' to $wgFileExtensions. Now I 
am failing because allChecksOk is false. It appears this happens on line 
32 of PagedTiffHandler_tests.php on the statement:

if ($source != 'filetoc') $this->allChecksOk = false;

I'm not an image expert, so I don't know why this is happening.

Regards,

Dan

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-17 Thread Dan Nessett
On Tue, 18 May 2010 01:04:01 +0200, Markus Glaser wrote:

 Hi Dan,
 
 the test fails at checking the prerequisites. It tries to load the image
 page and looks for a specific div element which is not present if the
 image was not uploaded correctly (id=filetoc). This might have changed
 across the versions of MediaWiki.
 
 Did you install the PagedTiffHandler extension? It depends on
 ImageMagick, so it might have rejected the upload. Although then it
 should have produced an error message ;) So the other question is, which
 MediaWiki version do you run the tests on?
 
 Regards,
 Markus

Hi Markus,

I am running on the latest version in trunk (1.17alpha r66296). There was 
no error when I uploaded the image. All of the extended details seem 
correct. I installed the extension. I don't have either exiv2 or vips 
installed, but according to the installation instructions these are 
optional.

Here are the configuration values I used:

# PagedTiffHandler extension
require_once("$IP/extensions/PagedTiffHandler/PagedTiffHandler.php");

$wgTiffIdentifyRejectMessages = array(
'/^identify: Compression algorithm does not support random 
access/',
'/^identify: Old-style LZW codes, convert file/',
'/^identify: Sorry, requested compression method is not 
configured/',
'/^identify: ThunderDecode: Not enough data at scanline/',
'/^identify: .+?: Read error on strip/',
'/^identify: .+?: Can not read TIFF directory/',
'/^identify: Not a TIFF/',
);
$wgTiffIdentifyBypassMessages = array(
'/^identify: .*TIFFReadDirectory/',
'/^identify: .+?: unknown field with tag .+? encountered/'
);

$wgImageMagickIdentifyCommand = '/usr/bin/identify';
$wgTiffUseExiv = false;
$wgTiffUseVips = false;

// Maximum number of embedded files in tiff image
$wgTiffMaxEmbedFiles = 1;
// Maximum resolution of embedded images (product of width x height 
pixels)
$wgTiffMaxEmbedFileResolution = 2560; // max. Resolution 1600 x 1600 
pixels
// Maximum size of meta data
$wgTiffMaxMetaSize = 67108864; // 64 MB

// TTL of Cacheentries for Errors
$wgTiffErrorCacheTTL = 84600;

Is there some way to use the wiki to look for the file property that is 
causing the problem?

Regards,

Dan

-- 
-- Dan Nessett


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Selenium testing framework

2010-05-15 Thread Dan Nessett
One of the URLs supplied by Ryan during the recent phone conference 
doesn't work. Specifically: http://
grid.tesla.usability.wikimedia.org:. I get the error: HTTP ERROR: 404
NOT_FOUND RequestURI=/


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-24 Thread dan nessett
--- On Sun, 8/23/09, Aryeh Gregor simetrical+wikil...@gmail.com wrote:

 If they can run commands on the command line, then they can
 use
 environment variables.  If they can't, then your
 suggestion doesn't
 help.
 
  If there are administrators who can execute command
 lines, but cannot set environmental variables (e.g., they
 are confined to use a special shell)
 
 There aren't.  That would make no sense.

Thanks for clarifying the situation. Given this information I suggest changing 
all code in command line utilities of the form:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
$IP = dirname(__FILE__).'/../..';
}

to:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
echo "Error. The environmental variable MW_INSTALL_PATH must be set to the root of the MW distribution. Exiting.\n";
die();
}

This would eliminate file position dependent code from the command line 
utilities, making them easier to maintain (i.e., they can be moved in the 
distribution without breaking them).

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-24 Thread dan nessett
--- On Mon, 8/24/09, Chad innocentkil...@gmail.com wrote:

 Why skip trying to find the location?
 If MW_INSTALL_PATH
 is already missing, what have we got to lose from trying
 to guess the location? The vast majority of people don't
 screw with the default structure, so it should be just
 fine.

That's a reasonable question, stating in another way the useful maxim, "if it 
ain't broke, don't fix it." The problem is I think it's broke.

Here is my take on the pros/cons of leaving things unchanged:

Pros:

* Some administrators are used to simply typing the line "php utility.php". 
Making them type:

MW_INSTALL_PATH=/var/wiki/mediawiki php utility.php

would be inconvenient.

In answer to this, for the MW installations running on unix, it is pretty 
simple to alias "MW_INSTALL_PATH=/var/wiki/mediawiki php" and put the 
definition into .bash_profile (or the appropriate shell initialization script). 
This is a one time effort and so the change isn't as onerous as it might seem. 
I assume there is a similar tactic available for windows systems.

Cons:

* The use of file position dependent code is a problem during development and 
much less of a problem during installation and production (as you suggest). 
Right now there are ~400 sub-directories in the extensions directory. It seems 
to me a reorganization of the extensions directory would help clarify the 
relationship between individual extensions and the core. For example, having 
two subdirectories, one for cli utilities and another for hook based extensions 
would clarify the role each extension plays. However, currently there are 29 
extensions where $IP is set using the relative position of the file in the MW 
directory structure (a couple of other extensions set $IP based on 
MW_INSTALL_PATH). Reorganizing the directory structure has the potential of 
breaking them.

* CLI utilities are moved around for reasons other than a reorganization of the 
extensions directory. For example, as I understand it, DumpHTML was moved from 
maintenance/ to extensions/. dumpHTML.php sets $IP based on its relative 
position in the distribution tree. It was a happy coincidence that when it was 
moved, its relative position didn't change. However, it is unreasonable to 
think such reclassifications will always be as fortunate.

Since the cons outweigh the pros, I remain convinced that the change I 
suggested (using die()) improves the code.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-24 Thread dan nessett
--- On Mon, 8/24/09, Alex mrzmanw...@gmail.com wrote:

 I don't
 believe anyone
 except you has actually proposed restructuring the
 extensions directory.

Perhaps not. But, I don't see why that is relevant. I am making arguments why 
the extensions directory should be restructured. I may convince no one, but I 
don't think I should presume that.
 
 A) There
 aren't that many extensions that add command line utilities
 (several
 extensions also have scripts and hook based extensions so
 wouldn't
 neatly fit into such categories)

Here are the files in /extensions/ that reference /maintenance/commandLine.inc. 
There are 65 of them (line number of the reference at the end). I don't know 
which of these are commonly used and therefore included in installation 
extension/ directories, but I assume all of them are used by at least a small 
number of sites (otherwise, why include them in the extensions directory at 
all?)

/extensions/AbuseFilter/install.php:8
/extensions/AbuseFilter/phpTest.php:8
/extensions/AdvancedSearch/populateCategorySearch.php:9
/extensions/AntiSpoof/batchAntiSpoof.php:6
/extensions/AntiSpoof/generateEquivset.php:4
/extensions/Babel/txt2cdb.php:9
/extensions/BoardVote/voterList.php:6
/extensions/CentralAuth/migratePass0.php:8
/extensions/CentralAuth/migratePass1.php:8
/extensions/CentralAuth/migrateStewards.php:3
/extensions/CentralNotice/rebuildLocalTemplates.php:3
/extensions/CentralNotice/rebuildTemplates.php:3
/extensions/CheckUser/importLog.php:4
/extensions/CheckUser/install.php:8
/extensions/cldr/rebuild.php:11
/extensions/CodeReview/svnImport.php:6
/extensions/CommunityVoice/CLI/Initialize.php:4
/extensions/Configure/findSettings.php:18
/extensions/Configure/manage.php:19
/extensions/Configure/migrateFiles.php:17
/extensions/Configure/migrateToDB.php:16
/extensions/Configure/writePHP.php:18
/extensions/DataCenter/CLI/Import.php:4
/extensions/DataCenter/CLI/Initialize.php:4
/extensions/DumpHTML/dumpHTML.php:61
/extensions/DumpHTML/wm-scripts/old/filterNamespaces.php:4
/extensions/DumpHTML/wm-scripts/queueController.php:6
/extensions/FlaggedRevs/maintenance/clearCachedText.php:13
/extensions/FlaggedRevs/maintenance/reviewAllPages.php:8
/extensions/FlaggedRevs/maintenance/updateAutoPromote.php:8
/extensions/FlaggedRevs/maintenance/updateLinks.php:10
/extensions/FlaggedRevs/maintenance/updateQueryCache.php:8
/extensions/FlaggedRevs/maintenance/updateStats.php:8
/extensions/LiquidThreads/compat/generateCompatibilityLocalisation.php:6
/extensions/LiquidThreads/import/import-parsed-discussions.php:4
/extensions/LiquidThreads/migrateDatabase.php:7
/extensions/LocalisationUpdate/update.php:7
/extensions/MetavidWiki/maintenance/download_from_archive_org.php:4
/extensions/MetavidWiki/maintenance/maintenance_util.inc.php:15
/extensions/MetavidWiki/maintenance/metavid2mvWiki.inc.php:16
/extensions/MetavidWiki/maintenance/metavid_gov_templates.php:2
/extensions/MetavidWiki/maintenance/mv_oneTime_fixes.php:2
/extensions/MetavidWiki/maintenance/mv_update.php:6
/extensions/MetavidWiki/maintenance/ogg_thumb_insert.php:15
/extensions/MetavidWiki/maintenance/scrape_and_insert.inc.php:12
/extensions/MetavidWiki/maintenance/transcode_to_flv.php:13
/extensions/MetavidWiki/maintenance/video_ocr_thumb_insert.php:15
/extensions/OAI/oaiUpdate.php:17
/extensions/ParserFunctions/testExpr.php:4
/extensions/SecurePoll/voterList.php:11
/extensions/SemanticMediaWiki/maintenance/SMW_conceptCache.php:18
/extensions/SemanticMediaWiki/maintenance/SMW_dumpRDF.php:34
/extensions/SemanticMediaWiki/maintenance/SMW_refreshData.php:41
/extensions/SemanticMediaWiki/maintenance/SMW_setup.php:46
/extensions/SemanticMediaWiki/maintenance/SMW_unifyProperties.php:27
/extensions/SemanticResultFormats/Ploticus/SRF_Ploticus_cleanCache.php:24
/extensions/SemanticTasks/ST_CheckForReminders.php:6
/extensions/SpamBlacklist/cleanup.php:9
/extensions/SwarmExport/swarmExport.php:23
/extensions/TitleKey/rebuildTitleKeys.php:3
/extensions/TorBlock/loadExitNodes.php:7
/extensions/TrustedXFF/generate.php:8
/extensions/UsabilityInitiative/PrefStats/populatePrefStats.php:9
/extensions/WikiAtHome/internalCmdLineEncoder.php:6
/extensions/WikiTrust/sql/create_db.php:74

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-23 Thread dan nessett
--- On Sun, 8/23/09, Andrew Garrett agarr...@wikimedia.org wrote:

 $ MW_INSTALL_PATH=/var/wiki/mediawiki php/maintenance/update.php

I don't understand the point you are making. If an MW administrator can set 
environmental variables, then, of course, what you suggests works. However, 
Brion mentions in his Tues, Aug 11, 10:09 email that not every MW installation 
admin can set environmental variables and Aryeh states in his Tues, Aug. 11, 
10:09am message that some MW administrators only have FTP access to the 
installations they manage. So, as I understand it some administrators cannot 
use the tactic you describe.

An important issue is whether these admins have access to command line 
utilities at all. If not, then the use of file position dependent code in 
command line utilities can be eliminated by substituting:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) die();

for (taken from dumpHTML.php):

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
$IP = dirname(__FILE__).'/../..';

This works if only admins who can set environmental variables can execute MW 
command line utilities.

If there are administrators who can execute command lines, but cannot set 
environmental variables (e.g., they are confined to use a special shell), then 
what I suggested in the previous email eliminates file position dependency. 
That is, the command line would be:

php -d include_path="<include_path in php.ini>:<directory to MWInit.php>" utility.php

If an admin can execute php utility.php, he should be able to execute the 
prior command.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-23 Thread dan nessett
--- On Sun, 8/23/09, dan nessett dness...@yahoo.com wrote:

In my last email, I quoted Andrew Garret:

 $ MW_INSTALL_PATH=/var/wiki/mediawiki php/maintenance/update.php

This was incorrect. I fumbled some of the editing in my reply. What he proposed 
was:

 $ MW_INSTALL_PATH=/var/wiki/mediawiki php maintenance/update.php

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] CPRT feasibility

2009-08-20 Thread dan nessett
I am looking into the feasibility of writing a comprehensive parser regression 
test (CPRT). Before writing code, I thought I would try to get some idea of how 
well such a tool would perform and what gotchas might pop up. An easy first 
step is to run dump_HTML and capture some data and statistics.

I tried to run the version of dumpHTML in r54724, but it failed. So, I went 
back to 1.14 and ran that version against a small personal wiki database I 
have. I did this to get an idea of what structures dump_HTML produces and also 
to get some performance data with which to do projections of runtime/resource 
usage.

I ran dumpHTML twice using the same MW version and same database. I then diff'd 
the two directories produced. One would expect no differences, but that 
expectation is wrong. I got a bunch of diffs of the following form (I have put 
a newline between the two file names to shorten the line length):

diff -r 
HTML_Dump/articles/d/n/e/User~Dnessett_Bref_Examples_Example1_Chapter_1_4083.html
 
HTML_Dump2/articles/d/n/e/User~Dnessett_Bref_Examples_Example1_Chapter_1_4083.html
77,78c77,78
< Post-expand include size: 16145/2097152 bytes
< Template argument size: 12139/2097152 bytes
---
> Post-expand include size: 16235/2097152 bytes
> Template argument size: 12151/2097152 bytes

I looked at one of the html files to see where these differences appear. They 
occur in an html comment:

<!-- 
NewPP limit report
Preprocessor node count: 1891/100
Post-expand include size: 16145/2097152 bytes
Template argument size: 12139/2097152 bytes
Expensive parser function count: 0/100
-->

Does anyone have an idea of what this is for? Is there any way to configure MW 
so it isn't produced?

I will post some performance data later.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] CPRT feasibility

2009-08-20 Thread dan nessett
--- On Thu, 8/20/09, Andrew Garrett agarr...@wikimedia.org wrote:

 As the title implies, it is a performance limit report. You
 can remove  
 it by changing the parser options passed to the parser.
 Look at the  
 ParserOptions and Parser classes.

Thanks. It appears dumpHTML has no command option to turn off this report (the 
parser option is mEnableLimitReport).
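
If such an option were added, the actual change would presumably be little
more than a call on the ParserOptions object dumpHTML hands to the parser; a
sketch only, with the variable name invented:

// Sketch: suppress the NewPP limit report comment so two dumps of the
// same page compare byte-for-byte.
$parserOptions = new ParserOptions();
$parserOptions->enableLimitReport( false );
// ... then pass $parserOptions to the parser as dumpHTML already does.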

A question to the developer community: Is it better to change dumpHTML to 
accept a new option (to turn off Limit Reports) or copy dumpHTML into a new 
CPRT extension and change it? I strongly feel that having two extensions with 
essentially the same functionality is bad practice. On the other hand, changing 
dumpHTML means it becomes dual purposed, which has the potential of making it 
big and ugly. One compromise position is to attempt to factor dumpHTML so that 
a core provides common functionality to two different upper layers. However, I 
don't know if that is acceptable practice for extensions.

A short term fix is to pipe the output of dumpHTML through a filter that 
removes the Limit Report. That would allow developers to use dumpHTML (as a 
CPRT) fairly quickly to find and fix the known-to-fail parser bugs. The 
downside to this is it may significantly degrade performance.
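
For what it's worth, the filter itself is trivial; a sketch that assumes the
report always appears as the HTML comment quoted earlier (the file name below
is just an example):

// Strip the NewPP limit report comment from a dumped page before diffing.
function stripLimitReport( $html ) {
    return preg_replace( '/<!--\s*NewPP limit report.*?-->/s', '', $html );
}

$file = 'Example.html'; // illustrative path
file_put_contents( $file, stripLimitReport( file_get_contents( $file ) ) );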

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-15 Thread dan nessett
--- On Fri, 8/14/09, Tim Starling tstarl...@wikimedia.org wrote:

 And please, spare us from your rant about how terrible this
 is. It's
 not PHP's fault that you don't know anything about it.

I'm sorry my questions make you angry. I don't recall ranting about PHP. 
Actually, I kind of like it. Lack of thread safety is an implementation problem 
not a problem with the language.

But, let's not dwell on your rant about my stupidity. Let's do something 
positive. You are an expert on the MW software and presumably PHP, Apache and 
MySQL. If you find it ridiculous that a newbie is driving the discussion about 
MW QA (I certainly do), pick up the ball and run with it. How would you fix the 
parser so all disabled tests in parserTests run? How would you build a test 
harness so developers can write unit tests for their bug fixes, feature 
additions and extensions? How would you integrate these unit tests into a good 
set of regression tests? How would you divide up the work so a set of 
developers can complete it in a reasonable amount of time? How do you propose 
achieving consensus on all of this?

On the other hand, maybe you would rather code than think strategically. Fine. 
Commit yourself to fixing the parser so all of the disabled tests run and also 
all or most of the pages on Wikipedia do not break and I will shut up about the 
CPRT. Commit yourself to creating a test harness that other developers can use 
to write unit tests and I will gladly stop writing emails about it. Commit 
yourself to develop the software the organizes the unit tests into a product 
regression test that developers can easily run and I will no longer bother you 
about MW QA.

My objective is a MW regression test suite that provides evidence that any 
extensions I write do not break the product. Once that objective is achieved, I 
will no longer infect your ears with dumb questions.

Dan 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] identifier collisions

2009-08-14 Thread dan nessett
One of the first problems to solve in developing the proposed CPRT is how to 
call a function with the same name in two different MW distributions. I can 
think of 3 ways: 1) use the Namespace facility of PHP 5.3, 2) use threads, or 
3) use separate processes and IPC. Since MAMP supports none of these, I am off 
building an AMP installation from scratch.
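
To make option 3 concrete, here is a rough sketch of the separate-process
approach; render-page.php is an invented helper that would print a page's
HTML, and the install paths are only examples:

<?php
// Each MW tree renders the page in its own PHP process, so identically
// named functions in the two code bases never collide.
function renderVia( $mwRoot, $title ) {
    $cmd = 'php ' . escapeshellarg( "$mwRoot/maintenance/render-page.php" )
         . ' ' . escapeshellarg( $title );
    return shell_exec( $cmd );
}

$base    = renderVia( '/var/www/mw-base', 'Main_Page' );
$current = renderVia( '/var/www/mw-current', 'Main_Page' );
echo ( $base === $current ) ? "match\n" : "DIFFER\n";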

Some questions:

* Are there other ways to solve the identifier collision problem?

* Are some of the options I mention unsuitable for a MW CPRT, e.g., currently 
MW only assumes PHP 5.0 and requiring 5.3 may unacceptably constrain the user 
base.

* Is MW thread safe?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-14 Thread dan nessett
--- On Fri, 8/14/09, Dmitriy Sintsov ques...@rambler.ru wrote:

 I remember some time ago I was strongly discouraged to
 compile and run 
 PHP threaded MPM for apache because some functions or
 libraries of PHP 
 itself were not thread safe.

OK, this and Chad's comment suggest the option is multi-process/IPC. One more 
question:

* Can we assume PHP has PCNTL support or will the test require startup from a 
shell script?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
I'm starting a new thread because I noticed my news reader has glued together 
messages with the titles "A potential land mine" and "MW test infrastructure 
architecture", which may confuse someone coming into the discussion late. Also, 
the previous thread has branched into several topics and I want to concentrate 
on only one, specifically what can we assume about the system environment for a 
test infrastructure? These assumptions have direct impact on what test harness 
we use. Let me start by stating what I think can be assumed. Then people can 
tell me I am full of beans, add to the assumptions, subtract from them, etc.

The first thing I would assume is that a development system is less constrained 
than a production system in what can and what cannot be installed. For example, 
people shot down my proposal to automatically discover the MW root directory 
because some production systems have administrators without root access, 
without the ability to load code into the PEAR directory, etc. Fair enough 
(although minimizing the number of places where $IP is computed is still 
important). However, if you are doing MW development, then I think this 
assumption is too stringent. You need to run the tests in /tests/PHPUnitTests, 
which in at least one case requires the use of $wgDBadminuser and 
$wgDBadminpassword, something a non-privileged user would not be allowed to do.

If a developer has more system privileges than a production admin, to what 
extent? Can we assume he has root access? If not, can we assume he can get 
someone who has to do things like install PHPUnit? Can we assume the 
availability of PERL or should we only assume PHP? Can we assume *AMP (e.g., 
LAMP, WAMP, MAMP, XAMP)? Can we assume PEAR? Can the developer install into 
PEAR?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Chad innocentkil...@gmail.com wrote:

 
 Tests should run in a vanilla install, with minimal
 dependency on
 external stuff. PHPUnit
 (or whatever framework we use) would be considered an
 acceptable dependency for
 test suites. If PHPUnit isn't available (ie: already
 installed and in
 the include path), then
 we should bail out nicely.
 
 In general, external dependencies should be used as
 seamlessly as
 possible, with minimal
 involvement from the end-user. A good example is
 wikidiff2...we load
 it if it exists, we fail
 back to PHP diff mode if we can't use it.

OK, graceful backout if an external dependency fails and minimal dependency on 
external stuff. So far we have two categories of proposal for test 
infrastructure: 1) build it all ourselves, and 2) use some external stuff. How 
do we decide which to do?
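
As an aside, the "bail out nicely" behaviour Chad describes amounts to a guard
like the following at the top of a test runner; a sketch assuming PHPUnit
3.x's usual entry file:

// Degrade gracefully when PHPUnit is not in include_path instead of
// dying with a fatal "class not found" error.
if ( !@include_once( 'PHPUnit/Framework.php' ) ) {
    echo "PHPUnit is not installed or not in include_path; skipping tests.\n";
    exit( 0 );
}
// From here on PHPUnit_Framework_TestCase is available.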

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Problem with phpunit --skeleton

2009-08-12 Thread dan nessett
I have been playing around with phpunit, in particular its facility for 
generating tests from existing PHP code. You do this by processing a suitably 
annotated (using /* @assert ... */ comment lines) version of the file with 
phpunit --skeleton. Unfortunately, the --skeleton option assumes the file 
contains a class with the same name as the file.

For example, I tried annotating Hook.php and then processing it with phpunit 
--skeleton. It didn't work. phpunit reported:

Could not find class Hook in .../phase3/tests/Hook.php

(where I have replaced my path to MW with ...).

Since there are many MW files that do not contain classes of the same name as 
the file (or even classes at all), the --skeleton is probably not very useful 
for MW phpunit test generation.

You can still use phpunit by inserting the appropriate test code manually, but 
the objective of automatically generating tests with the same convenience as 
adding lines to parserTests.txt isn't available.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Brion Vibber br...@wikimedia.org wrote:

 
 The suggestions were for explicit manual configuration, not
 
 autodiscovery. Autodiscovery means *not* having to set
 anything. :)

I am insane to keep this going, but the proposal I made did not require doing 
anything manually (other than running the install script, which you have to do 
anyway). The install script knows (or can find out) where the MW root is 
located. It could then either: 1) rewrite php.ini to concatenate the location 
of MWInit.php at the end of include_path, or 2) plop MWInit.php into a 
directory already searched by PHP for includes/requires (e.g., the PEAR) 
directory.

I gave up on the proposal when people pointed out that MW admins may not have 
the privileges that would allow the install script to do either.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Roan Kattouw roan.katt...@gmail.com wrote:

 On shared hosting, both are impossible. MediaWiki currently
 works with
 minimal write access requirements (only the config/
 directory for the
 installer and the images/ directory if you want uploads),
 and we'd
 like to keep it that way for people who are condemned to
 shared
 hosting.

Which is why I wrote in the message that is the subject of your reply:

I gave up on the proposal when people pointed out that MW admins may not have 
the privileges that would allow the install script to do either.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
I am investigating how to write a comprehensive parser regression test. What I 
mean by this is something you wouldn't normally run frequently, but rather 
something that we could use to get past the known to fail tests now disabled. 
The problem is no one understands the parser well enough to have confidence 
that if you fix one of these tests that you will not break something else.

So, I thought, how about using the guts of DumpHTML to create a comprehensive 
parser regression test. The idea is to have two versions of phase3 + 
extensions, one without the change you make to the parser to fix a 
known-to-fail test (call this Base) and one with the change (call this 
Current). Modify DumpHTML to first visit a page through Base, saving the HTML 
then visit the same page through Current and compare the two results. Do this 
for every page in the database. If there are no differences, the change in 
Current works.
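
In pseudo-PHP the inner loop is simple; getAllTitles() and renderWith() below
are invented names standing in for whatever the modified DumpHTML would
actually expose:

// Sketch of the CPRT inner loop: render every page through both trees
// and record any title whose HTML differs.
$failures = array();
foreach ( getAllTitles() as $title ) {
    if ( renderWith( 'Base', $title ) !== renderWith( 'Current', $title ) ) {
        $failures[] = $title;
    }
}
printf( "%d page(s) differ between Base and Current\n", count( $failures ) );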

Sitting here I can see the eyeballs of various developers bulging from their 
faces. "What?" they say. "If you ran this test on, for example, Wikipedia, it 
could take days to complete." Well, that is one of the things I want to find 
out. The key to making this test useful is getting the code in the loop 
(rendering the page twice and testing the results for equality) very efficient. 
I may not have the skills to do this, but I can at least develop an upper bound 
on the time it would take to run such a test.

A comprehensive parser regression test would be valuable for:

* fixing the known-to-fail tests.
* testing any new parser that some courageous developer decides to code.
* testing major releases before they are released.
* catching bugs that aren't found by the current parserTest tests.
* other things I haven't thought of.

Of course, you wouldn't run this thing nightly or, perhaps, even weekly. Maybe 
once a month would be enough to ensure the parser hasn't regressed out of sight.



  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] More fun and games with file position relative code

2009-08-12 Thread dan nessett
So. I checked out a copy of phase3 and extensions to start working on 
investigating the feasibility of a comprehensive parser regression test. After 
getting the working copy downloaded, I do what I usually do - blow away the 
extensions directory stub that comes with phase3 and soft link the downloaded 
copy of extensions in its place. I then began familiarizing myself with 
DumpHTML by starting it up in a debugger. Guess what happened.

It fell over. Why? Because DumpHTML is yet another software module that 
computes the value $IP. So what? Well, DumpHTML.php is located in 
../extensions/DumpHTML. At line 57-59 it executes:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
$IP = dirname(__FILE__).'/../..';
}

This works on a deployed version of MW, since the extensions directory is 
embedded in /phase3. But, in a development version, where /extensions is a 
separate subdirectory, ./../.. does not get you to phase3, it gets you to MW 
root. So, when you execute the next line:

require_once( $IP . "/maintenance/commandLine.inc" );

DumpHTML fails.

Of course, since I am going to change DumpHTML anyway, I can move it to 
/phase3/maintenance and change the '/../..' to '/..' and get on with it. But, 
for someone attempting to fix bugs in DumpHTML, the code that relies on knowledge 
of where DumpHTML.php is in the distribution tree is an issue.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Roan Kattouw roan.katt...@gmail.com wrote:

 I read this paragraph first, then read the paragraph above
 and
 couldn't help saying WHAT?!?. Using a huge set of pages
 is a poor
 replacement for decent tests.

I am not proposing that the CPRT be a substitute for decent tests. We still 
need a good set of tests for the whole MW product (not just the parser). Nor 
would I recommend making a change to the parser and then immediately running 
the CPRT. Any developer that isn't masochistic would first run the existing 
parserTests and ensure it passes. Then, you probably want to run the modified 
DumpHTML against a small random selection of pages in the WP DB. Only if it 
passes those tests would you then run the CPRT for final assurance. 

The CPRT I am proposing is about as good a test of the parser that I can think 
of. If a change to the parser passes it using the Wikipedia database (currently 
5 GB), then I would say for all practical purposes the changes made to the 
parser do not regress it.

 Also, how would you handle
 intentional
 changes to the parser output, especially when they're
 non-trivial?

I don't understand this point. Would you elaborate?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] More fun and games with file position relative code

2009-08-12 Thread dan nessett
Chad innocentkil...@gmail.com wrote:
 
 DumpHTML will not be moved back to maintenance in the repo, it was
 already removed from maintenance and made into an extension. Issues
 with it as an extension should be fixed, but it should not be encouraged
 to go back into core.

What I meant was I can move the DumpHTML-based CPRT code to maintenance in my 
working copy and work on it there. Whether this code is simply a MacGyver test 
or something else is completely up in the air.

 Also, on a meta notecan you possibly confine all of your testing comments
 to a single thread? We don't need a new thread for each problem you find :)
 

My apologies.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Mon, 8/10/09, Tim Starling tstarl...@wikimedia.org wrote:

 No, the reason is because LocalSettings.php is in the
 directory
 pointed to by $IP, so you have to work out what $IP is
 before you can
 include it.
 
 Web entry points need to locate WebStart.php, and command
 line scripts
 need to locate maintenance/commandLine.inc. Then either of
 those two
 entry scripts can locate the rest of MediaWiki.

Fair enough, but consider the following.

I did a global search over the phase3 directory and got these hits for the 
string "$IP = ":

.../phase3/config/index.php:30:  $IP = dirname( dirname( __FILE__ ) );
.../phase3/config/index.php:1876:   \$IP = MW_INSTALL_PATH;
.../phase3/config/index.php:1878:   \$IP = dirname( __FILE__ );
.../phase3/includes/WebStart.php:61:  $IP = getenv( 'MW_INSTALL_PATH' );
.../phase3/includes/WebStart.php:63:$IP = realpath( '.' );
.../phase3/js2/mwEmbed/php/noMediaWikiConfig.php:11:  $IP = 
realpath(dirname(__FILE__).'/../');
.../phase3/LocalSettings.php:17:$IP = MW_INSTALL_PATH;
.../phase3/LocalSettings.php:19:$IP = dirname( __FILE__ );
.../phase3/maintenance/language/validate.php:16:  $IP = dirname( __FILE__ ) . 
'/../..';
.../phase3/maintenance/Maintenance.php:336: $IP = strval( 
getenv('MW_INSTALL_PATH') ) !== ''

So, it appears that $IP computation is occurring in 6 files. In addition, $IP 
is adjusted by the relative place of the file in the MW source tree (e.g., in 
validate.php, $IP is set to dirname( __FILE__ ) . '/../..';) Adjusting paths 
according to where a file exists in a source tree is fraught with danger. If 
you ever move the file for some reason, the code breaks.

Why not isolate at least $IP computation in a single function? (Perhaps 
breaking up LocalSettings.php into two parts is overkill, but certainly 
cleaning up $IP computation isn't too radical an idea.) Of course, there is the 
problem of locating the file of the function that does this. One approach is to 
recognize that php.ini already requires potential modification for MW use. 
Specifically, the path to PEAR must occur in 'include_path'. It would be a 
simple matter to add another search directory for locating the initialization 
code.

Or maybe there is a better way of locating MW initialization code. How it's done 
is an open issue. I am simply arguing that computing the value of $IP by 
relying on the position of the php file in a source tree is not good software 
architecture. Experience shows that this kind of thing almost always leads to 
bugs.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad innocentkil...@gmail.com wrote:

 
 The problem with putting it in a single function is you
 still have
 to find where that function is to begin with (I'd assume
 either
 GlobalFunctions or install-utils would define this). At
 which point
 you're back to the original problem: defining $IP yourself
 so you
 can find this.
 
 Yes, we should probably do this all a little more cleanly
 (at least
 one unified style would be nice), but constructing it
 manually is
 pretty much a given for anything trying to find an entry
 point, as
 Tim points out.

I'm probably missing something since I have only been programming in PHP for 
about 4 weeks, but if you set include_path in php.ini so it includes the root 
of the MW tree, put a php file at that level that has a function (or a method 
in a class) that returns the MW root path, wouldn't that work? For example, if 
you modified include_path in php.ini to include pathname to MW root, added 
the file MWInit.php to the MW root directory and in MWInit.php put a function 
MWInit() that computes and returns $IP, wouldn't that eliminate the necessity 
of manually figuring out the value of $IP [each place where you now compute $IP 
could require_once('MWInit.php') and call MWInit()]?
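
Concretely, the whole of the proposed MWInit.php could be this small (a sketch
of the idea, not code that exists in any MW tree):

// MWInit.php -- hypothetical file living at the MW root and found via
// include_path. Since it sits at the root, its own location *is* $IP.
function MWInit() {
    return dirname( __FILE__ );
}

and each caller would just do require_once( 'MWInit.php' ); $IP = MWInit();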

Of course, it may be considered dangerous for the MW installation software to 
fool around with php.ini. But, even if you require setting the MW root manually 
in php.ini::include_path (abusing the php namespace disambiguation operator 
here) that would be an improvement. You should only have to do this once and 
could upgrade MW without disturbing this binding.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
Brion Vibber br...@wikimedia.org wrote:

 Unless there's some reason to do otherwise, I'd recommend dropping the 
 $IP from the autogen'd LocalSettings.php and pulling in 
 DefaultSettings.php from the level above. (Keeping in mind that we 
 should retain compat with existing LocalSettings.php files that are 
 still pulling it.)

Better, but what about /config/index.php, noMediaWikiConfig.php, validate.php 
and Maintenance.php? Having only two different places where $IP is computed is 
a definite improvement (assuming you fix the 4 files just mentioned), but it 
still means the code in WebStart.php and commandLine.inc is file position 
dependent. If this is the best that can be done, then that is that. However, it 
would really be better if all position dependent code could be eliminated.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Aryeh Gregor simetrical+wikil...@gmail.com wrote:

 Then you're doing almost exactly the same thing we're doing
 now,
 except with MWInit.php instead of LocalSettings.php. 
 $IP is normally
 set in LocalSettings.php for most page views.  Some
 places still must
 figure it out independently in either case, e.g.,
 config/index.php.
 

I want to avoid seeming obsessed by this issue, but file position dependent 
code is a significant generator of bugs in other software. The difference 
between MWInit.php and LocalSettings.php is that if you get the former into a 
directory that PHP uses for includes, you have a way of getting the root path 
of MW without the caller knowing anything about the relative structure of the 
code distribution tree hierarchy. As you pointed out previously, the reason you 
need to compute $IP before including/requiring LocalSettings is you don't know 
where it is.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Brion Vibber br...@wikimedia.org wrote:

 I'm not sure there's a compelling reason to even have $IP
 set in 
 LocalSettings.php anymore; the base include path should
 probably be 
 autodetected in all cases, which is already being done in
 WebStart.php 
 and commandLine.inc, the web and CLI initialization
 includes based on 
 their locations in the file tree.

I started this thread because two of the fixes in the patch for bug ticket 
20112 (those for Database.t and Global.t) move the require of LocalSettings.php 
before the require of AutoLoader.php. This is necessary because AutoLoader.php 
eventually executes: 
require_once("$IP/js2/mwEmbed/php/jsAutoloadLocalClasses.php").

This is a perfect example of how file position dependent code can introduce 
bugs. If $IP computation is eliminated from LocalSettings.php, then both of 
these tests will once again fail. The tests in phase3/t/inc are not executed as 
the result of a web request or through a command line execution path that 
includes maintenance/commandLine.inc. They normally are executed by typing at a 
terminal: "prove t/inc -r" or, e.g., "prove t/inc/Global.t". prove is a TAP 
protocol consumer that digests and displays the results of the tests, which are 
TAP protocol producers.

So, eliminating $IP computation from LocalSettings would require the 
development of new code for these tests. That would mean there would be 4 
places where $IP is computed: WebStart.php, commandLine.inc, /config/index.php and 
the t test directory. Not good.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Brion Vibber br...@wikimedia.org wrote:

 These scripts should simply be updated to initialize the
 framework 
 properly instead of trying to half-ass it and load
 individual classes.

I agree, which is why I am trying to figure out how to consolidate the tests in 
/tests/ and /t/. [The example I gave was to illustrate how bugs can pop up when 
you use code that depends on the position of files in a distribution tree, not 
because I think the tests are in good shape. The bug fixes are only intended to 
make these tests available again, not to declare them finished.]

I could use some help on test system architecture - you do wear the systems 
architect hat :-). It doesn't seem right to use WebStart.php to initialize the 
tests. For one thing, WebStart starts up profiling, which doesn't seem relevant 
for a test. That leaves commandLine.inc. However, the t tests stream TAP protocol 
text to prove, a PERL script that normally runs them. I have no way of 
running these tests through prove because my IDE doesn't support PERL, so if I 
changed the tests to require commandLine.inc, it would be hard to debug any 
problems.

I researched other TAP consumers and didn't find anything other than prove. I 
was hoping that one written in PHP existed, but I haven't found anything. So, I 
am in kind of a bind. We could just dump the t tests, but at least one 
(Parser.t, which runs parserTests) is pretty useful. Furthermore, TAP has an 
IETF standardization effort and phpunit can produce TAP output. This suggests 
that TAP is a good candidate for test system infrastructure.
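
For the record, a bare-bones TAP consumer is not much code; the sketch below
only understands the ok / not ok lines of the protocol, which would already be
enough to tally test results without prove:

<?php
// Minimal TAP consumer sketch: read a TAP stream on stdin and count
// passes and failures. Real TAP (plans, diagnostics, TODO/SKIP) is richer.
$pass = $fail = 0;
while ( ( $line = fgets( STDIN ) ) !== false ) {
    if ( preg_match( '/^not ok\b/', $line ) ) {
        $fail++;
    } elseif ( preg_match( '/^ok\b/', $line ) ) {
        $pass++;
    }
}
printf( "passed: %d, failed: %d\n", $pass, $fail );
exit( $fail > 0 ? 1 : 0 );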

So, what are your thoughts on this?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, lee worden won...@riseup.net wrote:

 Placing it in the include path could make it hard to run
 more than one version of the MW code on the same server,
 since both would probably find the same file and one of them
 would likely end up using the other one's $IP.

That is a good point. However, I don't think it is insurmountable. Callers to 
MWInit() could pass their path (which they can get by calling realpath( '.' )). 
In a multi-MW environment MWInit() could disambiguate the root path by 
searching the provided path against those of all installed root paths.

 Another way of putting it is, is it really better to
 hard-code the absolute position of the MW root rather than
 its position relative to the files in it?

Well, I think so. Hardcoding the absolute position of the MW root occurs at 
install time. Using file position dependent code is a development time 
dependency. Files are not moved around once installed, but could be moved 
around during the development process. So, bugs that depend on file position 
are normally not caused by installation activity, but rather by development 
activity.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Trevor Parscal tpars...@wikimedia.org wrote:

Not to worry. I've given up on this issue, at least for the moment.

Dan

 What seems to be being discussed here are particular
 offensive areas of 
 MediaWiki - however if you really get to know MediaWiki you
 will likely 
 find tons of these things everywhere... So are we proposing
 a specific 
 change that will provide a solution to a problem or just
 nit-picking?
 
 I ask cause I'm wondering if I should ignore this thread or
 not (an 
 others are probably wondering the same) - and I'm sort of
 feeling like 
 this is becoming one of those threads where the people
 discussing things 
 spend more time and effort battling each other than just
 fixing the code.



  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad innocentkil...@gmail.com wrote:

 To be perfectly honest, I'm of the opinion that tests/ and
 t/
 should be scrapped and it should all be done over,
 properly.
 
 What we need is an easy and straightforward way to write
 test cases, so people are encouraged to write them. Right
 now, nobody understands wtf is going on in tests/ and t/,
 so
 they get ignored and the /vast/ majority of the code isn't
 tested.
 
 What we need is something similar to parser tests, where
 it's
 absurdly easy to pop new tests in with little to no coding
 required at all. Also, extensions having the ability to
 inject
 their own tests into the framework is a must IMHO.

There is a way to add tests easily, but it requires some community discipline. 
PHPUnit has a --skeleton-test command (one of two skeleton variations) that 
automatically generates unit tests (see 
http://www.phpunit.de/manual/current/en/skeleton-generator.html). All 
developers have to do is add assertions (written as @assert annotations in 
doc comments) to their code and run phpunit with the skeleton flag. If you 
want even more hand holding, NetBeans will do it for you.
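
To make that concrete, the manual's approach looks roughly like this (from 
memory, so check the manual for the exact syntax; Calculator is just the 
stock example):

<?php
// Calculator.php -- the @assert annotations in the doc comment are all a
// developer has to write; PHPUnit generates a test case from them.
class Calculator {
    /**
     * @assert (0, 0) == 0
     * @assert (1, 1) == 2
     * @assert (2, 3) == 5
     */
    public function add( $a, $b ) {
        return $a + $b;
    }
}

Running phpunit --skeleton-test Calculator then writes a CalculatorTest.php 
with one test method per @assert line, which runs like any other PHPUnit 
test.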

This is all wonderful, but there are problems:

* Who is going to go back and create all of the assertions for existing code? 
Not me (at least not alone). This is too big a job for one person. In order for 
this to work, you need buy-in from at least a reasonable number of developers. 
So far, the number of developers expressing an interest in code quality and 
testing is pretty small.

* What motivation is there for those creating new code to do the work to add 
assertions with good code coverage? So far I haven't seen anything in the MW 
code development process that would encourage a developer to do this. Without 
some carrots (and maybe a few sticks) this approach has failure written all 
over it.

* Even if we get a bunch of Unit tests, how are they integrated into a useful 
whole? That requires some decisions on test infrastructure. This thread begins 
the discussion on that, but it really needs a lot more.

* MW has a culture problem. Up to this point people just sling code into trunk 
and think they are done. As far as I can tell, very few feel they have any 
responsibility for ensuring their code won't break the product. [Perhaps I am 
being unkind on this. Without any testing tools available, it is quite possible 
that developers want to ensure the quality of their code, but don't have the 
means of doing so.]

I realize these observations may make me unpopular. However, someone has to 
make them. If everyone just gets mad, it doesn't solve the problem. It just 
pushes it out to a time when it is even more serious.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Alexandre Emsenhuber alex.emsenhu...@bluewin.ch wrote:

 +1, we could maybe write our own test system that can be
 based on the  
 new Maintenance class, since we already some test scripts
 in / 
 maintenance/ (cdb-test.php, fuzz-tester.php,
 parserTests.php,  
 preprocessorFuzzTest.php and syntaxChecker.php). Porting
 tests such as  
 parser to PHPUnit is a pain, since it has no native way to
 write a  
 test suite that has a unknow number of tests to run.

Rewriting parserTests as PHPUnit tests would be a horrible waste of time. 
parserTests works and it provides a reasonable service. One problem, however, 
is how do we fix the parser? It is a pretty complex body of code (when I ran 
a MacGyver test on parserTests, 141 files were accessed, most of which are 
associated with the parser). I have been thinking about this, but those 
thoughts are not yet clear enough to make public.

On the other hand, taking the parserTests route and building all of our own 
test infrastructure would also be a good deal of work. There are tools out 
there (PHPUnit and prove) that are useful. In my view, creating a test 
infrastructure from scratch would be an unnecessary waste of time and 
resources.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
Brion Vibber br...@wikimedia.org wrote:
 
 Starting about a week ago, parser test results are now included in code 
 review listings for development trunk:
 
 http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki/pathpath=%2
 Ftrunk%2Fphase3
 
 Regressions are now quickly noted and fixed up within a few revisions -- 
 something which didn't happen when they were only being run manually by 
 a few folks here and there.
 
 Is this the sort of thing you're thinking of?
 
 -- brion

Yes. Absolutely. Visibility is critical for action, and running parserTests 
on each revision in the development trunk is a good first step toward 
improving code quality.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Robert Leverington rob...@rhl.me.uk wrote:

 Please can you properly break your lines in e-mail though,
 to 73(?)
 characters per a line - should be possible to specify this
 in your
 client.

I'm using the web interface provided by Yahoo. If you can
point me in the right direction for setting up Yahoo to do
this, I'll be happy to (I've done it manually on this
message).

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Alexandre Emsenhuber alex.emsenhu...@bluewin.ch wrote:

 My idea is the move the backend of ParserTest
 (parserTests.txt file  
 processing, result reporting, ...) and the TestRecorder
 stuff to  
 something like a MediaWikiTests class that extends
 Maintenance and  
 move the rest in a file in /maintenance/tests/ (to be
 created) and re- 
 use the backend to have files that have the same format,
 but test's  
 input could be raw PHP code (a bit like PHP core's tests)
 with a new  
 config variable that's like $wgParserTestFiles but for
 these kind of  
 test. This mostly concerns the actual tests in /tests/ and
 /t/inc/).  
 We can also port cdb-test.php, fuzz-tester.php,  
 preprocessorFuzzTest.php and syntaxChecker.php to this new
 system and  
 then create a script in /maintenance/ that runs all the
 tests in / 
 maintenance/tests/. This allows to also upload all the
 tests to  
 CodeReview, not only the parser tests. A benefit is that we
 can get  
 ride of /tests/ and /t/.

One of the beauties of open source development is that whoever does the work 
wins the prize. Of course, I am sure senior developers have discretionary 
power over what goes into a release and what does not. But if you want to do 
the work, go for it (says the guy [me] who just joined the group).

However, I think you should consider the following:

* parserTests is designed to test parsing, which is predominantly a text 
manipulation task. Other parts of MW do not necessarily provide text-processing 
markers that can be used to decide whether they are working correctly.

* Sometimes testing a module requires determining whether a series of actions 
produces the correct behavior. As far as I am aware, parserTests has no 
facility for tying a set of actions together into a single test.

For example, consider two MW files in phase3/includes: 1) AutoLoader.php and 2) 
Hooks.php. In AutoLoader, the method loadAllExtensions() loads all extensions 
specified in $wgAutoloadClasses. It takes no parameters and has no return 
value. It simply walks through the entries specified in $wgAutoloadClasses and 
if the class specified as the key exists, executes a require of the file 
specified in the value. I don't see how you would specify a test of this method 
using the syntax of parserTests.txt.

In Hooks.php, there is a single function wfRunHooks(). It looks up hooks 
previously set and calls user code for them. It throws exceptions in certain 
error conditions and testing it requires setting a hook and seeing if it is 
called appropriately. I don't see how you could describe this behavior with 
parserTests.txt syntax.

Of course, you could create new syntax and behavior for the parserTests 
software, but that would duplicate a lot of work that existing infrastructure 
has already done. For example, see the set of assertions PHPUnit provides 
(http://www.phpunit.de/manual/2.3/en/api.html#api.assert.tables.assertions).

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad innocentkil...@gmail.com wrote:

  Neither of these need to be tested directly.  If
 AutoLoader breaks,
  then some other class won't load, and the tests for
 that class will
  fail.  If wfRunHooks() fails, then some hook won't
 work, and any test
  of that hook will fail.
 

I will pass on commenting about these for the moment because:

  I think what's needed for decent test usage for
 MediaWiki is:
 
  1) Some test suite is picked.  PHPUnit is probably
 fine, if it runs
  out of the box and doesn't need some extra module to
 be installed.
 
  2) The test suite is integrated into CodeReview with
 nag e-mails for
  broken tests.
 
  3) A moderate number of tests are written for the test
 suite.
  Existing parser tests could be autoconverted,
 possibly.  Maybe someone
  paid could be assigned to spend a day or two on this.
 
  4) A new policy requires that everyone write tests for
 all their bug
  fixes and enhancements.  Commits that don't add
 enough tests will be
  flagged as fixme, and reverted if not fixed.
 
  (4) is critical here.

All good stuff, especially (4) - applause :-D.

  While we're at it, it would be nice if we instituted
 some other
  iron-clad policies.  Here's a proposal:
 
  * All new functions (including private helper
 functions, functions in
  JavaScript includes, whatever) must have
 function-level documentation
  that explains the purpose of the function and
 describes its
  parameters.  The documentation must be enough that no
 MediaWiki
  developer should ever have to read the function's
 source code to be
  able to use it correctly.  Exception: if a method is
 overridden which
  is already documented in the base class, it doesn't
 need to be
  documented again in the derived class, since the
 semantics should be
  the same.
  * All new classes must have high-level documentation
 that explains
  their purpose and structure, and how you should use
 them.  The
  documentation must be sufficient that any MediaWiki
 developer could
  understand why they might want to use the class in
 another file, and
  how they could do so, without reading any of the
 source code.  Of
  course, developers would still have to read the
 function documentation
  to learn how to use specific functions.  There are no
 exceptions, but
  a derived class might only need very brief
 documentation.
  * All new config variables must have documentation
 explaining what
  they do in terms understandable to end-users.  They
 must describe what
  values are accepted, and if the values are complicated
 (like
  associative arrays), must provide at least one example
 that can be
  copy-pasted.  There are no exceptions.
  * If any code is changed so as to make a comment
 incorrect, the
  comment must be updated to match.  There are no
 exceptions.
 
  Or whatever.  We have *way* too few high-level
 comments in our code.
  We have entire files -- added quite recently, mind
 you, by established
  developers -- that have no or almost no documentation
 on either the
  class or function level.  We can really do better
 than this!  If we
  had a clear list of requirements for comments in new
 code, we could
  start fixme'ing commits that don't have adequate
 comments.  I think
  that would be enough to get people to add sufficient
 comments, for the
  most part.  Without clear rules, though, backed up by
 the threat of
  reverting, I don't think the situation will improve
 here.

Wonderful stuff - more applause.

  (Wow, this kind of turned into a thread hijack.  :D)
 

Who cares. It needs to be said.

 
 On the whole new code front. Can we all /please/ remember
 that
 we're writing PHP5 here. Visibility on all new functions
 and variables
 should also be a must.
 

OK, I must admit I didn't understand that, probably because I'm new
to PHP. Can you make this more explicit?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] A potential land mine

2009-08-10 Thread dan nessett
For various reasons I have noticed that several files independently compute the 
value of $IP. For example, maintenance/Command.inc and includes/WebStart.php 
both calculate its value. One would expect this value to be computed in one 
place only and used globally. The logical place is LocalSettings.php.

Sprinkling the computation of $IP all over the place is just asking for 
trouble. At some point the code used to make this computation may diverge and 
bugs will be introduced. My first reaction to this problem was to wonder why 
these files didn't just require LocalSettings.php. However, since it is a 
fairly complex file, doing so might not be desirable because: 1) there are 
values in LocalSettings.php that would interfere with values in these files, 
2) there may be ordering problems, or 3) there are performance 
considerations.

If it isn't possible and/or desirable to replace the distributed computation of 
$IP with require_once('LocalSettings.php'), then I suggest breaking 
LocalSettings into two parts, say LocalSettingsCore.php and 
LocalSettingsNonCore.php (I am sure someone can come up with better names). 
LocalSettingsCore.php would contain only those calculations and definitions 
that do not interfere with the core MW files. LocalSettingsNonCore.php would 
contain everything else now in LocalSettings.php. Obviously, the first 
candidate for inclusion in LocalSettingsCore.php is the computation of $IP. 
Once such a separation is carried out, files like maintenance/Command.inc and 
includes/WebStart.php can require_once('LocalSettingsCore.php') instead of 
independently computing $IP.
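
To make the proposal concrete, LocalSettingsCore.php might contain little more 
than this (a strawman -- the file and its name are invented):

<?php
# LocalSettingsCore.php -- strawman; this file does not exist today. It
# lives in the MW root, so dirname( __FILE__ ) *is* $IP, computed in
# exactly one place.
if ( !isset( $IP ) ) {
    $IP = dirname( __FILE__ );
}
# Only values that every entry point can safely see belong here.

includes/WebStart.php and maintenance/Command.inc would then simply do

require_once( dirname( __FILE__ ) . '/../LocalSettingsCore.php' );

instead of each computing $IP on its own, and everything users actually edit 
would stay in LocalSettingsNonCore.php (required later, the way 
LocalSettings.php is now).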


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

