Re: What hurts you the most in Perl?

2010-12-01 Thread Fergal Daly
2010/12/1 Jason Purdy ja...@journalistic.com:
 To add my five cents, the thing that hurts me the most is that Perl is not
 an accepted language when it comes to the different new platforms.

 Our work has adopted Drupal as a CMS and it's written in PHP. It would be
 awesome if it was written in Perl, but as someone else posted in this
 thread, we can pick up languages pretty easily (better than foreign
 languages, no? ;)) and be productive in a few weeks.

 I'm also attracted to the new Android and iPad platforms, but there's no
 Perl there, either.

Veering off-topic briefly.

Perl is available through the android scripting engine

http://code.google.com/p/android-scripting/

although only Java has first-class support with access to all the GUI
and other stuff. You can run command-line perl no problem so you can
script fetching things to your phone etc. You could also run a server
in Perl and interact with it through the browser (I know of at least
one python app that does this for android),

F

 There's no Perl when it comes to creating client-side web applications
 (using JavaScript).

 IMHO, Perl is getting relegated to server-side/backend applications and when
 more power is getting brought to the front, it's losing mindshare/focus.

 - Jason

 http://use.perl.org/~Purdy/journal/31280

 On 11/24/2010 07:01 AM, Gabor Szabo wrote:

 The other day I was at a client that uses Perl in part of their system
 and we talked a bit about the language and how we try to promote it at
 various events.

 Their Perl person then told me he would not use Perl now for a large
 application because:

 1) Threads do not work well - they are better in Python and in Java.

 2) Using signals and signal handlers regularly crashes perl.

 3) He also mentioned that he thinks the OO system of Perl is a hack -
     that the objects are hash refs and there is no privacy.

 So I wonder what hurts *you* the most in Perl?

 Gabor

 --
 Gabor Szabo                     http://szabgab.com/
 Perl Ecosystem Group       http://perl-ecosystem.org/




Re: Writing tests

2009-12-13 Thread Fergal Daly
2009/12/13 Rene Schickbauer rene.schickba...@gmail.com:
 Hi!

 I'm currently writing some tests for my Maplat framework.

 Except for really simple tests, having PostgreSQL server and memcached
 installed is quite essential (both can be started as temporary instances if
 required as long as the binaries are available).

 What is the reasonable response if one or both of them are not available
 during make test?

 *) FAIL the tests?

If you do this you'll just get spammed to bits.

 *) SKIP the tests?

Maybe but see below.

 *) DIAG(Warning) and skip the tests?

Skip comes with a reason. If you want to give more detail then diag's are fine.

 In my case, skipping the tests will probably exclude >80% of the
 functionality, so what do I do? I probably can't just assume every
 cpantester has postgresql and memcached installed, can I?

It might be good to factor out all of the independent stuff into its
own module(s) if that makes sense, so that gets widely tested

Basically you want to avoid the tests being run on systems where they
are doomed to fail. You can do that by

- refusing to install (a bad idea, e.g. pgsql may be installed after your module)
- reducing your dependency by making things work with one of the
lighter in-memory or testing-oriented SQL DBMSs (I think there is at
least 1 pure perl one) and then have that as a prereq for the tests
- reducing your dependency by using a mock database module that is set
up just to respond to the test queries
- skipping them on such systems - I've had big arguments along these
lines before, I think that declaring a pass having skipped important
tests due to unsatisfied deps is a bad idea. Users expect a pass to
mean a pass and will probably not even notice skips whizzing past
during an automated install. Ideally tests should only be skipped when
they are irrelevant - e.g. Windows-only functions on a Linux install.
Skipping them for code that _will_ be called but can't be tested right
now is worse than not testing that code at all - the user is left with
a false-confidence in the module.
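
The skip-with-reason route might look like the minimal sketch below. The have_postgres check (scanning the PATH for a postgres binary) is a hypothetical stand-in for whatever detection Maplat really needs, not anything from the framework itself:

```perl
use strict;
use warnings;
use File::Spec;
use Test::More tests => 2;

# Hypothetical helper: true if a postgres binary is somewhere on the PATH.
sub have_postgres {
    for my $dir (File::Spec->path) {
        return 1 if -x File::Spec->catfile($dir, 'postgres');
    }
    return 0;
}

ok(1, 'independent code gets tested everywhere');

SKIP: {
    skip 'PostgreSQL not available', 1 unless have_postgres();
    ok(1, 'database-backed functionality');   # real DB tests would go here
}
```

The skip reason shows up in the TAP output, so testers at least see why the bulk of the suite didn't run.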

- a final odd idea - if you can detect that you are running under a
cpan tester (not sure if this is possible), you can dynamically add a
dependency on a sacrificial postgres_installed module - this module's
tests always fail if postgres is not available. You will get
cpantesters spam about it but you can just /dev/null that. For testers
that have postgresql it will pass and install and then your real
module will run its full test suite,

F

 LG
 Rene

 --
 #!/usr/bin/perl # 99 bottles Wikipedia Edition
 $c=99;do{print "$c articles of wikipedia on the net,\n$c articles of wiki".
 "pedia,\ndiscuss one at length, delete it at will,\n".--$c." articles of ".
 "wikipedia on the net.\n\n";}while($c);print"So long & thx for the fish\n";



Re: Exporter::Safe?

2008-06-21 Thread Fergal Daly
2008/6/20 Hans Dieter Pearcey [EMAIL PROTECTED]:
 On Fri, Jun 20, 2008 at 04:19:41PM +0100, Fergal Daly wrote:
 To be a little more constructive. Here's something that is
 implementable and I think reasonable.

 use Foo::Bar;

 never does anything to the importing package

 use Foo::Bar as => Bar;

 plops a constant function Bar into your package. The constant is an
 object on which you can do

 Bar->some_function(@args)

 and it is the equivalent of calling

 Foo::Bar::some_function(@args)

 In my TODO is an entry for implementing this for Sub::Exporter.

 You don't even need to use AUTOLOAD:

 * create Some::Long::Generated::Package

If you're going to use a generated package then you can use AUTOLOAD
just for the first call and you can put AUTOLOAD into a base class.

 * import symbols from Foo::Bar into SLGP, wrapping each with
  sub { shift; $original_code->(@_) }

I would

sub { shift; goto $original_code }

so the whole thing is entirely transparent from a stack point of view.

 * export Bar() into the calling package
  sub Bar () { Some::Long::Generated::Package }

Glad I'm not on my own wanting this :)
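
Pulling the pieces together, a rough self-contained sketch of the scheme (the package names, the greet function, and the use of AUTOLOAD on every call rather than just the first are all invented for illustration - real Sub::Exporter support would look different):

```perl
package Foo::Bar;
use strict;
use warnings;

sub greet { return "hello, $_[0]" }

sub import {
    my ($class, %args) = @_;
    return unless my $alias = $args{as};
    my $caller = caller;

    # A proxy package whose AUTOLOAD forwards "method" calls as plain
    # function calls into Foo::Bar; goto keeps the stack transparent.
    my $proxy = 'Foo::Bar::_Proxy';
    no strict 'refs';
    *{"${proxy}::AUTOLOAD"} = sub {
        my $name = ${"${proxy}::AUTOLOAD"};
        $name =~ s/.*:://;
        shift;                          # discard the proxy class name
        goto &{"Foo::Bar::$name"};
    };

    # Plop a constant function named after the alias into the caller.
    *{"${caller}::${alias}"} = sub () { $proxy };
}

package main;
BEGIN { Foo::Bar->import(as => 'Bar') }

print Bar->greet('world'), "\n";        # calls Foo::Bar::greet('world')
```

Because Bar is a declared sub at compile time, Bar->greet parses as Bar()->greet, which is exactly the constant-function trick being discussed.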

F

 hdp.



Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
Hmm. I seem to have misunderstood your problem. The stuff below
remains true but, to be relevant to your mail, it should include stuff
about subclasses. The principle is the same: changing what you export
based on something other than what the importer is requesting will
cause mysterious breakage,

F

2008/6/20 Fergal Daly [EMAIL PROTECTED]:
 2008/6/20 Ovid [EMAIL PROTECTED]:
 Buried deep within some code, someone used a module (Test::Most 0.03)
 which exports a 'set' function.  They weren't actually using that
 module.  It was just leftover cruft.  Unfortunately, the parent class
 of that module inherited from Class::Accessor.

 Test::Most exports 'set' and Class::Accessor calls a 'set' method.
 Oops.

 I'm trying to think of the best way to deal with this.  My first
 thought is to create a drop-in replacement for Exporter which will not
 export a function if caller->can($function) *unless* the person
 explicitly lists it in the import list with a unary plus:

 # 2008
 use Foo; # exports nothing
 use Bar; # exports set with Exporter::Safe

 set() # Bar

 # 2009 after upgrading some modules
 use Foo; # new version in 2009 exports set
 use Bar; # exports set with Exporter::Safe

 set() # now Foo and triggers rm -rf / :)


 Of course switching the order of imports gives the problems without
 Exporter::Safe.

 The upshot is that I believe there is no such thing as safe default
 exports. Python gets this right with

 import Foo
 import Bar

 Bar.set() # always works no matter what Foo suddenly starts doing.

 It deals with long package names by doing

 from Stupid.Long.Package import Name
 Name.Foo

 So, what would be interesting would be to find a way to bring the
 "short name in my current namespace" benefits of Python to Perl and
 abandon default exports entirely,

 F

  use Test::Most plan => 3, '+set';

 Are there better strategies?

 Cheers,
 Ovid

 --
 Buy the book  - http://www.oreilly.com/catalog/perlhks/
 Personal blog - http://publius-ovidius.livejournal.com/
 Tech blog - http://use.perl.org/~Ovid/journal/




Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
To be a little more constructive. Here's something that is
implementable and I think reasonable.

use Foo::Bar;

never does anything to the importing package

use Foo::Bar as = Bar;

plops a constant function Bar into your package. The constant is an
object on which you can do

Bar->some_function(@args)

and it is the equivalent of calling

Foo::Bar::some_function(@args)

Yes it would be slower as it would have to go through AUTOLOAD and
method calls but whether that's a problem depends on whether you
value CPU cycles more than brain cycles.

Since I'm in maintenance-only mode for Perl these days, I'm not
actually going to implement this. Most of my coding is in Python now
and I miss plenty about Perl but not imports, exports and
really::long::symbol::names::that::have::to::replace::everywhere::if::you::drop::in::a::different::module::with::the::same::interface,

F

2008/6/20 Fergal Daly [EMAIL PROTECTED]:
 2008/6/20 Ovid [EMAIL PROTECTED]:
 Buried deep within some code, someone used a module (Test::Most 0.03)
 which exports a 'set' function.  They weren't actually using that
 module.  It was just leftover cruft.  Unfortunately, the parent class
 of that module inherited from Class::Accessor.

 Test::Most exports 'set' and Class::Accessor calls a 'set' method.
 Oops.

 I'm trying to think of the best way to deal with this.  My first
 thought is to create a drop-in replacement for Exporter which will not
 export a function if caller->can($function) *unless* the person
 explicitly lists it in the import list with a unary plus:

 # 2008
 use Foo; # exports nothing
 use Bar; # exports set with Exporter::Safe

 set() # Bar

 # 2009 after upgrading some modules
 use Foo; # new version in 2009 exports set
 use Bar; # exports set with Exporter::Safe

 set() # now Foo and triggers rm -rf / :)


 Of course switching the order of imports gives the problems without
 Exporter::Safe.

 The upshot is that I believe there is no such thing as safe default
 exports. Python gets this right with

 import Foo
 import Bar

 Bar.set() # always works no matter what Foo suddenly starts doing.

 It deals with long package names by doing

 from Stupid.Long.Package import Name
 Name.Foo

 So, what would be interesting would be to find a way to bring the
 "short name in my current namespace" benefits of Python to Perl and
 abandon default exports entirely,

 F

  use Test::Most plan => 3, '+set';

 Are there better strategies?

 Cheers,
 Ovid

 --
 Buy the book  - http://www.oreilly.com/catalog/perlhks/
 Personal blog - http://publius-ovidius.livejournal.com/
 Tech blog - http://use.perl.org/~Ovid/journal/




Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
2008/6/20 Ovid [EMAIL PROTECTED]:
 Buried deep within some code, someone used a module (Test::Most 0.03)
 which exports a 'set' function.  They weren't actually using that
 module.  It was just leftover cruft.  Unfortunately, the parent class
 of that module inherited from Class::Accessor.

 Test::Most exports 'set' and Class::Accessor calls a 'set' method.
 Oops.

 I'm trying to think of the best way to deal with this.  My first
 thought is to create a drop-in replacement for Exporter which will not
 export a function if caller->can($function) *unless* the person
 explicitly lists it in the import list with a unary plus:

# 2008
use Foo; # exports nothing
use Bar; # exports set with Exporter::Safe

set() # Bar

# 2009 after upgrading some modules
use Foo; # new version in 2009 exports set
use Bar; # exports set with Exporter::Safe

set() # now Foo and triggers rm -rf / :)


Of course switching the order of imports gives the problems without
Exporter::Safe.
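
For concreteness, a stripped-down sketch of the guarded export Ovid describes - a hand-rolled import rather than a real Exporter replacement, with the module name, the export list, and the unary-plus parsing all invented:

```perl
package Bar;
use strict;
use warnings;

sub set { return 'Bar::set' }

# Guarded default exports: skip anything the caller can already
# resolve, unless it was explicitly requested with a unary plus.
sub import {
    my ($class, @args) = @_;
    my $caller = caller;
    my %forced;
    for my $arg (@args) {
        (my $name = $arg) =~ s/^\+//;
        $forced{$name} = 1 if $arg =~ /^\+/;
    }
    for my $name ('set') {
        next if $caller->can($name) && !$forced{$name};
        no strict 'refs';
        no warnings 'redefine';
        *{"${caller}::${name}"} = \&{"${class}::${name}"};
    }
}

package main;

sub set { return 'main::set' }   # pre-existing sub in the caller

Bar->import;                     # guarded: does not clobber main::set
print set(), "\n";               # prints "main::set"

Bar->import('+set');             # unary plus forces the export
print set(), "\n";               # prints "Bar::set"
```

Note this only protects against clobbering at import time; as the 2008/2009 example shows, it can't protect you from what other modules start exporting later.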

The upshot is that I believe there is no such thing as safe default
exports. Python gets this right with

import Foo
import Bar

Bar.set() # always works no matter what Foo suddenly starts doing.

It deals with long package names by doing

from Stupid.Long.Package import Name
Name.Foo

So, what would be interesting would be to find a way to bring the
"short name in my current namespace" benefits of Python to Perl and
abandon default exports entirely,

F

  use Test::Most plan => 3, '+set';

 Are there better strategies?

 Cheers,
 Ovid

 --
 Buy the book  - http://www.oreilly.com/catalog/perlhks/
 Personal blog - http://publius-ovidius.livejournal.com/
 Tech blog - http://use.perl.org/~Ovid/journal/



Re: Why is use_ok failing in this test script?

2008-05-17 Thread Fergal Daly
2008/5/17 David Fleck [EMAIL PROTECTED]:
 I hope someone can help out this novice test writer.  I have a module that
 runs several test scripts, and recently they have started to fail on some
 tester's machines.  The tests work fine for me, and I can't see anything
 in the Test::More documentation that tells me what's going on.

 An example test script starts like this:


  # Before `make install' is performed this script should be runnable with
  # `make test'. After `make install' it should work as `perl Gtest.t'

  #

  use Test::More; BEGIN { use_ok('Statistics::Gtest') };

  #

  my $twothreefile = "t/2x3int.txt";
 [... rest of file follows ...]


 and, increasingly, the test fails, according to the emails I get and the
 test results I see on CPAN:


  /usr/bin/perl.exe -MExtUtils::Command::MM -e test_harness(0, 
 'blib/lib', 'blib/arch') t/*.t
  t/file_input..You tried to run a test without a plan at 
 t/file_input.t line 6.

As it says here, you ran a test before you set the plan.

use Test::More tests => 1; # or however many tests you have
BEGIN { use_ok('Statistics::Gtest') };

is what you should be doing.

The puzzling thing is how this ever worked for you. The only thing I
can think of is that somehow a plan was being set from within
Statistics::Gtest,
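
If the count really has to be computed at run time, the plan can still come first - a hedged sketch, where the @file_objects data is a stand-in and File::Spec is just a handy core module to load instead of Statistics::Gtest:

```perl
use strict;
use warnings;
use Test::More;

my @file_objects = ('a', 'b');        # stand-in for the real objects

plan tests => 1 + @file_objects;      # declare the plan first...

require_ok('File::Spec');             # ...then it is safe to run tests
ok(1, "checks for $_") for @file_objects;
```

require_ok avoids the BEGIN-block ordering trap entirely, at the cost of not testing import-time behaviour the way use_ok does.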

F

  BEGIN failed--compilation aborted at t/file_input.t line 6.
   Dubious, test returned 255 (wstat 65280, 0xff00)
   No subtests run


 Line 6 is the 'use Test::More' line, which is copied pretty much straight
 from the POD.  But again, it works fine on my one local machine.  What's
 going on here? And how do I fix it?

 (Incidentally, I do declare a plan, a few lines further down in the test
 script:

  plan tests => scalar(@file_objects) * 17;

 but I didn't think that was needed in the BEGIN block.)

 --
 David Fleck
 [EMAIL PROTECTED]




Re: XS wrapper around system - how to test the wrapper but not the system?

2008-01-28 Thread Fergal Daly
You could make the called function mockable

int (*ptr_getaddrinfo)(const char *node, const char *service,
                       const struct addrinfo *hints,
                       struct addrinfo **res) = getaddrinfo;

void mock_it(int (*new_ptr)(const char *, const char *,
                            const struct addrinfo *, struct addrinfo **)) {
    ptr_getaddrinfo = new_ptr;
}

so that when testing you're not calling the system one. It's a fairly
standard mocking technique, it just gets a bit ugly in C because it's
not a dynamic language - you have to replace all your calls to
getaddrinfo with calls to ptr_getaddrinfo - maybe there's some jiggery
pokery you could do to avoid that, I'm not sure.

The other alternative is to create a small library with a mock
getaddrinfo function in it and when compiling the tests, make sure it
gets linked in ahead of the libc but I fear that doing that in a
cross-platform way while maintaining your sanity may be tricky,
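
On the Perl side of an XS wrapper, the same function-pointer idea needs no C at all: a local glob assignment swaps the underlying sub for the duration of a test. The package and sub names below are invented, not Socket::GetAddrInfo's real internals:

```perl
use strict;
use warnings;

package My::Resolver;    # hypothetical wrapper module

sub getaddrinfo { return "real lookup: $_[0]" }    # stands in for the XS call
sub lookup      { return getaddrinfo($_[0]) }

package main;

my $real = My::Resolver::lookup('example.com');

my $mocked;
{
    # Swap the underlying call for the duration of this block only;
    # "local" restores the original glob on scope exit.
    no warnings 'redefine';
    local *My::Resolver::getaddrinfo = sub { return "mock lookup: $_[0]" };
    $mocked = My::Resolver::lookup('example.com');
}

print "$real\n";     # real lookup: example.com
print "$mocked\n";   # mock lookup: example.com
```

This only works for calls that cross a Perl sub boundary, of course - C code that calls getaddrinfo directly still needs one of the tricks above.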

F


On 29/01/2008, Paul LeoNerd Evans [EMAIL PROTECTED] wrote:
 I'm finding it difficult to come up with a good testing strategy for an
 XS module that's just a thin wrapper around an OS call, without
 effectively also testing that function itself. Since its behaviour has
 minor variations from system to system, writing a test script that can
 cope is getting hard.

 The code is the 0.08 developer releases of Socket::GetAddrInfo; see

   http://search.cpan.org/~pevans/Socket-GetAddrInfo-0.08_5/

 for latest.

 The code itself seems to be behaving on most platforms; most of the test
 failures come from such things as different OSes behaving differently if
 asked to resolve a host called something.invalid, or quite whether any
 system knows the ftp service, or what happens if it wants to reverse
 resolve unnamed RFC 1918 addresses (e.g. 192.168.2.2).

 The smoke testers page is showing a number of FAILs on most platforms not
 Linux (where I develop), probably because of assumptions the tests make
 that don't hold there any more. E.g. one problem I had was BSD4.4-based
 systems, whose struct sockaddr_in includes the sin_len field.

   http://cpantesters.perl.org/show/Socket-GetAddrInfo.html

 Does anyone have any strategy suggestions for this?

 --
 Paul LeoNerd Evans

 [EMAIL PROTECTED]
 ICQ# 4135350   |  Registered Linux# 179460
 http://www.leonerd.org.uk/




Re: lambda - a shortcut for sub {...}

2007-10-13 Thread Fergal Daly
On 12/10/2007, Bill Ward [EMAIL PROTECTED] wrote:
 On 10/11/07, A. Pagaltzis [EMAIL PROTECTED] wrote:
  * Eric Wilhelm [EMAIL PROTECTED] [2007-10-11 01:05]:
 http://search.cpan.org/~ewilhelm/lambda-v0.0.1/lib/lambda.pm
 
  If I saw this in production code under my responsibility, I'd
  submit it to DailyWTF. However, I have nothing against its use
  in code I'll never see. Carry on.
 
  This opinion brought to you by Andy Lester's Perlbuzz rant.

 What worries me is someone's gonna submit an otherwise useful module
 to CPAN that uses this feature.

I doubt it. Anyone who can produce a genuinely useful module on CPAN
is unlikely to want to add a dependency for the sake of a few keystrokes.
There are people who won't even use better testing modules because
it would add a dependency,

F


Re: what's the right way to test a source filter?

2007-08-08 Thread Fergal Daly
I've never used source filters but if Perl allows you to extract the
post-filtered source code then I'd test that with a whole bunch of
snippets. If not then I'd test the compiled code against expected
compiled code by running both through B::Deparse (or something like
it, demerphq has a module for sub comparisons),
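
A sketch of the deparse-and-compare idea - here both subs come from eval'd strings standing in for pre- and post-filter source, since I don't know how a given filter exposes its output:

```perl
use strict;
use warnings;
use B::Deparse;

my $deparse = B::Deparse->new;

# Stand-ins: the first string pretends to be the post-filter output,
# the second is the source we expect the filter to have produced.
my $filtered = eval 'sub { my $x = shift; return $x * 2 }' or die $@;
my $expected = eval 'sub { my $x = shift; return $x * 2 }' or die $@;

# Deparse both compiled subs back to canonical source and compare.
my $same = $deparse->coderef2text($filtered)
        eq $deparse->coderef2text($expected);
print $same ? "equivalent\n" : "different\n";
```

Comparing deparsed text sidesteps whitespace and comment differences, though it can still be tripped up by optimizations that change the op tree.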

F

On 07/08/07, David Nicol [EMAIL PROTECTED] wrote:
 so I am closer than ever to releasing my way-cool source filter module,
 which is based on Filter::Simple.  Big question:  how do I write the test
 script?



Re: Test failures - I can't work out why

2007-04-29 Thread Fergal Daly

On 28/04/07, Eric Wilhelm [EMAIL PROTECTED] wrote:

# from Fergal Daly
# on Saturday 28 April 2007 06:28 am:

You don't have it as a prereq in Makefile.PL. It's possible the
machines running the test don't have it installed (people do weird
things to their perl installs sometimes),

Like delete core modules?  I don't think it's a prereq issue.


It must be nice to live in a world where all bug reports come from
people with sane configurations :)

F



# from Paul LeoNerd Evans on Saturday 28 April 2007 05:29 am:

 /home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/
B.so: undefined symbol: Perl_Icheckav_save_ptr at
 /home/cpan/perl588/lib/5.8.8/XSLoader.pm line 70.

I think the problem is the $ENV{PERL} || 'perl' bit.  You want $^X.

I can't see any common differences between the machines it fails on,
 and the machines it passes on

If you look again, you might find that they all have something like this
is common:

  Perl: $^X = /home/cpan/perl588/bin/perl

I'm guessing that the PERL5LIB in the testing rig combined with your
test script forcing use of the system perl is causing perl5.6 or
whatever to try to load the .so for 5.8.8.

--Eric
--
The first rule about Debian is you don't talk about Debian
---
http://scratchcomputing.com
---



Re: Test failures - I can't work out why

2007-04-28 Thread Fergal Daly

You don't have it as a prereq in Makefile.PL. It's possible the
machines running the test don't have it installed (people do weird
things to their perl installs sometimes),

F

On 28/04/07, Paul LeoNerd Evans [EMAIL PROTECTED] wrote:

I've got a large number of failures (9 fail vs. 6 pass) on one module of
mine, which is dragging my stats down quite a bit, and I've no idea why:

  http://cpantesters.perl.org/show/B-LintSubs.html#B-LintSubs-0.03

They all seem to fail on some variant of:

  t/01happyCan't load
'/home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/B.so' for 
module B: 
/home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/B.so: 
undefined symbol: Perl_Icheckav_save_ptr at 
/home/cpan/perl588/lib/5.8.8/XSLoader.pm line 70.

That looks very much like a problem in B.so itself. But my module,
B::LintSubs is just a single pure-perl module of that name, I don't go
anywhere near B itself, so why does B fail here?

I can't see any common differences between the machines it fails on, and
the machines it passes on (7, including my desktop at home I tested it
on).

Does anyone have any ideas?

--
Paul LeoNerd Evans

[EMAIL PROTECTED]
ICQ# 4135350   |  Registered Linux# 179460
http://www.leonerd.org.uk/




Re: Another non-free license - PerlBuildSystem

2007-02-21 Thread Fergal Daly

On 20/02/07, Arthur Corliss [EMAIL PROTECTED] wrote:

On Tue, 20 Feb 2007, Ashley Pond V wrote:

 I didn't want to feed this so responded personally to a couple off list.
 Y'all couldn't resist sharing your politics and goofs though so… I apologize
 to the disinterested if this just feeds it.

 I find it difficult to believe, being a middling hacker compared to some of
 you guys, that I'm the only one on this list who has ever written code that
 ended up used by a military group; or the only one who regretted it.

I've not only written code used by the military, but I also served in the
military.  Despite the idiots who like to portray us a baby killers I'm
proud of it.  And you're so surprised that I find you an offensive jackass
(that's right -- I looked at your site).

 I expressed interest in such a license getting hammered out by some experts
 because I don't like being a party to mass murder. Between 200,000 and
 750,000 (depending on whose figures you prefer) Iraqis have died at the hands
 of the US government since 1990. They can take my tax money to do it at the
 threat of prison but I would like to think it *might* be possible to stop
 them from taking my otherwise freely given work (the lack of Earth-moving
 nature of which is entirely irrelevant to any such debate) to do it. If such
 a license would be immaterial then so are all other petitions.

You're an idiot who thinks we're to blame for everything that's wrong in
the world.  That's your right, of course, and it's my right to call you out
for the bogus numbers.  Only a drooling, spoon-fed moron who's incapable of
research could come up with those kinds of errors.  Where's the proof of those
numbers?  At least sites like iraqbodycount.org actually give you access to
the database of incidents and reported body counts, and they're only up to
62k.  With the exception of Desert Storm this has been the safest war for
both sides we've ever conducted.


Read iraq body counts FAQ:

"What we are attempting to provide is a credible compilation of
civilian deaths that have been reported by recognized sources. Our
maximum therefore refers to reported deaths - which can only be a
sample of true deaths unless one assumes that every civilian death has
been reported."

In fact their criterion is that the death must be reported in at least
2 credible sources and, given that credible journalists cannot
travel in Iraq, this means the numbers are only somewhat related to
reality. So IBC accurately counts something that just confuses the
issue.

The Lancet study on the other hand is the same methodology used in
Darfur, the Congo, the Balkans and a variety of other conflict zones.
Strangely the numbers have been accepted without argument for all
those other places but the Iraq studies are hotly disputed by all
kinds of people who know nothing about statistics and/or how to count
deaths in a war zone. They are generally not disputed by
statisticians.

F


This is the wrong kind of forum for this kind of stupidity.  Just code, damn
it, and quit whining.

--Arthur Corliss
  Bolverk's Lair -- http://arthur.corlissfamily.org/
  Digital Mages -- http://www.digitalmages.com/
  Live Free or Die, the Only Way to Live -- NH State Motto


Re: Another non-free license - PerlBuildSystem

2007-02-21 Thread Fergal Daly

On 20/02/07, Shlomi Fish [EMAIL PROTECTED] wrote:

Hi Ashley!

On Tuesday 20 February 2007, Ashley Pond V wrote:
 I didn't want to feed this so responded personally to a couple off
 list. Y'all couldn't resist sharing your politics and goofs though so…
 I apologize to the disinterested if this just feeds it.

 I find it difficult to believe, being a middling hacker compared to
 some of you guys, that I'm the only one on this list who has ever
 written code that ended up used by a military group; or the only one
 who regretted it.

 I expressed interest in such a license getting hammered out by some
 experts because I don't like being a party to mass murder. Between
 200,000 and 750,000 (depending on whose figures you prefer) Iraqis have
 died at the hands of the US government since 1990. They can take my tax
 money to do it at the threat of prison but I would like to think it
 *might* be possible to stop them from taking my otherwise freely given
 work (the lack of Earth-moving nature of which is entirely irrelevant
 to any such debate) to do it. If such a license would be immaterial
 then so are all other petitions.

 The license I'd love to see would be a Non-Governmental (Personal and
 Private Industry Only). One can crack wise or politicize the idea but
 it is worth bringing up. Whether or not others would honor such a
 license does not mitigate one's attempt to live ethically.


As you may well be aware the Free Software Definition:

http://www.gnu.org/philosophy/free-sw.html

Specifically says that the software should have:


The freedom to run the program, for any purpose.


The Open Source Definition ( http://www.opensource.org/docs/definition.php )
in articles 5 & 6 prohibits discrimination against persons or groups or against
fields of endeavour.

Thus, if you prohibit use of your code by militaries or otherwise government
entities, it won't be free software or open source. Furthermore, your code
will be rendered incompatible with the GPL and similar licences that can only
be linked against a certain subset of such licences. See for example:

http://www.dwheeler.com/essays/gpl-compatible.html

Now, why was free software defined as such that is available to be used for
any purpose? I don't know for sure, but I have my own reasons for that.

Let's suppose you and a few people make your software prohibited for use by
armed forces. Now there are also many anarchists in the world, who dislike
governments, and some of them are going to restrict their software from being
used by governments. Then I would decide that due to the fact I hate racism,
then my software cannot be used for racist purposes. And a bunch of
Antisemites are going to restrict their software from being used by Jews.

As a result, the open-source software world will become fractured by such
restricted software, and people who would like to make use of various pieces
of software for their own use will have to carefully look at all of their
licences for such incompatibilities with their purposes.

Furthermore, let's suppose I'm a consultant who sets up web-sites. I'd like to
write a Content Management System for facilitating my present and future
work. However, since I don't know who my future clients are going to be I
won't be able to use any of this software for fear my future client would be
a military group, a government, a racist person or organisation, a Jew or
someone whose first name starts with the letter S. Eventually, I may have
to implement everything from scratch.


Isn't that the point? If you object to group A then you'll be quite
happy when people who want to work with group A have to implement
everything from scratch. This is exactly what happens if you base your
code on GPL code and then want to turn it into a closed product.

Of course it makes you less likely to receive code contributions from
others but that's obviously the price you're willing to pay for your
politics,

F


As someone wise once commented, "The road to hell is paved with good
intentions", and what I said just proved it.

I find a lot of value in keeping open source software usable by everybody for
every purpose. If you want to make your software unlike this, you have the
right to, but be aware that I and many other people won't get near it with a
ten foot pole, and it won't become part of most distributions, or be used by
most open-source projects. So you'll essentially make it unusable.

So you should choose whether you want to make your software popular, or you
want to prevent its abuse but also prevent almost every legitimate use of
it.

Regards,

Shlomi Fish

-
Shlomi Fish  [EMAIL PROTECTED]
Homepage:http://www.shlomifish.org/

Chuck Norris wrote a complete Perl 6 implementation in a day but then
destroyed all evidence with his bare hands, so no one will know his secrets.



Re: Delete hate speech module

2007-02-08 Thread Fergal Daly

On 08/02/07, imacat [EMAIL PROTECTED] wrote:

On Thu, 8 Feb 2007 01:28:12 -0800
Eric Wilhelm [EMAIL PROTECTED] wrote:
 # from Andy Lester
 # on Wednesday 07 February 2007 10:25 pm:
  I'd just read of Time::Cube, a disjointed rant full of hate speech.
  This is the kind of content that is most deserving of deletion from
  CPAN. Would the responsible parties please go nuke this, please?
 Given that the license does not allow it to live on CPAN, I'd say we
 have to remove it.

Correction: Time::Cubic.

As I'm not a citizen of the U.S., I had no idea about this Time Cube
theory thing till now.  I paid a visit.  Well, even if it came with a
valid open source license, I do not agree it's proper to allow such
hateful words on CPAN.  That is really very bad.

I understand that for some psychos (who may or may not be Time Cube
followers) the best way is to ignore them rather than fight with them.
But since hatred is involved in this Time::Cubic, psycho or not, this
will hurt the public image of CPAN, which many people have worked hard
for a long time to make better.  It would be very bad if Time Cube
followers gathered and planned on killing the Jews or educators on
CPAN under an Artistic license.  CPAN may not always serve the public
interests, but it must not hurt the public, nor become a tool to hurt
the public.

This is only my humble opinion.


While I do agree that this should be taken down since CPAN is
breaching the license, I would point out that it appears to be a joke.
There are several LOLs in the license and the code and the whole
bantown thing seems to be a project to produce amusing but useless
code - an IRC bot that opens a channel, invites people at random and
kicks them out as soon as they join, a program to randomly trash the
registers of a running process. Also any code containing

sub dongers {

has to be a joke. Sadly www.timecube.com, on which this is based, is not as funny,

F



--
Best regards,
imacat ^_*' [EMAIL PROTECTED]
PGP Key: http://www.imacat.idv.tw/me/pgpkey.txt

Woman's Voice News: http://www.wov.idv.tw/
Tavern IMACAT's: http://www.imacat.idv.tw/
TLUG List Manager: http://lists.linux.org.tw/cgi-bin/mailman/listinfo/tlug




Re: James Freeman's other modules (was: Re: CGI::Simple)

2007-01-12 Thread Fergal Daly

Changing the subject from Keenan to Freeman (James Keenan is not MIA),

F

On 12/01/07, Andy Armstrong [EMAIL PROTECTED] wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12 Jan 2007, at 10:16, David Landgren wrote:
 Do we wait until someone else manifests a need to dust off one of
 them to hand over maintenance? Or do we forget about it until next
 time? If it's worth it, then I would volunteer.

Actually I was thinking of volunteering for the whole lot of them -
but then decided that they're probably not that valuable to anyone.

I was also wondering whether - given that backpan exists so people
can always find them if they really want them - there shouldn't be a
mechanism for removing modules that are unloved and unused.

--
Andy Armstrong, hexten.net




Re: Benefits of Test::Exception

2006-12-31 Thread Fergal Daly

On 31/12/06, Paul LeoNerd Evans [EMAIL PROTECTED] wrote:

I recently stumbled upon Test::Exception, and wondered if it might make
my test scripts any better.. So far I'm struggling to see any benefit,
for quite a lot of cost.

Without using this module, my tests look like:

eval { code() };
ok( $@, 'An exception is raised' );

(and possibly either of)
like( $@, qr/some string match/, 'Exception type' );
(or)
ok( $@->isa( 'Thing' ), 'Exception type' );
(to check the type)

Whereas, if I want to use the module, I have to first note that it isn't
standard install, so I should start the test with something like:

eval { require Test::Exception; import Test::Exception; };
my $can_test_exception = $@ ? 0 : 1;

Then each test that might use it should be:

SKIP: {
skip 'No Test::Exception', 1 unless $can_test_exception;

dies_ok( sub { code() },
 'An exception is raised' );
}

So, a lot more code, to achieve the same end result... Plus, I'm now in
the situation where if Test::Exception isn't installed, the test won't
be run at all.


I think the code above should die complaining that dies_ok() is
unknown. So you need to do even more.


Have I missed something here? Does Test::Exception provide me with some
greater functionallity I haven't yet observed? Or should I just not
bother using it?


Don't you get the same problem with any non-standard test module?

If you already have some CPAN dependencies then adding another for
testing is perfectly reasonable. It would be nice if the various CPAN
tools could understand the difference between a runtime dependency and
a test-time one though,

F


Re: Benefits of Test::Exception

2006-12-31 Thread Fergal Daly

On 31/12/06, Paul LeoNerd Evans [EMAIL PROTECTED] wrote:

On Sun, Dec 31, 2006 at 02:13:47AM +, Fergal Daly wrote:
 I think the code above should die complaining that dies_ok() is
 unknown. So you need to do even more.

No it doesn't... This is one of those things about perl - code that
looks like a function call is never checked to see if the function
exists until runtime:

  #!/usr/bin/perl
  use warnings;
  use strict;

  print "Here I have started running now\n";

  foobarsplot();

  ^-- won't complain until runtime.

That's what gave me the motivation to write B::LintSubs, by the way:

  http://search.cpan.org/~pevans/B-LintSubs-0.02/


I just forgot that SKIP actually doesn't execute the code (I was
thinking it just marked the test results as to be ignored).


 Don't you get the same problem with any non-standard test module?

Yes; but Test::More seems to be installed as part of whatever the
testing core is on various things that automatically test my CPAN
modules. I note whenever I upload something, lots of machines around the
world manage to automatically test it. I use Test::More everywhere and
they can cope.


I use whatever test modules I feel like (for example I always use
Test::NoWarnings) and the same machines test my modules without
problems. The automatic testing tools install whatever deps are
necessary (assuming they're listed as deps in Makefile.PL). Are you
seeing brokenness or are you just expecting it?

F


 If you already have some CPAN dependencies then adding another for
 testing is perfectly reasonable. It would be nice if the various CPAN
 tools could understand the difference between a runtime dependency and
 a test-time one though,

EU::MM can't, but I believe Module::Build can. That said, the consensus
on #perl/Freenode is that the latter isn't really ready yet, so just use
the former. Ho hum..
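For reference, Module::Build's build_requires key is the mechanism in
question; a minimal Build.PL sketch, with placeholder module names:

```perl
use Module::Build;

my $build = Module::Build->new(
    module_name    => 'My::Module',          # placeholder
    license        => 'perl',
    requires       => {                      # runtime dependencies
        'Some::Module' => '1.23',
    },
    build_requires => {                      # needed only to build and test
        'Test::More'      => 0,
        'Test::Exception' => 0,
    },
);
$build->create_build_script;
```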

--
Paul LeoNerd Evans

[EMAIL PROTECTED]
ICQ# 4135350   |  Registered Linux# 179460
http://www.leonerd.org.uk/







Re: spamming cpan? was [Fwd: Perl-Freelancer needed]

2006-10-05 Thread Fergal Daly

Yeah, I was thinking of applying exactly because it said in all caps

PLEASE DO NOT APPLY IF YOU PERSONALLY DO NOT FULFILL THIS REQUIREMNT

F


On 05/10/06, Andy Armstrong [EMAIL PROTECTED] wrote:

On 5 Oct 2006, at 16:39, Jonathan Rockway wrote:
 Did anyone else get a message like this via their CPAN alias?  I think
 it's pretty odd that someone would mail me personally with a message
 like this.  Instead, it looks like someone just iterated over their
 local CPAN mirror and sent everyone an e-mail.  If this is the
 case, I'm
 going to report it to spamcop.  If that's not the case, I'm going to
 nicely suggest that they post to jobs.perl.org instead.

Yup, I got it too. The way it's phrased suggested to me that it had
been sent to multiple recipients.

--
Andy Armstrong, hexten.net




Re: CPAN::Forum

2005-02-03 Thread Fergal Daly
There are two useful things that could come from having some PAUSE
interaction

As an author of several modules, I'd like to be able to tick a box that says
"monitor all forums for my modules". Also, it would be nice if users could see
that the author is monitoring a module; it saves having to post a "hey
everybody, I'm monitoring this module" type of message for each one,

Fergal


On Fri, Feb 04, 2005 at 02:40:09AM +0200, Gabor Szabo wrote:
 On Wed, 2 Feb 2005, Nicholas Clark wrote:
 
 The same hack as rt.cpan.org uses - attempt a login on pause.cpan.org
 using the ID and password provided. If PAUSE accepts it, then you know
 it's the real thing.
 
 That would mean my server, if cracked, could be used to collect PAUSE
 passwords. I am not sure I'd like to have that responsibility.
 
 
 I am thinking of allowing users to use a screen-name and if I manage
 to authenticate that you are a PAUSE user (using the suggested
 @cpan.org e-mail trick) then you will be able to use the
 PAUSE::yourname screen name.
 
 Sounds like overcomplicating things.
 
 But it is nearly 3 am.
 
 Gabor
 
 


Re: Let's eliminate the Module List

2004-08-19 Thread Fergal Daly
On Thu, Aug 19, 2004 at 05:24:57PM +0100, Jose Alves de Castro wrote:
 On Thu, 2004-08-19 at 16:47, Christopher Hicks wrote:
  On Thu, 19 Aug 2004, Hugh S. Myers wrote:
  
   It seems to me that ANY thing that contributes to the solution set of 
   'How do I find the module I'm looking for?' needs to be kept until it 
   can be replaced with something of equal or greater value.
  
  search.cpan.org seems to be of greater value than the modules list 
  according to most of the people that have chimed in.
 
 Try asking beginners what they think. I believe it is easier for them to
 look at a long list of modules then searching for a specific one,
 particularly because they often don't know what they should be looking
 for.

The problem is that the list is missing many modules, and in some cases it is
missing the right module for a particular job while listing other, inferior
modules. Since no one is adding to the list, this can only get worse.

 Anyway, I like to have a long list of modules to show my Java friends
 and say "see?"

If we had keywords you could just search on a keyword and show them that
list instead,

F



Re: Future of the Module List

2004-07-20 Thread Fergal Daly
On Tue, Jul 20, 2004 at 06:15:49PM +1200, Sam Vilain wrote:
 I nominate the
 
  Review::*
 
 Namespace for author-submitted module indexes and in-depth reviews, in 
 POD format.  I think this has a number of advantages.  Let's use the 
 infrastructure we already have, no?

Interesting, but what comes after Review::? If it's Review::Text::Balanced,
then how do we get multiple reviews of Text::Balanced? Or are you talking
about something else entirely?

F


Re: META.yml keywords

2004-07-18 Thread Fergal Daly
On Sat, Jul 17, 2004 at 03:40:52PM +0200, A. Pagaltzis wrote:
 Which was exactly the purpose: to be able to make sure that the
 list with official keywords really does only contain official
 keywords, so a release tool can complain about misspellings f.ex.
 If you simply allow both in a single list, then netwrok will go
 unnoticed and make your module invisible to searches with the
 correct keyword.
 
 I don't think the existence of two lists should matter to the
 indexer -- official keywords in the freeform list should have the
 same value as official ones in the fixed keys list. That sort of
 defeats the above point, I guess, but a list for fixed keys only
 still helps those who want its benefits.
 
 It might suffice to have the release tool check the list and tell
 the user which keywords are official and which aren't, but I
 don't know if that is helpful enough -- I personally would like
 to be able to tell it to choke on all mistakes *except* those I
 specifically declared as known non-official ones.

The only benefit I can see is that of spell-checking, and that would be
better done by an actual spell-checker. Isn't it important not to mis-spell
any keywords, regardless of their officialness?

F


Re: META.yml keywords

2004-07-17 Thread Fergal Daly
On Sat, Jul 17, 2004 at 01:32:36PM +0200, A. Pagaltzis wrote:
 * Randy W. Sims [EMAIL PROTECTED] [2004-07-17 12:45]:
  There is, however, another advantage to the catagory approach:
  Searching would likely be more consistent. It would help
  authors to place their modules so that they can be found with
  similar modules. It would also help ensure that users looking
  for a particular type module will get back a result set that is
  likely to contain all/most of the modules of that type.
 
 Why does it have to be either/or?
 
 There could be two keyword lists, one with fixed keywords, and
 the other freeform. Their names would have to be chosen carefully
 to suggest this as the intended use (rather than filling both
 with the same keywords) -- maybe ``keywords'' and
 ``additional_keywords'' or something.

I agree that if there is to be an official list of keywords then it
shouldn't be either/or. The officials haven't regenerated the module list
for 2 years; there's no reason to think that the keyword officials will stay
up to date.

That said, I don't think having 2 lists is useful. The author should supply
a single list of keywords. Those that are on the official list are on the
official list, those that aren't aren't. The search engine/indexer will be
far better at figuring that out than the module author. Otherwise you are
just obliging the authors to keep track of the official list and move
keywords around in their meta info as the official list changes.

It would be up to the search engine to perhaps give more weight to official
keywords. The search engine could also maintain official synonyms so that
postgres and pg are indexed together,

F
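The single-list approach could look like this in META.yml (the keywords
field is hypothetical, not part of the spec at the time, and the values
are invented for illustration):

```yaml
name:     Some-Module
version:  1.23
abstract: A module that talks to PostgreSQL
keywords:              # one flat author-supplied list
  - database
  - postgres           # the indexer could fold in the synonym "pg"
  - sql
```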



Re: Finding prior art Perl modules (was: new module: Time::Seconds::GroupedBy)

2004-07-14 Thread Fergal Daly
On Wed, Jul 14, 2004 at 06:08:16PM +0100, Leon Brocard wrote:
 Simon Cozens sent the following bits through the ether:
 
  The searching in search.cpan.org is, unfortunately, pretty awful. At some
  point I plan to sit down and try using Plucene as a search engine for
  module data.
 
 I thought that would be a good idea too, so I tried it. It works
 *fairly* well.
 
   http://search.cpan.org/dist/CPAN-IndexPod/

Does META.yml have a place for keywords? It would be nice if it did and if
search.cpan.org indexed it. That would mean that it would no longer be
necessary to name your module along the lines of
necessary to name your module along the lines of

XML::HTTP::Network::Daemon::TextProcessing::Business::Papersize::GIS

so that people can find it,

F



Re: Finding prior art Perl modules (was: new module: Time::Seconds::GroupedBy)

2004-07-14 Thread Fergal Daly
On Wed, Jul 14, 2004 at 10:34:08PM +0100, Tim Bunce wrote:
 On Wed, Jul 14, 2004 at 06:30:59PM +0100, Fergal Daly wrote:
  XML::HTTP::Network::Daemon::TextProcessing::Business::Papersize::GIS
  
  so that people can find it,
 
 That's what the Description field is for.

There's a Description field? I accept responsibility for not knowing about
this, I've never made an effort to see what is available. However, if
search.cpan.org had allowed me to search by Description field I probably
would have included one in all of my modules,

F



Re: CPAN Rating

2004-06-16 Thread Fergal Daly
On Wed, Jun 16, 2004 at 12:05:02PM +0100, Nicholas Clark wrote:
 All volunteer organisations work in roughly the same way - if you want to
 get a job done, you have to *start* it yourself. Others may well join in
 and help once they see that it's a good idea, but things don't get started
 because someone would like it.
 
 [This is an oversimplification. You may be able to persuade someone else
 that they also care about it enough to do it. But this is as if that person
 starts on his/her own as above. Likewise someone may be able to get others
 to start a new project for them, but generally they have earned this by
 visibly contributing their own blood sweat and tears to something else
 already.]
 
 No-one is stopping you setting up a ratings system.

Maybe Nadim should simply start the ball rolling by picking an interesting
module and posting a few comments (+ or -) on the list and seeing the
reaction. Of course there may be a problem with the on/off-topicness of that
for the list. Perhaps Simon Cozens's code review list is a better place,
although in these cases the code review would be involuntary, which probably
wasn't what Simon intended.

The alternative is to start a new list but that might have a larger than
normal bootstrapping problem,

F


Re: CPAN Rating

2004-06-16 Thread Fergal Daly
On Wed, Jun 16, 2004 at 06:39:22PM -0300, SilvioCVdeAlmeida wrote:
 Let's write it better:
 1. FORBID any module without a meaningful readme with all its (possibly
 recursive) dependencies, its pod and any other relevant information
 inside.

Having the dependencies easily visible is a good idea, but rather than
banning those modules which don't provide them, it should be done
automatically by the CPAN indexer; all the info is there.

 2. Branch a last-version-only CPAN_modules_by_category, without authors
 folders, a kind of a fast_food_CPAN_modules_by_category.

Could you explain this please? I don't know what you mean.

F


Re: running tests

2004-04-03 Thread Fergal Daly
On Fri, Apr 02, 2004 at 04:59:41PM -0600, Andy Lester wrote:
  Even if you have a smoke bot, you presumably run the tests (depends on the
  size of the suite I suppose) before a checkin and it's convenient to know
  that the first failure message you see is the most relevant (ie at the
  lowest level). Also when running tests interactively it's nice to be able to
  save even 30 seconds by killing the suite if a low level test fails,
 
 Sure, but even better is to run only the tests that need to be run,
 which is a key part of prove.  You can run prove -Mblib t/mytest.t
 instead of the entire make test suite.

If the suite's big enough to warrant a bot then that makes sense but many of
my modules have test suites that complete within a fairly short time.

I tend to run the relevant test until it passes and then run the suite
before checkin. I can pipe the verbose output the whole suite into less and
know that the first failure is probably the most important one.

F



Re: running tests

2004-04-03 Thread Fergal Daly
On Sat, Apr 03, 2004 at 01:37:03AM +0200, Paul Johnson wrote:
 Coming soon to Devel::Cover (well, on my TODO list anyway):
 
  - Provide an optimal test ordering as far as coverage is concerned - ie
tests which provide a large increase in coverage in a short time are
preferred.  There should also be some override to say run these tests
first anyway because they test basic functionality.

For me, the perfect order of display would be:

Coverage A is a subset of Coverage B implies that Test A must be displayed
before Test B. You could call Test A a subtest of Test B.

You then order all the tests by their coverage increase and attempt to
display them in that order (while satisfying the above rule).

This will ensure that low level precedes high level (because the low level
tests will be subsets of the high level ones).

You need to consider subset in terms of packages or modules rather than
functions; otherwise, if lowlevel.t tests func1() and func2() but highlevel1.t
only calls func1() then there is no subset relationship. You also need to
keep your test scripts kind of modular.

On the other hand, if you are trying to save time on your test suite then
the same information as above can be used to cut corners.

You run the tests in coverage-increase order until you have run out of tests
that will increase the coverage, then you stop. The only exception is if a
Test C fails: then you run its largest subtest (Test B), and if Test B fails
then you run Test B's largest subtest, etc., until one of them doesn't fail.
Then you have located the failure as well as you can with the given tests,

F
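The ordering part of the scheme above can be sketched as follows (the
coverage sets here are hand-rolled placeholders; a real implementation
would pull them from Devel::Cover's database):

```perl
use strict;
use warnings;

# Each test script mapped to the set of code units it covers (placeholder data).
my %coverage = (
    'lowlevel.t'  => { map { $_ => 1 } qw(Mod::A) },
    'midlevel.t'  => { map { $_ => 1 } qw(Mod::A Mod::B) },
    'highlevel.t' => { map { $_ => 1 } qw(Mod::A Mod::B Mod::C) },
);

# Test A is a subtest of Test B if A's coverage is a subset of B's.
sub is_subtest {
    my ($a, $b) = @_;
    return !grep { !$coverage{$b}{$_} } keys %{ $coverage{$a} };
}

# Display order: smaller coverage first, so subtests precede their supertests
# (a subset can never be larger than its superset).
my @order = sort { keys %{ $coverage{$a} } <=> keys %{ $coverage{$b} } }
            keys %coverage;

# On a failure in highlevel.t, fall back to its largest strict subtest.
my ($fallback) = sort { keys %{ $coverage{$b} } <=> keys %{ $coverage{$a} } }
                 grep { $_ ne 'highlevel.t' && is_subtest($_, 'highlevel.t') }
                 keys %coverage;
print "$fallback\n";    # midlevel.t
```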


Re: running tests

2004-04-02 Thread Fergal Daly
On Fri, Apr 02, 2004 at 02:51:11PM -0600, Andy Lester wrote:
  coded correctly. So it's desirable to see the results of the lower level
  tests first because running the higer level tests could be a waste of time.
 
 But how often does that happen?  Why bother coding to optimize the
 failures?
 
 Besides, if you have a smokebot to run the tests for you, then you don't
 care how long things take.

It's more the time spent looking at the test results rather than the time
spent running the tests. So actually it's the result presentation order that
matters. Basically you want to consider the failure reports starting from
the lowest level as these may make the higher level failures irrelevant.

The order the tests actually ran in should be irrelevant to the outcome but
if you're running from the command line the run order determines the
presentation order.

Even if you have a smoke bot, you presumably run the tests (depends on the
size of the suite I suppose) before a checkin and it's convenient to know
that the first failure message you see is the most relevant (ie at the
lowest level). Also when running tests interactively it's nice to be able to
save even 30 seconds by killing the suite if a low level test fails,

F


Re: [Fwd: [perl #25268] h2xs does not create VERSION stubs]

2004-02-03 Thread Fergal Daly
I saw that on p5p. It seems to be an idea whose time has come!

John has taken a different approach: A is compatible with B if A >= B (for the 
standard version meaning of >=) and it hasn't been specifically declared 
incompatible.

An upside is that you can give a reason why the current version is not 
compatible with version A.

A downside is that I think negative declarations might be harder to maintain. 
As you make more and more changes you must continually rethink the list of 
versions with which you are incompatible and why.

Making positive declarations means that you can just let your compatibility 
information grow. You don't have to think about anything except the 
difference between your new version and its immediate predecessor,

F

On Tuesday 03 February 2004 05:08, david nicol wrote:
 So here's what I got back from perlbug
 
 
 -- 
 david nicol
  shift back the cost. 
www.pay2send.com
 
 Encapsulated message
 
 
 [perl #25268] h2xs does not create VERSION stubs
 Date: Saturday 02:24:27
 From: John Peacock via RT [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Reply to: [EMAIL PROTECTED]
 
 See CPAN/authors/id/J/JP/JPEACOCK/version-Limit-0.01.tar.gz for a way to
 do this using the version module and Perl 5.8.0+
 
 
 
 End of encapsulated message



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-30 Thread Fergal Daly
On Fri, Jan 30, 2004 at 03:02:53PM +0100, khemir nadim wrote:
 What I meant is that we shouldn't have two ways (and 2 places) of telling
 what we need for our modules to work.

I agree, there should be only one place where Some::Module's compatibility
information is declared. Whether that's in Some::Module's Build.PL or in
Foo/Bar.pm is not really important; what's important is that Some::Module's
developer has to figure out the details, not Some::Module's user.

 Other have pointed some problems with your scheme so I won't repeat them
 here. I understand what you want to achieve and I think it's good but please
 keep it in one place. Can't you coordinate your efforts with Module::Build
 so
 
 # old example
my $build = Module::Build->new
  (
   module_name => 'Foo::Bar',
   license => 'perl',
   requires => {
'perl'   => '5.6.1',
'Some::Module'   => '1.23',
'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
   },
  );
 
 ...
  requires => {
   'perl'   => '5.6.1',
   'Some::Module'   => 'COMPATIBLE_WITH 1.23', # or the like
   'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
  },

There should be no need for COMPATIBLE_WITH.

'Some::Module'   => '1.23'

should work fine with any whacko versioning scheme that anyone ever came up
with, if Module::Build is behaving correctly.

Behaving correctly means letting Some::Module->VERSION decide what is
compatible and what is not. This is the officially documented way to do it.

MakeMaker almost does this. From a quick look at the source, I think
Module::Build definitely doesn't. Instead, it attempts to find the version
by snooping around in the .pm file, so it basically ignores any custom
VERSION method. It should be fairly easy to fix that in both cases.
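For reference, the documented hook works like this: `use Some::Module 1.23`
calls `Some::Module->VERSION(1.23)`, which is expected to die if the
installed version will not do. A module could override it along these lines
(a sketch; the compatibility table is invented for illustration):

```perl
package Some::Module;
use strict;
use warnings;

our $VERSION = '2.0';

# Versions whose interface this release still implements
# (table invented for illustration).
my %compatible_with = map { $_ => 1 } qw(1.23 1.24 2.0);

sub VERSION {
    my ($class, $want) = @_;
    return $VERSION unless defined $want;   # plain Some::Module->VERSION query
    die "$class $want is not compatible with installed $VERSION\n"
        unless $compatible_with{$want};
    return $VERSION;
}

1;
```

With something like this in place, requesting a declared-compatible version
succeeds against the installed 2.0, while any other request dies at compile
time instead of silently loading an incompatible release.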

 Instead for drawing in a new module that most _won't_ use, you make it in
 the main stream new installer.

The new module is designed to make it easy for people to have a custom
VERSION method that does something better than the current default. However,
depending on this new module is obviously a problem - extra dependencies are
no fun for anyone. The next version should help solve that,

F


Re: VERSION as (interface,revision) pair and CPAN++

2004-01-29 Thread Fergal Daly
Hi Nadim,

The difference is that Module::Build forces Foo::Bar's author to work
out what current versions of Some::Module and Other::Module are suitable and
to try to predict what future version will still be compatible. This is time
consuming and error prone (predicting the future isn't easy) and it has to
be done for every module that requires these other modules. In fact I think
most module authors do not test these things thoroughly - I know I don't,
it's just too much of a pain.

If Some::Module and Other::Module used Version::Split for their version
information then Foo::Bar's author could just say "well, I developed it with
Some::Module 1.23 and Other::Module 1.2, so only accept a version that is
declared to be compatible with those."

That way all the work on building the compatibility information is only done
once and it's done by Some::Module and Other::Module's authors, which is
good because they're the people who should know most about their own
modules. Foo::Bar's author never has to change his requires just because
Other::Module 1.9 has been released and works in a different way.

You also get the interesting side effect that if Foo::Bar's tests all pass
when using Some::Module 1.23 and they fail with 1.24 (which has been
declared to be compatible) then both Foo::Bar's and Some::Module's authors
can be informed about it and try to work out who has the bug,

F


On Tue, Jan 27, 2004 at 08:54:47AM +0100, khemir nadim wrote:
 Hmm, isn't that what Module::Build is already offering to you?
   use Module::Build;
   my $build = Module::Build->new
 (
  module_name => 'Foo::Bar',
  license => 'perl',
  requires => {
   'perl'   => '5.6.1',
   'Some::Module'   => '1.23',
   'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
  },
 );
   $build->create_build_script;

 My 2 cents, Nadim.
 
 



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-29 Thread Fergal Daly
Yes, it's confusing; I'm having trouble following bits of it, and I'm sure
anyone else who's actually bothering is too. Hopefully all the confusion will
be gone at the end and only clarity will remain, that or utter confusion; it
could end up either way really.

To see why the current situation is most definitely broken, take the example
of Parse::RecDescent again. 1.90 changed in a fundamental way. Using the
current system, what should the author have done? Calling it 2.0 would be no
good because

use Parse::RecDescent 1.84;

works fine with 2.0 and CPAN.pm would download 2.0 if you told it you need
at least 1.84.
 
The correct thing to do was to release Parse::RecDescent2 v1.0, which means
that CPAN would be cluttered up with copies of modules with numbers on the
end, including perhaps Net::POP32, which might be the 2nd version of
Net::POP3, or the 32nd version of Net::POP, or an implementation of some
future POP32 protocol.

In a serious production environment you should be doing exactly what you do,
but when you want to try out some cool-looking module you shouldn't have to
worry about the entire revision history of all its dependencies and all
their dependencies and so on; it should just work or fail at compile
time,

F

On Wed, Jan 28, 2004 at 11:08:23PM -0500, Lincoln A. Baxter wrote:
 Phew... Only one comment:  KISS (Keep It Simple Stupid)
 
 This is WAY too confusing!  No one will be able to figure it out, or
 want to.  What we have now is not really that broken, especially if one
 regression tests his applications when new versions of modules are
 installed.  
 
 Actually, we build our officially supported perl tree which we deploy to
 all of our boxes, and all of our applications use.  And when we upgrade
 things, we build a whole new tree, which we regression test every
 application with before we roll it into production.
 
 No fancy versioning enumeration scheme can replace this testing, and
 what we have now works well enough (I think). Most module authors I
 think are pretty good about documenting what they change in the Changes
 file. 
 
 Lincoln
 
 
 On Wed, 2004-01-28 at 00:28, David Manura wrote:
  Fergal Daly wrote:
  
   On Saturday 24 January 2004 18:27, David Manura wrote:
   
  (1) All code that works with Version A will also work with subsequent Version B. 
  (e.g. adding new functions)
  
  (2) There exists code that works with Version A but will not work with Version 
  B. (e.g. changing existing function signatures)
  
  (3) There exists code that works with Version A, will not work with Version B, 
  but will work with an even more future Version C.  (probably a rare case)
  
  To handle #1 and #2, we could require all interface version numbers be of the 
  form x.y such that for any two increasing interface numbers x.y and u.v, assertion #1 
  is true iff x=u and v>=y.  Assertion #2 is true iff the opposite is true (i.e. 
  x!=u or v<y).  There is no use for long version numbers as mentioned (e.g. 
  1.2.3.4).
   
   
   I think this might make more sense alright and I'll probably change V::S to work 
   like that.
   However I don't agree with having no use for longer version numbers.
   
   For a start, people do use them and I don't want to cut out something
   people use.
   
   Also, when you have 1.2 and you want to experiment with a new addition
   but you're not sure if you have it right you can release 1.2.1.1 which is
   implicitly compatible with 1.2 . If you then think of a better interface you can
   release 1.2.2.1 which would still be compatible with 1.2 but would have no
   implied relation to 1.2.1.1. You can keep releasing 1.2.x.y until you get to
   say 1.2.6.1 at which point you're happy. Then you can rerelease that as 1.3
   and declare it compatible with 1.2.6.1 .
   
   This lets you have development tracks without having to include lots of
   explicit compatibility relations. Branching and backtracking is an essential
   part of exploring, so supporting it without any effort for the author is good.
   
   So to rephrase, B implements the interface of A (say B => A where =>
   is like implies in maths) if
   
   (
 version_head(A) == version_head(B) and
 version_tail(A) <= version_tail(B)
   )
   or
   (
 version(B) begins with version(A)
   )
   
   where version_head means all except the last number and version_tail means
   the last number
   
   So 1.2 => 1.1, 1.2.1 => 1.2, 1.2.2 => 1.2.1
   2.1 not => 1.1 but you could declare it to be true.
   1.2.2.1 => 1.2 but 1.2.2.1 not => 1.2.1.1
   
   and => is a transitive relation, just like implies in maths, so they
   can be chained together. 1.2.1 => 1.2 and 1.2 => 1.1 means 1.2.1 => 1.1.
   
   So an extension causes an increase, and a branch which can be abandoned
   requires adding 2 more numbers. Actually this is exactly the same as CVS,
   and presumably for the same reason.
  
  
  I'm not sure branching maps cleanly onto the interface versioning scheme as 
  shown above.  Let's say you have

Re: (fast reply please!) Idea for new module: A bridge between Perl and R-project

2004-01-29 Thread Fergal Daly
On Thursday 29 January 2004 19:50, Graciliano M. P. wrote:
 I'm working on a module that makes a bridge between the R-project
 interpreter and Perl. Actually I need to have this done today, so I will
 ask for a fast reply. Thanks in advance.

It would help if we knew what the R-Project was,

F



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-25 Thread Fergal Daly
On Saturday 24 January 2004 18:27, David Manura wrote:
 (1) All code that works with Version A will also work with subsequent Version B. 
 (e.g. adding new functions)
 
 (2) There exists code that works with Version A but will not work with Version 
 B. (e.g. changing existing function signatures)
 
 (3) There exists code that works with Version A, will not work with Version B, 
 but will work with an even more future Version C.  (probably a rare case)
 
 To handle #1 and #2, we could require all interface version numbers be of the 
 form x.y such that for any two increasing interface numbers x.y and u.v, assertion #1 
 is true iff x=u and v>=y.  Assertion #2 is true iff the opposite is true (i.e. 
 x!=u or v<y).  There is no use for long version numbers as mentioned (e.g. 1.2.3.4).

I think this might make more sense alright and I'll probably change V::S to work like 
that.
However I don't agree with having no use for longer version numbers.

For a start, people do use them and I don't want to cut out something
people use.

Also, when you have 1.2 and you want to experiment with a new addition
but you're not sure if you have it right you can release 1.2.1.1 which is
implicitly compatible with 1.2 . If you then think of a better interface you can
release 1.2.2.1 which would still be compatible with 1.2 but would have no
implied relation to 1.2.1.1. You can keep releasing 1.2.x.y until you get to
say 1.2.6.1 at which point you're happy. Then you can rerelease that as 1.3
and declare it compatible with 1.2.6.1 .

This lets you have development tracks without having to include lots of
explicit compatibility relations. Branching and backtracking is an essential
part of exploring, so supporting it without any effort for the author is good.

So to rephrase, B implements the interface of A (say B => A where =>
is like implies in maths) if

(
  version_head(A) == version_head(B) and
  version_tail(A) <= version_tail(B)
)
or
(
version(B) begins with version(A)
)

where version_head means all except the last number and version_tail means
the last number

So 1.2 => 1.1, 1.2.1 => 1.2, 1.2.2 => 1.2.1
2.1 not => 1.1 but you could declare it to be true.
1.2.2.1 => 1.2 but 1.2.2.1 not => 1.2.1.1

and => is a transitive relation, just like implies in maths, so they
can be chained together. 1.2.1 => 1.2 and 1.2 => 1.1 means 1.2.1 => 1.1.

So an extension causes an increase and a branch which can be abandoned
requires adding 2 more numbers. Actually this is exactly the same as CVS
and presumably for the same reason.
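As a sketch in Perl (the function is mine, not from Version::Split or any real module): because => is transitive, the two rules plus chaining collapse into a single check - A's head must match the start of B, and A's last number must be <= the number in the corresponding position of B.

```perl
use strict;
use warnings;

# Does version $b implement the interface of version $a, written "$b => $a"
# above?  This checks the transitive closure of the two rules directly.
sub implies {
    my ($b, $a) = @_;
    my @a = split /\./, $a;
    my @b = split /\./, $b;
    return 0 if @b < @a;            # B can't be shorter than A
    for my $i (0 .. $#a - 1) {      # A's head must match the start of B
        return 0 unless $a[$i] == $b[$i];
    }
    return $a[-1] <= $b[$#a];       # tail(A) <= matching number in B
}
```

So implies("1.2.1", "1.1") holds via the chain 1.2.1 => 1.2 => 1.1, while implies("1.2.2.1", "1.2.1.1") does not.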

> To handle #3, which is more rare under this new proposal, the module probably
> will need to provide a compatibility map as suggested:
>
> use Version::Split qw(
>     2.1 => 1.1
> );
>
> That is, code compatible with 1.1 is compatible with 2.1 but might not be
> compatible with 2.0, such as if 2.0 removed a function present in 1.1 only for it
> to appear in 2.1.  Furthermore, code compatible with 1.2 may or may not be
> compatible with 2.1.  The above use statement would consider them to be
> incompatible, but how would we express compatibility if they are actually
> compatible?  Could we do this?
>
> use Version::Split qw(
>     2.1 => 1.2
> );

> Now, code compatible with 1.2 is known to be compatible with 2.1.  Code
> compatible with 1.1 (or 1.0) is implicitly known to be compatible with 1.2,
> which in turn is known to be compatible with 2.1.  Code known to be compatible
> only with 1.3, however, remains considered incompatible with 2.1.  The above
> does not suggest that code compatible with 2.1 is compatible with 1.2, rather
> the reverse.

Yes. We declare 2.1 => 1.2 and we know 1.2 => 1.1 so we get 2.1 => 1.1 and 1.0,
but we can prove nothing about 2.1 => 1.3. It could be true or false and we're
assuming that if we can't prove it, we don't want it.
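A minimal sketch of how a declared map could combine with the implicit numeric rule (Version::Split is hypothetical here, the %declared table stands in for its use arguments, and the numeric rule is restated so the sketch is self-contained):

```perl
use strict;
use warnings;

# Hypothetical declared compatibility edges, standing in for
#   use Version::Split qw( 2.1 => 1.2 );
my %declared = ( "2.1" => ["1.2"] );

# The implicit numeric rule: $b implements $a if A's head matches the start
# of B and A's last number is <= the corresponding number in B.
sub implies {
    my ($b, $a) = @_;
    my @a = split /\./, $a;
    my @b = split /\./, $b;
    return 0 if @b < @a;
    for my $i (0 .. $#a - 1) {
        return 0 unless $a[$i] == $b[$i];
    }
    return $a[-1] <= $b[$#a];
}

# "=>" is transitive, so chain declared edges with the implicit rule.
# Assumes the declared map is acyclic.
sub compatible {
    my ($new, $old) = @_;
    return 1 if implies($new, $old);
    for my $k (keys %declared) {
        next unless implies($new, $k);           # $new implements $k...
        for my $mid (@{ $declared{$k} }) {
            return 1 if compatible($mid, $old);  # ...and $k is declared => $mid
        }
    }
    return 0;
}
```

With this, compatible("2.1", "1.1") holds (declared 2.1 => 1.2 chained with implicit 1.2 => 1.1) but compatible("2.1", "1.3") does not, matching the reasoning above.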

> > Are you saying that having split our current version number into 2 parts, I
> > should have actually split it into 3? One to indicate the interface, one to
> > indicate the revision and one to indicate how much code changed?
 
> I questioned combining the interface version and amount-of-code-change version
> into one number.  However, could we combine the bug-fix-number and
> amount-of-code-change number?  Are these really different?  A major internal
> refactoring could be fixing bugs even if we never discover them.  It could be
> adding new bugs as well, but bug fixes can also inadvertently introduce new
> bugs.  I propose these two be combined, such as maybe x.y_n, where x.y is the
> refactoring part and n is the bug fix, or maybe just x.y.z to eliminate the
> distinction altogether.
>
> Given a combined refactoring+bugfix number, does the number hold any
> significance?  You would expect 1.2.15 to be more stable than 1.2.14 as it
> probably fixed a bug.  Alternately, it might have made a small change to an
> algorithm--i.e. refactoring.  We don't know.  We would also expect 2.0.1 to be
> better implemented/designed than 1.2.14, as the 2.x effort probably did some

Re: cpan name spaces (was: Re: Re3: Re: How about class Foo {...} definition for Perl? )

2004-01-21 Thread Fergal Daly
On Wed, Jan 21, 2004 at 03:53:34AM -0500, Terrence Brannon wrote:
> I am author/maintainer of the Parse::RecDescent::FAQ - what happened
> vis-a-vis version compatibility? I have been far away from the mechanics
> of Parse::RecDescent for quite awhile.
>
> And yes, please email me something that you want put in there.

From the Changes file:

1.90    Tue Mar 25 01:17:38 2003


- BACKWARDS INCOMPATIBLE CHANGE: The key of an %item entry for
  a repeated subrule now includes the repetition specifier.
  For example, in:

sentence: subject verb word(s)

  the various matched items will be stored in $item{'subject'},
  $item{'verb'}, and $item{'word(s)'} (i.e. *not* in $item{'word'},
  as it would have been in previous versions of the module).
  (thanks Anthony)

F


Re: cpan name spaces (was: Re: Re3: Re: How about class Foo {...} definition for Perl? )

2004-01-21 Thread Fergal Daly
On Tue, Jan 20, 2004 at 11:12:25PM -0600, david nicol wrote:
> Here's a controversial assertion:
>
> Just because Damian Conway does something, that doesn't make it right.

It certainly doesn't but he's not alone in doing it.

Just to come clean, I was never really bitten by the Parse::RecDescent
change. It actually hit me very early on in development of my module so I
just switched to using the 1.9x style without any hassle. But it was over 2 years
between 1.80 and 1.90 so I could have been bitten, and I'd guess a lot of people
were.

> I recommend changing the name of the module when the interface
> changes, for instance I published Net::SMTP::Server::Client2
> instead of hijacking Net::SMTP::Server::Client and producing
> incompatible higher-numbered versions. (internally I've got a Client3
> as well, but it's not going to get published)
>
> In my opinion as soon as he broke compatibility with something that
> people were actually using, he should have changed the name.

That's what's necessary in the current scheme, but good names are in short
supply, so you end up with Client2, Client3, Client3_5 etc, which is not so
nice, especially for things like Net::POP3.

Again, this is the result of gluing 2 strings together without a delimiter. This
also makes it hard for, say, search.cpan.org to make you aware that there is a
Client3 when you're looking at the Client2 page.

A better (IMHO) alternative is to make the interface part of the version
number as important as the name. This is equivalent to including it in the
name except you don't lose information like you do when you just glue a
number on the end of the name. You also get to use '.'s in the version
number because you're not trying to make a valid Perl module name. Then CPAN
and other tools could understand the relationship between different versions
of modules.

Unfortunately, this is the bit I think will never happen, I don't think it
would be possible to convince people that this is worthwhile, possibly
because it's not worthwhile at this late stage.

So in the absence of the full solution perhaps we should urge people
towards sticking interface version numbers in the names of the modules. I've
done it privately too but I'm not convinced that CPAN should be littered
with My::Module, My::Module2, My::Module3 etc.

F


Re: cpan name spaces

2004-01-21 Thread Fergal Daly
On Tue, Jan 20, 2004 at 10:07:43PM -0500, David Manura wrote:
> In consideration of what Fergal said, should every public method or
> function in a module be individually versioned?  So, when I do
>
>   use Text::Balanced qw(extract_multiple extract_codeblock), 1.95;
>
> this could (under new semantics) assert only that those two functions have
> the same interface and expected behavior as the corresponding functions in
> module version 1.95.  If a future version of Text::Balanced (e.g. 1.96)
> adds or changes the interface/behavior of other functions, my code will
> still accept the new module.  Only when extract_multiple or
> extract_codeblock themselves change interface/behavior would my code reject
> a new module version.  There is no need for my code to provide an
> acceptable version range; that is the module's responsibility to deduce.
> (OO-like modules must be handled by a different mechanism.)

It may be worth it in some cases but perhaps if the functions are so
unrelated that they can change independently, they should not be in the same
module. Making Text::Balanced::Multiple::extract() and
Text::Balanced::Codeblock::extract() would then allow you to version them with
the module. There's nothing to stop you still making them available for
export from Text::Balanced.

> Consider further that another author comes out with a module named
> Text::Balanced::Python having the same interface as Text::Balanced 1.95 but
> whose extract_quotelike extracts Pythonic quotes rather than Perl-like
> quotes (i.e. differing behavior).  I haven't considered how useful it would
> be to express this relationship in the versioning metadata, but that might
> be a further direction.  This resembles (OO) interfaces, but I believe the
> versioning considerations make it different.

That is exactly what interfaces in Java etc do. So interfaces are what you
want rather than versions, however it might be useful to be able to specify
the version of the interface.

This is getting very far away from anything that might realistically
happen...

F



Re: HTTP::Parser module

2003-12-14 Thread Fergal Daly
On Saturday 13 December 2003 20:39, David Robins wrote:
> parse() will return:
>  0 on completion of request (call request() to get the request, call data()
>    to get any extra data)
> >0 meaning we want (at least - may want more later if we're using chunked
>    encoding) that many bytes
> -1 meaning we want an indeterminate amount of bytes
> -2 meaning we want (at least) a line of data
> parse() will also accept undef as a parameter

That looks good. Is it ok to give less than n when the parser asks for n?
Also, is it ok to give less than a line when the parser asks for a line? If
not then every client will have to write their own buffering code so they can
build up the necessary length. It would be much better for the parser to
handle that,

F



Re: New module Algorithm::Interval2Prefix

2003-12-01 Thread Fergal Daly
On Sun, Nov 30, 2003 at 05:17:10PM +0100, Lars Thegler wrote:
> Hi all,
>
> I've written a small module, encapsulating an algorithm that can generate a
> set of 'prefixes' (patterns that match the beginning of a numeric string)
> from an 'interval' (range) of integers. This is a problem often occurring
> when working with telephony switching equipment or IP address subnetting.
>
> I've trawled CPAN to locate prior work with similar functionality, but to no
> avail.
>
> The POD is attached below, and the module distfile can be fetched from
>
> http://lars.thegler.dk/perl/Algorithm-Interval2Prefix-0.01.tar.gz
>
> Question: Am I reinventing something here?

I never saw it before.

> Question: Is the namespace appropriate?

Looks ok to me although I'd prefer 'To' rather than '2'.

> Comments on code, style etc are welcome.

> Taking an interval as input, this module will construct the smallest set
> of prefixes, such that all numbers in the interval will match exactly
> one of the prefixes, and no prefix will match a number not in the
> interval.

You need to say something about the length of the number because as it is,
3000-3999 produces just '3' and there are a lot of numbers that start with 3
but which aren't in the interval. In the same vein, a mode that produces
a set of strings like

^3\d{3}$

which can be used directly with Perl's re-engine might be useful or even
something that turns

2900-3999 into a single regex string which will match if and only
if the number is in any of the intervals

^(?:29\d{2}|3\d{3})$

so instead of

    my @p = interval2prefix($lo, $hi);

    my $found = 0;
    foreach my $pref (@p)
    {
        if ($num =~ /^$pref/)
        {
            $found = 1;
            last;
        }
    }

    if ($found)
    {
        # do stuff
    }

you could just do

    my $r = interval2regex($lo, $hi);

    if ($num =~ /$r/)
    {
        # do stuff
    }
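As a sketch of the regex-building step (my own code, not from the module; it assumes the prefix set has already been computed, e.g. by Algorithm::Interval2Prefix - '29' and '3' cover 2900-3999 when the numbers are 4 digits long):

```perl
use strict;
use warnings;

# Join a prefix set into one anchored regex: each prefix is padded out to
# the full number length with \d{...}, then the alternatives are anchored.
my $len      = 4;
my @prefixes = ('29', '3');

my $re = '^(?:'
       . join('|', map { $_ . '\d{' . ($len - length($_)) . '}' } @prefixes)
       . ')$';

# $re is now ^(?:29\d{2}|3\d{3})$
```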

F


Re: name for a module that implements dynamic inheritance?

2003-10-30 Thread Fergal Daly
On Thursday 30 October 2003 18:24, Dave Rolsky wrote:
 Well, sort of.  It messes with the symbol table of the dynamically
 constructed child, which ends up with each parents methods.  I don't
 really want to do that.  I want to be able to have any of the intermediate
 classes call SUPER::foo() and have it do the right thing, which is my
 current stumbling block.

What is "the right thing"? Is it to call foo() in any other package besides
the current one? If so this should be achievable with something like

package BottomOfAll;

sub AUTOLOAD
{
    my ($meth) = $AUTOLOAD =~ /::(.*?)$/;
    my $call_pkg = caller();

    my $pkg = ref $_[0];

    # go through all the parent classes
    my $super;
    for (@{$pkg."::ISA"})
    {
        next if $_ eq $call_pkg; # don't want to end up back in the same method
        last if $_ eq __PACKAGE__; # don't want to end up in the AUTOLOAD again
        last if $super = $_->can($meth);
    }

    goto &$super if $super;

    croak qq{Can't locate object method "SUPER::$meth"};
}

This still has the potential for loops if a::foo and b::foo both call
->SUPER::foo.

Of course "the right thing" could mean something very different...

F



Re: module to access w3c validator

2003-10-30 Thread Fergal Daly
On Thursday 30 October 2003 21:51, Struan Donald wrote:
> > HTML::Validator::W3C
>
> Which is going to get confused with HTML::Validator and also I think
> you need to make sure people know it's a web thing.

Sorry, should have been

HTML::Validate::W3C

that way you're in a clean namespace. I knew one was free and the other 
wasn't, got them mixed up.

You said it wasn't going to be a web thing if the person has it installed
locally, so it's not always webby. Or am I misunderstanding what you meant
when you said it could use a local install of the validator? Maybe you meant
you can point it to a local web server running the scripts? If so then how
about

WebService::*::*::*

where *::*::* uses W3C, HTML and Validate in some order, the only requirement 
being that HTML and Validate are adjacent. 1-dimensional namespaces suck!

> Ah, but there will be. See the initial mail for details.

Since a lot of people have XML modules installed anyway, how about keeping it 
all in one distribution and just disabling the detailed functionality for 
those that don't have the required modules. You can mention, when Makefile.PL 
runs, that they will get the other functions if they install X, Y and Z,

F



Re: sub-packages for object-oriented modules

2003-10-05 Thread Fergal Daly
On Sunday 05 October 2003 17:23, Eric Wilhelm wrote:
> > The following was supposedly scribed by
> > Fergal Daly
> > on Sunday 05 October 2003 06:54 am:
>
> > That said, having a single package so full of stuff that you need to split
> > it into sub files is often an indicator that you're doing way too much in
> > one package anyway. It's possible you could benefit from mixin classes.
> > That is, classes which contain only methods, and these methods make very few
> > assumptions about their $self object.
>
> I do have Get and Set methods which would allow functions like Move() to
> operate without directly accessing the data structure, but they would still
> have to know about the per-entity data structure (i.e. I could later change
> where the entity is stored in the object, but the functions need to know
> about some of the details of the entity.)  Is this enough separation?

It's not so much a question of "is it enough?", it's more "is it useful?".
Mixin classes are useful in a situation where you have several different
classes which share a set of methods but do not inherit from a common
ancestor. For instance if you have Array and IndexedHash (a hash that can
also be used like an Array) then if they have the same interface, you could
write a mixin class Sortable and get both of them to inherit from it and
voila, you get 2 sort methods for the price of 1.
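A minimal sketch of the mixin idea (all class names and the get/set/size interface are invented for illustration):

```perl
use strict;
use warnings;

# The mixin: contains only methods and assumes only that $self provides
# get($i), set($i, $v) and size().
package Sortable;
sub sort_in_place {
    my ($self) = @_;
    my @vals = sort { $a <=> $b } map { $self->get($_) } 0 .. $self->size - 1;
    $self->set($_, $vals[$_]) for 0 .. $#vals;
    return $self;
}

# A hypothetical consumer: any class with the same interface (an IndexedHash,
# say) could inherit from Sortable in exactly the same way.
package MyArray;
our @ISA = ('Sortable');
sub new  { my ($class, @data) = @_; bless { data => [@data] }, $class }
sub get  { $_[0]{data}[ $_[1] ] }
sub set  { $_[0]{data}[ $_[1] ] = $_[2] }
sub size { scalar @{ $_[0]{data} } }
```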

I'm not sure if this is relevant to your situation (I suspect not as you seem 
to only have 1 class of objects that you work with).

F



Re: what to do with dead camels ?

2003-08-04 Thread Fergal Daly
On Sunday 03 August 2003 17:45, Andy Lester wrote:
 There's a distro on CPAN now called lcwa that I would love to see
 disappear.  It's from 1997 and it's one of those distros that
 included all its necessary parts rather than rely on depencies.
 Unfortunately, those parts are 6 years out of date, but come up in
 searches on the modules.

 Do a search on search.cpan.org for HTTP::Response, a pretty common
 module.  The first hit that comes up is the one from lcwa, and if
 you're not paying attention to the distro name (or you're a relative
 newbie who doesn't realize he needs to), you're going to be looking
 at 6-year-old docs for the module.

Try Test::More, its true home is Test::Simple but that's 5th on the list.

Can I suggest a change to the sorting algorithm for search.cpan.org when
searching for a module or for docs:

@sorted_distros = sort {
    $a->oldest_version->release_date <=>
    $b->oldest_version->release_date
} all_distros_containing("Module::Name");

Because chances are that if Distro::A includes a piece of Distro::B then
Distro::B probably predates Distro::A. Of course that's not necessarily true,
the opposite is quite possible, but that should be comparatively rare.

I think it doesn't fully solve the problem for Test::More but it might for 
some others
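With hypothetical records (the names and dates below are invented for illustration), the suggested sort might look like:

```perl
use strict;
use warnings;

# Each record carries the release date of the distro's *oldest* version;
# ISO-8601 date strings compare correctly with plain string comparison.
my @distros = (
    { name => "lcwa",        oldest_release => "1997-10-05" },
    { name => "Test-Simple", oldest_release => "2001-03-02" },
    { name => "libwww-perl", oldest_release => "1996-01-25" },
);

# Oldest first: the distro that has carried the module longest floats to
# the top of the search results.
my @sorted = sort {
    $a->{oldest_release} cmp $b->{oldest_release}
} @distros;
```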

F



Re: Test::Deep namespace

2003-06-20 Thread Fergal Daly
On Friday 20 June 2003 20:21, Ken Williams wrote:
 Second, I find it very confusing that all these different capabilities 
 are happening inside one cmp_deeply() function.  In Perl it's much more 
 common to use the function/operator to indicate how comparisons will be 
 done - for example, = vs. cmp, or == vs. eq.  I would much rather see 
 these things broken up into their own functions.

I had a hard time trying to document this module and I wasn't sure I did a
good job; now I'm certain I didn't! I hope I can explain in this email. It's
a bit long but I hope you will see at the end that your comments are based on
a misunderstanding of what Test::Deep does. I'd really appreciate it if you
could tell me whether it makes any sense to you, because if it makes no sense
at all I don't want to alienate users just because my docs are unintelligible.
As a bonus, since you're @mathforum.org I'll throw in some non-well-founded
set theory near the end ;-)

First off, the Test::Deep functions set(), bool(), bag(), re() etc are not
comparison functions, they are shortcuts to Test::Deep::Set->new,
Test::Deep::Bool->new, Test::Deep::Bag->new, Test::Deep::Regex->new. The
objects they return act as markers to tell Test::Deep that, at this point of
the comparison, it should stop doing a simple comparison and hand over control
to Test::Deep::Whatever.

There's nothing you can do in a regular expression that you can't do with substr
and eq, but regular expressions allow you to express complex tests in a simple
form. That is the goal of Test::Deep. Perl has regexes that operate on a
linear array of characters; Test::Deep supplies regular expressions that
operate on an arbitrarily complicated graph of data, and just as a regex often
looks like the strings it will match, a Test::Deep structure should look like
the structure it will match.

What's wrong with using Test::More::is_deeply()? Well, is_deeply is just the
complex-structure equivalent of eq for strings. is_deeply checks that two
data structures are identical. What do you do if part of the
structure you're testing is unpredictable? Maybe it comes from an outside
source that your test script can't control, maybe it's an array in an
undetermined order or maybe it contains an object from another module - you
don't want your test to look inside other modules' objects because you have
no way of telling if it's right or wrong. In these cases is_deeply() will
fail and so is no use. Test::Deep::cmp_deeply() has a varying definition of
equality and so can perform tests that is_deeply can't.

Time for some examples.

Simple string case: Say you want to test a string that is returned from the
function fn(). You know it should be "big john". So you do

Test::More::is(fn(), "big john", "string ok");

Messy string case: Things change, now fn() returns a string that contains "big
john" and some other stuff, you can't be sure what the other stuff is, all
you know is that the string should be a number, followed by "big john",
possibly followed by some other stuff. No problem

Test::More::like(fn(), qr/^\d+big john.*/, "string ok");

Now imagine that you have a function that returns a hash

Simple structure case: you want to test that fn() returns

{
    age => 34,
    id => "big john",
    cars => ['toyota', 'fiat', 'citroen'],
    details => [...], # some horrible complicated object
}

Test::More::is_deeply(fn(),
    {
        age => 34,
        id => "big john",
        cars => ['toyota', 'fiat', 'citroen'],
        details => [...], # some horrible complicated object
    }
);

Messy structure case: same as above but say now the id is no longer simply
"big john", it's the same messy thing we talked about in the messy string
case, and say you're no longer guaranteed that the cars will come back in any
particular order because they're coming from an unordered SQL query.

Test::More::is_deeply is no good now as it needs exact equality. You could write

my $hash = fn();
is($hash->{age}, 34);
like($hash->{id}, qr/^\d+big john.*/);
is_deeply([sort @{$hash->{cars}}], ['citroen', 'fiat', 'toyota']);
is_deeply($hash->{details}, [...]);
is(scalar keys %$hash, 4);

but you'd be so wrong because you've also got to check that all your
refs are defined before you go derefing them, so here's the full ugliness you
really need

if( is(Scalar::Util::reftype($hash), "HASH") )
{
    is($hash->{age}, 34);
    like($hash->{id}, qr/^\d+big john.*/);

    if( is(Scalar::Util::reftype($hash->{cars}), "ARRAY") )
    {
        is_deeply([sort @{$hash->{cars}}], ['citroen', 'fiat', 'toyota']);
    }
    else
    {
        fail("no array");
    }
    if( is(Scalar::Util::reftype($hash->{details}), "ARRAY") )
    {
        is_deeply($hash->{details}, [...]);
    }
    else
    {
        fail("no array");
    }
}
else
{
    for (1..6) # cos we don't want to mess up the plan!
{

Re: Test::Deep namespace

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 15:24, Andy Lester wrote:
> It would be nice if the functions ended in _ok, so it's clear that
> they are actually outputting and not just returning booleans.

There is only 1 function really, all the rest are shortcuts to the 
constructors of various plugins. I suppose I could call it cmp_deeply_ok. Not 
sure if I like that too much though.

> I think that Test::Data might be a better place for them, somehow.
> I'm maintaining brian d foy's Test::Data:: hierarchy, so maybe we can
> figure something out.

Test::Data takes a totally different approach. With Test::Data::Hash you'd do
something like

hash_value_false_ok("key1", $hash);
hash_value_true_ok("key2", $hash);
hash_value_false_ok("key3", $hash);
hash_value_true_ok("key4", $hash);

with Test::Deep you'd do

cmp_deeply($hash,
    {
        key1 => bool(0),
        key2 => bool(1),
        key3 => bool(0),
        key4 => bool(1),
    }
);

You build a structure that looks like the result you're expecting, except that
sometimes instead of simple values you have special comparators.

You can also do this

my $is_person = all(
    isa("Person"),
    methods(
        getName => re(qr/^\w+\s+\w+$/),
        getMaritalStatus => any("single", "married"),
    ),
);

my $is_company = all(
    isa("Company"),
    methods(
        getName => re(qr/\w/),
        getCEO => $is_person,
        getDirectors => all($is_person),
    ),
);

cmp_deeply([EMAIL PROTECTED], all($is_company));

You can also make your definitions available to other modules so that when 
they run their tests they can check that they are getting good values back 
from you. It'd be nice to put this in the test code for my fictitious log 
handler,

use IO::File::Test qw( $opened_fh );

my $log_handler = Log->new($test_file);

cmp_deeply($log_handler,
    methods(
        getFileName => $test_file,
        getEntriesCount => 0,
        getFH => $opened_fh,
    )
);

and that'll make sure that my file was opened correctly along with various 
other relevant tests,

F



Re: UDPM name space finalization

2003-06-01 Thread Fergal Daly
Sorry to bring this up again, I should have chased it more the last time, but
what exactly is UNIXy about this module?

The reason given previously was that all the dialog programs run on UNIX. That
seems fairly incidental; it's not like there can't be dialog programs for
Windows, Mac, Amiga etc and quite possibly there are. I presume if The Gimp
can be compiled on Windows then surely gDialog could be, and KDialog could
probably be ported easily as it's based on the QT toolkit.

If I was searching for a dialog module on CPAN, "unix" would not be one of my
search terms, and if someone ever does write a backend for a Windows dialog
program then anyone who tries to find it could be confused by the "UNIX" and
assume it won't work under Windows.

I just don't see any fundamental UNIX connection. Is there a reason why this 
module could never work on anything else?

UI::Dialog::* seems like a much more apt prefix and as someone pointed out in 
another thread, there's nothing wrong with starting a new toplevel namespace 
as long as it makes sense and you don't hog the whole thing,

F