Re: =head1 SEEN ALSO BY

2015-04-03 Thread Jonathan Yu
Hey Paul,

This seems like an abuse of POD to me. Disadvantages include:

1. Someone can upload junk modules that force yours to link to it (this may
not be a problem if users are clear that mention of a module in perldoc is
not an endorsement); and
2. There's no support for this behaviour in perldoc or other tools; and
3. You have a similar problem if you develop a module that can optionally
be used with other modules. Granted, this seems much less common than your
proposed use case.

It seems more appropriate to allow module authors to annotate their
MetaCPAN module pages somehow, but then you run the risk of the annotations
becoming stale compared to the code, or having to manually re-do the
annotations each time you upgrade, or something like that. Are there any
insights that can be gleaned from AnnoCPAN?

Considering your specific use case, I also wonder if something like
javadoc's direct known subclasses annotation is possible (it seems this
may not be solvable in general due to the way Perl OO works, but maybe
there are patterns such that we can develop a good enough heuristic.)

An alternative (which can be done today) is to just link to a MetaCPAN
search covering that namespace; in your example, a search for
everything under Tickit::Widget.
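
For illustration, a hedged sketch of what that could look like in the base
module's POD (the search URL form is an assumption about MetaCPAN's query
interface, not something from the thread):

    =head1 SEE ALSO

    Widget implementations live under the C<Tickit::Widget::> namespace; see
    L<https://metacpan.org/search?q=Tickit%3A%3AWidget> for a current listing.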

Cheers,

Jonathan

On Tue, Mar 31, 2015 at 12:29 PM, Paul LeoNerd Evans 
leon...@leonerd.org.uk wrote:

 Random musings from #perl on Freenode:

 If I write a base module that's intended for expansion/extension (such
 as Tickit), and then write lots and lots of extensions (see: the entire
 Tickit::Widget:: namespace), it would be nice as an end-user feature
 browsing module documentation, to be able to see a list of all those
 widgets when looking at the base. Sometimes module developers use a

   =head1 SEE ALSO

 However, for developers that's a terrible solution. It means the author
 of the base has to list all the extensions, which will naturally grow
 over time. As the base becomes more stable, it increasingly means the
 base does documentation-only releases that just add to that list.
 Also it's hard for third-parties to add it there.

 I wonder if therefore, metacpan needs some sort of back-reference
 ability? That if an extension module could somehow list

   =head1 SEEN ALSO BY

   L<Tickit>

 then such a module declaring that would *automatically* get listed
 somehow in some automatically-generated and (near)-realtime updated
 part of the Tickit documentation when viewed via metacpan.

 Does this sound like something that might gain traction?

 --
 Paul LeoNerd Evans

 leon...@leonerd.org.uk
 http://www.leonerd.org.uk/  |  https://metacpan.org/author/PEVANS



Re: linux packaging for Alien::XXX modules

2014-01-19 Thread Jonathan Yu
Hey Chris,

I think this is actually the usual approach, although it does have a
cost for distributions, as an extra package must be built separately.
The costs fall on build infrastructure (more packages need to be
built), on developer workload, and on mirrors (bandwidth). As an example,
there are 3 packages in Debian [0] that match libalien-(.*)-perl -
the small number of packages means that in practice, the cost isn't
too bad. Someone once pointed out to me that Debian in particular is
sometimes used in embedded computing scenarios, and that each package
increases the metadata in the Debian packages list, which must be
stored whether or not the package is installed (this is how Debian
identifies available packages). For each package, it's probably not a
big deal, but in the aggregate, it means more stuff can't be installed
or won't fit on a given CD or DVD.

It would be nice if we had a better solution (one that did not require
the cost of a dummy Alien module, which results in a pretty
empty-looking package on distributions with binary-only packages, such
as Debian). But, again, this seems to have been the usual approach we
followed historically.

I suppose one option is to have something similar to (or perhaps even
as an extension of) Module::Install, where the necessary code to find
a module is copied into inc/ and thus available at build time, but not
required to be installed. This code could then put the linker options
somewhere where the Perl module could pick them up later. I suppose
there are other options as well, like a shared library that can do
this sort of thing, I don't know. But given that that does not seem to
be a readily available option at the moment, I would suggest that yes,
you proceed with an Alien module. I would, however, suggest that you
try to coordinate with downstream distributions where you can/have
interest, as you might be able to make some simple changes now to make
their lives easier. Debian in particular has a few PDL modules in use
[1], though I'm not sure if your change would affect it.
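
To make that concrete, a rough, hypothetical sketch of the shape such an
Alien module might take; the package and method names (cflags/libs) are
illustrative, not any real distribution's API, and the flags would normally
be discovered when the Alien module itself is built rather than hard-coded:

    package Alien::PROJ4;          # hypothetical
    use strict;
    use warnings;

    # In practice these would be probed (pkg-config, header search, etc.)
    # at Alien::PROJ4 build time, saved, and returned here.
    sub cflags { '-I/usr/include' }
    sub libs   { '-lproj' }

    1;

    # A consuming XS module's Makefile.PL could then do something like:
    #   use Alien::PROJ4;
    #   WriteMakefile(
    #       NAME => 'PDL::GIS::Proj',           # illustrative
    #       INC  => Alien::PROJ4->cflags,
    #       LIBS => [ Alien::PROJ4->libs ],
    #   );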

Cheers,

Jonathan

[0] http://packages.debian.org/search?keywords=libalien
[1] http://packages.debian.org/search?keywords=libpdl

On Sun, Jan 19, 2014 at 4:25 PM, Chris Marshall devel.chm...@gmail.com wrote:
 I would like to factor out the explicit detection and configuration of
 the PDL build process on external libraries (such as PROJ4, HDF5,
 FFTW3,...) into corresponding Alien::PROJ4 or similar distributions.
 The job of these Alien::XXX modules is to check for XXX, or install
 XXX, and provide configuration to the perl module that does a 'use
 Alien::XXX'.  The Alien::XXX module would then provide the needed
 information on how to build against the XXX dependency for the perl
 module (usually an XS based module).

 It seems to me that the logical thing would be to have linux package
 dependencies for PDL on Alien::XXX and then the package for Alien::XXX
 would have a dependency on the underlying XXX library/dependency.
 This would be for a binary install.  Is this a reasonable approach
 from the considerations of linux packaging?

 --Chris


Re: Request for feedback: interface to 06perms.txt

2012-10-15 Thread Jonathan Yu
On Mon, Oct 15, 2012 at 6:59 AM, David Cantrell da...@cantrell.org.ukwrote:

 However, I'm a bit lairy of "tests should not contact remote systems at
 all".


For what it's worth, Debian packagers prefer to have an environment
variable (e.g. TEST_INTERNET or HAS_INTERNET) be set before running any tests
that explicitly require a remote connection. This is because Debian's build
servers don't have Internet access; potentially some of the CPAN Testers
servers fall under the same category.

I would therefore suggest either disabling those tests unless
TEST_INTERNET/HAS_INTERNET is set, or using a shipped copy of that file for
tests and only updating the file if TEST_INTERNET/HAS_INTERNET is set.

This also allows developers working on the package to run the full battery
of tests (including those requiring Internet access) without requiring
build servers to have connectivity.
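
A minimal sketch of such a guard in a test file, using Test::More and
TEST_INTERNET as the variable name (the exact name is just a convention):

    use strict;
    use warnings;
    use Test::More;

    plan skip_all => 'Network tests disabled; set TEST_INTERNET=1 to enable'
        unless $ENV{TEST_INTERNET};

    # ... tests that contact remote systems go here ...
    ok(1, 'placeholder for a test needing Internet access');
    done_testing();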


Re: The CPAN Morass

2011-12-05 Thread Jonathan Yu
What, no mention of LLVM/Clang? :-(

I have been meaning to try that myself.

I have also had great success using TCC (Tiny C Compiler) in the past,
which does x86 compilation.

Cheers,

Jonathan

On Mon, Dec 5, 2011 at 2:29 PM, Nicholas Clark n...@ccl4.org wrote:

 On Mon, Dec 05, 2011 at 09:42:30AM -0800, Linda W wrote:

 
  The assertion was that such a thing does not.  It is incumbent
  upon you, who want to refute that
  assertion, to provide at least 1 example to disprove the general
  assertion.   Claiming it is a research
  opportunity (because you don't know of any) is what i would expect of
  the average person who cannot refute my stated position.   Is that your
  final answer?  ;-)
 
  Or would you like to get serious?

 Intel's icc is available for Linux (for x86 and x86_64, I assume)
 Sun's compiler is available for Linux (just for x86 and x86_64, I think)
 I've used lcc on Linux
 I've not tried clang on Linux

 That's 4 without trying, all of which I believe can be used in some cases
 without payment.

 Nicholas Clark



Re: The CPAN Covenant

2011-11-27 Thread Jonathan Yu
I try to stay out of these discussions (*cough* flame wars), but here
goes...

The relationship between CPAN and the Developer is bi-directional, no doubt.

As an organization, CPAN can, if it chooses, decide to do what you are
suggesting: remove software or boot a Developer. However, I do not believe
it is in CPAN's interest to do so, since it would reduce the utility of
CPAN as a whole - the point of CPAN, and the advantage it confers to Perl
over other languages, is a central location for all software that is
installable in a very straightforward manner. This is something that Java
is only recently starting to attempt to replicate (e.g. with Maven).

On the other hand, a Developer, as he/she chooses, can decide to accept or
reject patches, etc. The nature of open source is that Developers are often
too busy with other work (read: putting food on the table) to perhaps
commit to tracking down some very difficult-to-find (or even
unreproducible) bugs. On the other hand, as a user, you can leverage the
advantage of open source - the freedom to see and to modify the source code
- to benefit both yourself, and the community-at-large, by submitting
patches.

If you feel that the patches are taking too long to be applied, or they are
being rejected outright, it is your prerogative to fork the project. There
are many examples of this on CPAN already - e.g.
MooseX::Types::DateTime::ButMaintained. It should also be noted that some
projects that appear to have been abandoned have had their maintainership
rights assigned to others, who have thus taken over maintenance of a
package. You may contact the PAUSE Administrators to discuss these cases.

There are many other options that do not involve flaming the developer, who
contributed their work to the CPAN for the public interest, free of charge.
Should software be buggy? Of course not. But life is not perfect, and there
are always trade-offs to be made. At this time, I feel that I would be
remiss if I didn't also note that David Cantrell is a rather prolific CPAN
developer - his work is used by many other libraries, including some that
are packaged in Debian.

On the other hand, I guess your statements are meant more generally rather
than levelled specifically against Mr. Cantrell, in which case, I will say
that in general, open source software provides you, the user, the freedom
to:

1. Fork the project
2. Bribe the developer to fix it
3. Choose another suitable library
4. Complain on mailing lists
5. Fix the problem yourself, submit patches (possibly also #1 if upstream
is unresponsive)

I should note that trying #4 *might* work, but will *probably* alienate the
upstream developer and other developers as well. It certainly does not
serve your cause. If you are using a CPAN library and making money from it,
then I think it is fair that the upstream developers should be compensated
for their work - either by you paying them to prioritize your required
bugfixes, or by you contributing your work back to them (and the community)
in the form of patches.

Note that most licenses mention that the software is provided AS IS,
without any warranties, express or implied, INCLUDING fitness for a
particular purpose. That means that open source software might well not do
as advertised. It is the risk you take when you decide to benefit from
software that has been provided to the community in the public interest.

TL;DR: I am not aware of any case on CPAN where an author is intentionally,
and malevolently, attempting to sucker you into paying to make it work.

Cheers,

Jonathan

On Mon, Nov 28, 2011 at 1:14 AM, Linda W perl-didd...@tlinx.org wrote:

 David Cantrell wrote:

  On Thu, Nov 24, 2011 at 08:09:35AM -0800, Linda W wrote:

  Don't have to touch their code,... but if we want CPAN to be able to
 be relied upon.. it's can't have unaddressed bugs for months (let alone
 a year or more)...


 I promise to address bug reports quickly if you make them more important
 than everything else in my life.


 ---
If that's how unimportant CPAN is to you, then I'm sure you
 will have no problem if someone takes over maintenance of the module.

If it is a problem, maybe it will 'become more important to deal
 with that than everything else in your life...


  You can do that by paying me.


 ---
Well, i certainly can't enforce this, but whoever sponsors CPAN,
 if they want it to stay alive, ***IS*** paying you for the privilege of
 putting your code on CPAN and have reserved some name.

If you don't want to maintain your code on CPAN, then I would
 say the default policy should be to no longer 'pay you', in the currency
 you are receiving.

Oh, you want more than permanent fame on CPAN (if your code is
 good working), ... OTOH, if your code is doggie doodoo, and basically
 a way to sucker people in to pay you to make it work, um...   don't
 think that's what CPAN was designed to be  but things do change...
 and money corrupts 

Re: Is rt.cpan.org down?

2011-08-14 Thread Jonathan Yu
On Sat, Aug 13, 2011 at 10:08 AM, Shawn H Corey shawnhco...@gmail.com wrote:
 My browser just times out.  Is the site down?

For future reference, this site is handy for this purpose:
http://www.downforeveryoneorjustme.com/


Re: Is rt.cpan.org down?

2011-08-14 Thread Jonathan Yu
 I know it's down, the real question is has anyone heard why and how soon may
 it be back?

Oddly enough, the web site (downforeveryone...) says that the site is
down; however, I can access it.

Pinging rt.cpan.org [207.171.7.181] with 32 bytes of data:

Request timed out.
Reply from 207.171.7.181: bytes=32 time=92ms TTL=50
Reply from 207.171.7.181: bytes=32 time=93ms TTL=50

Ping statistics for 207.171.7.181:
Packets: Sent = 3, Received = 2, Lost = 1 (33% loss),
Approximate round trip times in milli-seconds:
Minimum = 92ms, Maximum = 93ms, Average = 92ms

Cheers,

Jonathan


Re: Config.pm and Makefile.PL

2011-03-22 Thread Jonathan Yu
 Does anyone have experience using elements from Config.pm to set up
 Makefile.PL for multi-system installation?

This happens with almost every large library-based module.

I'm not sure if you are asking about Makefile.PL as in
ExtUtils::MakeMaker or Module::Install based distributions. I don't
use either of those directly. I do use EUMM-based Makefile.PL's via
Dist::Zilla for some modules, but not any of the ones complicated
enough to require dealing with Config.pm.

I did this for Math::Random::ISAAC::XS:

http://cpansearch.perl.org/src/JAWNSY/Math-Random-ISAAC-XS-1.004/Build.PL

It was just to determine the correct compiler flags to use. Without
more details I'm afraid I can't be more helpful.

Cheers,

Jonathan


Re: my $self = shift

2010-09-13 Thread Jonathan Yu
On Mon, Sep 13, 2010 at 11:48 AM, Ryan Voots simcop2...@simcop2387.info wrote:
 sub foo {
  my $foo = shift;
  $foo = bar; }

 would not doing a copy for shift like that cause it to act like

 sub foo {$_[0] = bar} does?


Well, you know what they say: Try It And See.


Re: pause.perl.org: cert expired

2010-05-15 Thread Jonathan Yu
Hi Jim,

I just checked by logging into PAUSE now, using Windows XP and Firefox
3.5-ish. No warning or problems on my end. I checked the certificate
details, and they say:

Valid from:
11-05-2010 18:18.36
(11-05-2010 22:18.36 GMT)

Until:
10-05-2012 18:18.36
(10-05-2012 22:18.36 GMT)

Since things look fine from here, perhaps check the time and date
settings on your system.

Cheers,

Jonathan

On Fri, May 14, 2010 at 9:54 PM, James E Keenan jk...@verizon.net wrote:
 pause.perl.org
 Issued by: CAcert Class 3 Root
 Expires: May 10, 2012 6:18:36 PM EDT

 Who should be notified?

 Thank you very much.
 Jim Keenan



Re: Trimming the CPAN - Automatic Purging

2010-03-28 Thread Jonathan Yu
On Sun, Mar 28, 2010 at 12:55 PM, Dana Hudes dhu...@hudes.org wrote:
 But you can't use CPAN.pm on the Backpan.
Can't you? It's just a mirror, so if you point CPAN.pm to the backpan,
you should be able to install packages from there (though to get the
version you want you'll need to specify the author/package name
manually I think).

Of course, I've never done this myself, so I could be mistaken.
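
Roughly, in the CPAN shell, it might look like this (untested, per the
caveat above; the distribution path is made up, backpan.perl.org is the
usual BACKPAN mirror):

    o conf urllist push http://backpan.perl.org/
    o conf commit
    install S/SO/SOMEAUTHOR/Some-Dist-1.23.tar.gz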

 --Original Message--
 From: Shlomi Fish
 To: module-authors@perl.org
 Cc: dhu...@hudes.org
 Sent: Mar 28, 2010 11:31 AM
 Subject: Re: Trimming the CPAN - Automatic Purging

 On Sunday 28 Mar 2010 17:28:48 dhu...@hudes.org wrote:
 The entire point of rsync is to send only changes.
 Therefore once your mirror initially syncs the old versions of modules is
 not the issue. Indeed, removing the old versions would present additional
 burden on synchronization! The ongoing burden is the ever-growing CPAN.

 The danger in a CPAN::Mini and in removing old versions is that one is
 assuming that the latest and greatest is the one to use. This is false.
 Take the case of someone running old software. I personally support
 systems still running Informix Dynamic Server 7.31 as well as systems
 running the latest IDS 11.5 build. We have Perl code that talks to IDS. If
 DBD::Informix withdrew support for IDS 7.31 I would need both the last
 version that supported it as well as the current.  I can get away with
 upgrading Perl, maybe, but to upgrade the dbms is much more problematic
 (license, for one thing; SQL changes another).

 You can always get the old versions from the Backpan, which keeps all
 historical versions - so it's a non-issue.

 Regards,

        Shlomi Fish

 --
 -
 Shlomi Fish       http://www.shlomifish.org/
 Best Introductory Programming Language - http://shlom.in/intro-lang

 Deletionists delete Wikipedia articles that they consider lame.
 Chuck Norris deletes deletionists whom he considers lame.

 Please reply to list if it's a mailing list post - http://shlom.in/reply .


 Sent from my BlackBerry® smartphone with Nextel Direct Connect


Re: Q about prerequisites

2010-03-25 Thread Jonathan Yu
Craig,

I haven't looked into your issue specifically, but this looks like it
may be related to older CPAN.pm's which did not honour build_requires
and configure_requires. Consequently, you probably have to do some
manual checking, though hopefully someone more experienced with doing
that sort of thing will speak up...
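
For what it's worth, a belt-and-braces Makefile.PL sketch that tends to work
with older clients: list the module the tests actually load (LWP::UserAgent,
as in option 3 below), and duplicate it in PREREQ_PM for toolchains that
ignore BUILD_REQUIRES. The distribution name is hypothetical:

    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME           => 'My::Module',          # hypothetical
        # Declare what the tests actually use...
        BUILD_REQUIRES => {
            'LWP::UserAgent' => 0,
            'Test::More'     => 0,
        },
        # ...and repeat it here so older CPAN.pm / ExtUtils::MakeMaker,
        # which don't honour BUILD_REQUIRES, still install it.
        PREREQ_PM      => {
            'LWP::UserAgent' => 0,
        },
    );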

On Thu, Mar 25, 2010 at 12:58 PM,  cr...@animalhead.com wrote:
 I added LWP::UserAgent to a test, and thought I had it covered
 by adding LWP to Makefile.PL like this:
      'BUILD_REQUIRES' => {'LWP'           => 5.834,
                           'Test::More'    => 0},
 One of many smoke-test systems has a problem with this.
 http://www.cpantesters.org/cpan/report/6979525
 The report says "Can't locate LWP/UserAgent.pm in @INC"
 Which is the best way to fix this?
 1. Change 'LWP' to 'LWP::UserAgent' in BUILD_REQUIRES
 2. Add 'LWP' to PREREQ_PM
 3. Add 'LWP::UserAgent' to PREREQ_PM
 4. Add 'LWP::UserAgent' and 'Test::More' to PREREQ_PM
 5. #4 plus delete BUILD_REQUIRES
 It seems we have 3 categories: config_requires, build_requires, and
 requires, and it's not clear which of the last 2 applies to something
 required by a test.
 Please forgive if this has been asked and answered before.
 I've given up on my data mining skills.
 Thanks,
 cmac



Re: Consensus on MakeMaker vs. Module::Build vs. Module::Install?

2010-03-22 Thread Jonathan Yu
Speaking also as a Debian packager, and notwithstanding Dominique's
comments that we dislike Module::Install, I'd like to provide some
additional clarification:

On Mon, Mar 22, 2010 at 4:11 PM, Dominique Dumont domi.dum...@free.fr wrote:
 Module::Install raises a lot of problems downstream. I often hear Debian
 packagers complaining about it.
This is true; however, it's more about the fact that there are
multiple different versions of Module::Install floating around in our
packages (since a copy is embedded in each distribution, by its
nature). Consequently, we have sometimes discovered bugs that we need to
add overrides for during building, cf.
http://pkg-perl.alioth.debian.org/debhelper.html

I think one other issue is that people are upset with the idea of
having to add copyright information for files in
inc/Module/{Install/*,Install.pm}, but this is largely mitigated with
the copy-and-paste stuff here, cf.
http://pkg-perl.alioth.debian.org/copyright.html

However, the Module::Install maintainers have been on friendly terms
with the pkg-perl group and I know of no outstanding issues we have
with its usage. If there are, then it's incumbent upon us to report
bugs against Module::Install accordingly, and coordinate with them to
find a solution that makes everyone happy.

 Module::Build works fine and does not raise such problems.

Module::Build's new bundling feature provides the same advantages, and
many of the same disadvantages, as Module::Install. However,
I have not really seen it used in the wild yet.

Cheers,

Jonathan


Re: Excluding files from CPAN diff

2010-02-18 Thread Jonathan Yu
Eric,

Thanks for this! For quite some time I've been looking for a way to diff
releases when the CPAN diff tool complains that they are too big.

On Thu, Feb 18, 2010 at 2:26 PM, Eric Wilhelm enoba...@gmail.com wrote:
 When the pink unicorn goes away, you might try:

  http://github.com/gitpan

Mr. Schwern is pretty awesome!


Re: Spam to CPAN Developers? (Fwd: Betonmarkets CTO position)

2010-02-13 Thread Jonathan Yu
2010/2/13 Burak Gürsoy burakgur...@gmx.net:
 -Original Message-
 From: Hans Dieter Pearcey [mailto:hdp.perl.module-auth...@weftsoar.net]
 Sent: Wednesday, February 10, 2010 7:24 PM
 To: module-authors
 Subject: Re: Spam to CPAN Developers? (Fwd: Betonmarkets CTO position)

 Excerpts from Jonathan Yu's message of Wed Feb 10 12:20:51 -0500 2010:
  Has anyone else got a message like this to their CPAN Developer e-mail
  address? I'm curious if this is the beginning of a really bad trend
  toward CPAN author spamming :/

 No, it is a continuation of that trend.  It started years ago.

 Ah, yes... With the invention of the internets I believe...
Strange, I guess I haven't seen most of them due to Gmail's spam
filtering. Sorry all for the noise.

 hdp.




Spam to CPAN Developers? (Fwd: Betonmarkets CTO position)

2010-02-10 Thread Jonathan Yu
Hi,

Has anyone else got a message like this to their CPAN Developer e-mail
address? I'm curious if this is the beginning of a really bad trend
toward CPAN author spamming :/


-- Forwarded message --
From: Jean-Yves Sireau j...@regent-markets.com
Date: Wed, Feb 10, 2010 at 9:18 AM
Subject: Betonmarkets CTO position
To: jaw...@cpan.org


Dear Jonathan,

Betonmarkets.com, the leading financial betting company, is looking to
recruit a CTO.  As a Perl expert, I was wondering whether you may be
interested in the position?

The Betonmarkets website and underlying systems are developed entirely
in Perl and typically conduct 20,000 transactions per day. We are
looking for a CTO who is expert and enthusiastic about Perl, as well as
experienced in management and team leadership, and able to assume the
role of CTO of a successful e-commerce company.

We are located in Cyberjaya, Malaysia, which offers a high quality as
well as low cost of living.  We are a multi-national company, with
staff from 14 countries (including the US, Europe, and Asia).  Our
company, and its location in Cyberjaya, offer a unique living and
working experience for expatriates.

If you would be interested to know more about this position, kindly
email me your CV.  Please feel free to forward this email to any person
in the Perl community who may be interested in the opportunity.

Best regards,
Jean-Yves Sireau

--
Jean-Yves Sireau, CEO
Regent Markets Group Ltd.
Genseq Ltd.


Re: satiating cpantesters

2009-12-15 Thread Jonathan Yu
On Tue, Dec 15, 2009 at 2:46 AM, Eric Wilhelm enoba...@gmail.com wrote:
 # from Burak Gürsoy
 # on Monday 14 December 2009 12:20:

Well... Either die "OS unsupported\n" is an exception (since I get NA
 for that)

 Yeah.  Makes me wonder why fatal m/^Unsupported configuration: .*/
 errors couldn't be made NA.
Maybe a wiser idea is to require Devel::AssertOS or whatever module it
was, to make sure your module will build on the appropriate platform.
Thus, if that assertion module can't be installed, your module's build
would fail as NA, wouldn't it?
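
A tiny Makefile.PL sketch of both approaches, combining the die-with-"OS
unsupported" convention quoted above with the Devel::AssertOS idea (the OS
names are only examples):

    use strict;
    use warnings;

    # Plain-Perl version of the convention quoted above: smokers report NA.
    die "OS unsupported\n" if $^O eq 'MSWin32';

    # Or, declaratively, with David Cantrell's Devel::CheckOS toolchain:
    # use Devel::AssertOS qw(Linux FreeBSD);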

 --Eric
 --
 Issues of control, repair, improvement, cost, or just plain
 understandability all come down strongly in favor of open source
 solutions to complex problems of any sort.
 --Robert G. Brown
 ---
    http://scratchcomputing.com
 ---



Re: Module released - what now?

2009-11-24 Thread Jonathan Yu
Hm. This looks like spam, as it's an identical message but sent from a
different mailing address.

On Mon, Nov 23, 2009 at 3:11 PM, Ford Prefect Jr.
ford.prefect...@gmail.com wrote:
 Hi!

 While i'm a long term perl user (five, six years or so), i just got to
 upload my first module to PAUSE.

 The module is called Maplat and is a framework for intranet web development.
 But thats not my question, actually.

 Ok, so now i uploaded my first module with some 50 or so packages/.pm
  files and stuff. What actually happens next?

 While i read the many documentations and FAQ's, i'm not clear about the next
 steps. How long will it take to index the module so it's visible on
 search.cpan.org? How long, until it's available through my CPAN shell?

 How does that cpantesters stuff work? It's an automatic thingie, right? I
 know i don't have enough test scripts (i actually only test if the thing
 loads at all - i'm not clear how i'm gonna test the framework without a
 postgresql database and without someone clicking and complaining that all
 looks wrong).

 I know this questions will probably sound stupid (especially coming from
 someone with years of perl experience), but i'd rather learn from you than
 die stooopied ;-)

 LG
 Rene



Re: Module uploaded - whats next?

2009-11-23 Thread Jonathan Yu
Hi!

Welcome, and thanks for your contribution to the CPAN.

On Mon, Nov 23, 2009 at 3:16 PM, Rene Schickbauer
rene.schickba...@gmail.com wrote:
 Hi!

 While i'm a long term perl user (five, six years or so), i just got to
 upload my first module to PAUSE.

 The module is called Maplat and is a framework for intranet web development.
 But thats not my question, actually.
Great! Sounds interesting...

 Ok, so now i uploaded my first module with some 50 or so packages/.pm
  files and stuff. What actually happens next?

 While i read the many documentations and FAQ's, i'm not clear about the next
 steps. How long will it take to index the module so it's visible on
 search.cpan.org? How long, until it's available through my CPAN shell?

It should index soon and be visible on search.cpan.org within a day or
two. It might take a few days to get available via the CPAN shell.
Some mirrors are on a fast update schedule, and will have your module
within 30 seconds of the upload to PAUSE. cpan.hexten.net,
cpan.dagolden.net and cpan.cpantesters.net are three mirrors I know of
that have this feature.

Other mirrors should get your module within a few days, so not to worry.

 How does that cpantesters stuff work? It's an automatic thingie, right? I
 know i don't have enough test scripts (i actually only test if the thing
 loads at all - i'm not clear how i'm gonna test the framework without a
 postgresql database and without someone clicking and complaining that all
 looks wrong).

CPANTesters is an automated service. Many people run smoke servers
that build and test every distribution uploaded to PAUSE and report
the results (via email) to the central CPAN Testers database. The
standard CPAN shell and CPANPLUS have plugins
to report users' build/test logs to the CPAN Testers service as well.

Of course, with only a basic load test, your reports will probably all
just PASS. You'll need to write better tests (and improve your test
coverage) in order to get meaningful results, so I hope you do that
soon.
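
By way of illustration, a small step up from a bare load test might look
like this (Test::More; the module name, constructor and method are
placeholders, since Maplat's real API isn't shown here):

    use strict;
    use warnings;
    use Test::More tests => 3;

    use_ok('My::Framework');              # hypothetical module name
    my $obj = My::Framework->new;         # hypothetical constructor
    isa_ok($obj, 'My::Framework');
    can_ok($obj, 'run');                  # hypothetical method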

 I know this questions will probably sound stupid (especially coming from
 someone with years of perl experience), but i'd rather learn from you than
 die stooopied ;-)
Everyone has been there at some point. Good luck!

 LG
 Rene


Cheers,

Fellow PAUSE developer, JAWNSY (and FREQUENCY)


Re: flame bait: execution speed Perl vs. C (Date::Calc::PP vs. Date::Calc::XS)

2009-11-18 Thread Jonathan Yu
Steffen-

As always, I think benchmarks are important. As you've shown below, in
your case, the XS implementation certainly is faster. I think it all
depends on whether the speed of the system is bound by external
factors (like disk speed, speed of a network stream) or your CPU.

Certainly I've found for tight loops with lots of calculations, XS/C
is going to be faster. Why? Because it's compiled into machine code
and executed directly on the chip. On the other hand, Perl is compiled
into bytecode which is then executed by the Perl Virtual Machine.

However, this means you are also more prone to do things (in C/XS)
that will result in nasty problems like segfaults.

I see this debate as the same as Perl vs C or Language X vs Language
Y. Each language and each system is built for different purposes with
different advantages as well as limitations.

What this email reminds us all of is the importance of benchmarking
your code. Personally I use the routine that prints comparison tables.
It's useful to see just how much faster an XS module is, since
it does carry some additional risks (like nasty segfaults) over the
PurePerl equivalents.
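
For reference, Perl's core Benchmark module produces exactly that kind of
comparison table via cmpthese; a minimal self-contained sketch, with two toy
coderefs standing in for a PP/XS pair:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @data = (1 .. 500);
    cmpthese(-2, {
        grep_style => sub { my @odd = grep { $_ % 2 } @data },
        loop_style => sub { my @odd; for (@data) { push @odd, $_ if $_ % 2 } },
    });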

Cheers,

Jonathan

On Wed, Nov 18, 2009 at 7:08 AM, O. STeffen BEYer ost...@gmail.com wrote:
 Dear Module Authors,

 recently in one of the Amsterdam Perl Mongers meetings the question came up
 how much faster actually the XS version of Date::Calc (Date::Calc::XS) was
 as compared to the Pure Perl version (Date::Calc::PP).

 Here is the answer (see attached script - you will need to have Date::Calc
 6.3 and Date::Calc::XS 6.2 installed to run this script successfully):

 FreeBSD 7.2.-stable:

 $ perl benchmark.pl
 Running under Date::Calc::PP version 6.3
 timethis 5000: 17.3147 wallclock secs (17.24 usr +  0.06 sys = 17.30 CPU) @
 288.94/s (n=5000)
 Running under Date::Calc::XS version 6.2
 timethis 5000: 1.02551 wallclock secs ( 0.97 usr +  0.06 sys =  1.03 CPU) @
 4848.48/s (n=5000)

 Windows XP SP3:

 Running under Date::Calc::PP version 6.3
 timethis 5000: 17.1034 wallclock secs (16.86 usr +  0.00 sys = 16.86 CPU) @
 296.58/s (n=5000)
 Running under Date::Calc::XS version 6.2
 timethis 5000: 1.3329 wallclock secs ( 1.28 usr +  0.00 sys =  1.28 CPU) @
 3900.16/s (n=5000)

 Another (faster) Windows XP SP3 machine:

 Running under Date::Calc::PP version 6.3
 timethis 1: 20.5605 wallclock secs (20.55 usr +  0.00 sys = 20.55 CPU) @
 486.69/s (n=1)
 Running under Date::Calc::XS version 6.2
 timethis 1: 1.44224 wallclock secs ( 1.42 usr +  0.00 sys =  1.42 CPU) @
 7032.35/s (n=1)

 One can see from these results that the XS version quite consistently runs
 approximately about 15 times faster than the PP version.

 The test script benchmarks a function which calls all functions in
 Date::Calc once, each.

 Other similar results from XS/PP pairs of modules would be interesting -
 maybe not for any practical purposes, but just for the fun of it
 (or maybe to give deciders convincing arguments to upgrade to an XS version
 or not).

 Cheers,
 Steffen



Re: Why you don't want to use /dev/random for testing

2009-11-11 Thread Jonathan Yu
I should note, I wrote an article on this a while back. Take it with a
grain of salt, as I'm not an expert in the area; I just wrote bindings
for the ISAAC algorithm to Perl.

http://jawnsy.wordpress.com/2009/06/04/performance-of-mathrandomisaac/

It compares the performance of multiple different PRNG modules (code
for this is in the Math::Random::ISAAC examples/ directory). I've
included charts of the distributions generated by them and benchmarks
of course.


Re: Why you don't want to use /dev/random for testing

2009-11-11 Thread Jonathan Yu
I'm not sure how many of these modules use it -- in particular, I know
Math::Random::ISAAC only mentions it in POD. Using /dev/random isn't
very portable -- what happens when you're on Windows?

On Wed, Nov 11, 2009 at 2:15 PM, David Golden xda...@gmail.com wrote:
 On Tue, Nov 10, 2009 at 10:01 PM,  cr...@animalhead.com wrote:
 Many of you know that the random number generator /dev/random
 is subject to delays when it has not accumulated enough entropy,
 which is to say randomness.  These delays are said to be longer
 on Linux /dev/random that on some other Unices.  They occur
 particularly after a system is booted, which I hear is a regular
 occurrence on some smoke-test systems.

 FWIW, I did a visitcpan scan of distributions that match the string
 /dev/random/ in lib/.  No guarantees that they actually *use* it --
 they might just mention it in comments/docs, but here's a watch list
 that might need further exploration:

 ADAMK/Crypt-DSA-1.16.tar.gz
 AWKAY/Apache/Apache-SiteControl-1.01.tar.gz
 AWKAY/Apache2/Apache2-SiteControl-1.05.tar.gz
 BOBTFISH/Catalyst-Plugin-Session-0.29.tar.gz
 CHROMATIC/Crypt-CipherSaber-1.00.tar.gz
 CZBSD/Myco-0.01.tar.gz
 DAGOLDEN/Getopt-Lucid-0.18.tar.gz
 DMALONE/Crypt-IDA/Crypt-IDA-0.01.tar.gz
 DMUEY/Data-Rand-0.0.4.tar.gz
 FLORA/Net-SSLeay-1.35.tar.gz
 FREQUENCY/Math-Random-ISAAC-1.001.tar.gz
 JDHEDDEN/Math-Random-MT-Auto-4.13.00.tar.gz
 JDHEDDEN/Math-Random-MT-Auto-6.14.tar.gz
 JHOWELL/FAQ-OMatic-2.717.tar.gz
 JHOWELL/FAQ-OMatic-2.719.tar.gz
 JMASON/IPC-DirQueue-1.0.tar.gz
 MBROOKS/String-Urandom-0.10.tar.gz
 MSCHOUT/Apache-AuthTicket-0.90.tar.gz
 MUIR/modules/Qpsmtpd-Plugin-Quarantine-1.02.tar.gz
 NUFFIN/Crypt-Random-Source-0.03.tar.gz
 OPI/Apache2-POST200-0.05.tar.gz
 PTANDLER/PBib/Bundle-PBib-2.08.01.tar.gz
 PTANDLER/PBib/Bundle-PBib-2.08.tar.gz
 SMUELLER/Statistics-Test-RandomWalk-0.01.tar.gz
 SMUELLER/Statistics-Test-Sequence-0.01.tar.gz
 SOMMERB/Myco-1.22.tar.gz
 VIPUL/Crypt-Random-1.25.tar.gz
 ZEFRAM/Data-Entropy-0.005.tar.gz

 -- David



Re: How to best detect availability of C compiler in Makefile.PL?

2009-10-13 Thread Jonathan Yu
I think ExtUtils::CBuilder is useful for detecting the presence of a
compiler. It has a have_compiler method that tells you whether a
compiler is present (by actually trying to compile a C file). For more
advanced uses, something like the aforementioned Devel::CheckLib will
probably be of great help.
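
A minimal sketch of that check as it might appear in a Makefile.PL or
Build.PL (have_compiler really does attempt to compile a small C file):

    use strict;
    use warnings;
    use ExtUtils::CBuilder;

    my $have_cc = ExtUtils::CBuilder->new(quiet => 1)->have_compiler;
    print $have_cc
        ? "C compiler found; XS build is possible\n"
        : "no working C compiler; falling back to pure Perl\n";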

What I've done (so that users have a choice) is provide alternate
implementations using Pure Perl and XS, and then have the Perl version
load the XS one if it's available (using eval { require ... }).

It's good because
a) Lots of modules already do this
b) It means updates to one (say, fixing a bug in the Perl version)
doesn't require re-releasing of both
c) It means if someone has a C builder present but wants to use the
Perl version anyway (for whatever reason), they can do so.

You can look into Math::Random::ISAAC (pure perl) and
Math::Random::ISAAC::XS for these. The code is rather small.
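
The selection logic is roughly of this shape (a generic sketch, not the
exact Math::Random::ISAAC source; the backend names here are hypothetical):

    package My::RNG;                 # stand-in for the front-end module
    use strict;
    use warnings;

    our $backend;
    BEGIN {
        if (eval { require My::RNG::XS; 1 }) {   # hypothetical XS backend
            $backend = 'My::RNG::XS';
        }
        else {
            require My::RNG::PP;                 # hypothetical PP backend
            $backend = 'My::RNG::PP';
        }
    }

    # Delegate construction to whichever backend loaded.
    sub new {
        my ($class, @seed) = @_;
        return $backend->new(@seed);
    }

    1;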

On the other hand, lots of other packages (version.pm for example) do
the whole Perl/XS selection at build time. You may want to go that
route, too, and there are legitimate reasons for that choice.

On Tue, Oct 13, 2009 at 2:13 PM, Bill Ward b...@wards.net wrote:
 The Template Toolkit handles this with a command line option to Makefile.PL
 where you can build the toolkit with compiled code if the compiler is
 available.  I don't think it has automatic detection as well, but it might.
 In any case you might want to consult with the Template Toolkit authors.
 Their mailing list is:
 http://mail.template-toolkit.org/mailman/listinfo/templates

 If you come up with a better solution, Template Toolkit might want to use it
 as well.

 On Tue, Oct 13, 2009 at 3:17 AM, O. STeffen BEYer ost...@gmail.com wrote:

 Dear Perl module authors,

 what would be the best way to detect whether a working C compiler is
 available at build time of a module (i.e., in Makefile.PL)?

 I would like to install a (faster) XS version of a module if that is the
 case, and a (slower) pure-Perl implementation if not.

 Remember that C compilers are not always available on all systems.
 Sometimes they cost heavy extra money, or sometimes you have to work with
 what's there on a customer's or provider's server (where frequently
 installing a C compiler is not an option due to company policies).

 Thank you!

 Best regards,

 Steffen Beyer

 http://www.engelschall.com/u/sb/download/

 http://search.cpan.org/author/STBEY



 --
 Check out my LEGO blog at http://www.brickpile.com/
 View my photos at http://flickr.com/photos/billward/
 Follow me at http://twitter.com/williamward



Re: Term::Info - takeover

2009-09-24 Thread Jonathan Yu
On Thu, Sep 24, 2009 at 4:43 PM, Paul LeoNerd Evans
leon...@leonerd.org.uk wrote:
 I notice that Term::Info was released in 1999, has no documentation, no
 testing, only wraps Tput, and is generally not all that useful.

 I'm planning to write a proper terminfo wrapping module anyway, and this
 seems an ideal name to give it.

 CPANTS claims nothing is using it:

  http://cpants.perl.org/dist/used_by/Term-Info

 If I were to create another one which is more useful, providing more
 access to terminfo information, how might I go about creating a release
 of it?
I trust you know that releasing under the same name would be marked as
an unauthorized release by the indexer, unless you manage to convince
the original author or the PAUSE Admins to grant you that namespace.

I suggest you first name the module something like Term::Info::More
to indicate that it is Term::Info with some additional features. Then
you can petition for the Term::Info namespace if you so choose, at a
later date, though that probably won't be necessary or desired.

 --
 Paul LeoNerd Evans

 leon...@leonerd.org.uk
 ICQ# 4135350       |  Registered Linux# 179460
 http://www.leonerd.org.uk/



Re: What Would you like to see in a CPAN Distro Manager?

2009-09-04 Thread Jonathan Yu
2009/9/4 Burak Gürsoy burakgur...@gmx.net:
 -Original Message-
 From: Shlomi Fish [mailto:shlo...@iglu.org.il]
 Sent: Friday, September 04, 2009 8:25 AM
 To: module-authors@perl.org
 Subject: What Would you like to see in a CPAN Distro Manager?

 I think that I don't want one :) (I didn't say don't write it)

 Hi all!

 I posted an entry to my weblog about What would you like to see in a
 CPAN
 distribution manager?

 http://community.livejournal.com/shlomif_tech/32348.html

 [2. POD at the end]

 I was doing that before PBP; it's better to leave the code alone and send the 
 docs after the end IMHO. Since I know that Pod is after that I can get it without 
 parsing with Pod::Simple or I can just do [SHIFT]+[PgDown] to get all of it. 
 Or maybe I just want to add something before =cut (which I do when building 
 distros) and I still don't need to parse Pod to do that. Inline Pod is evil. I 
 hate it every time I see it. Makes it hard to read code too. But again, just 
 IMHO  $0.02.
I think the problem with doing that, for me, is that I'd forget to
update stuff and then my docs would become even more out of sync with
my code, which is a pretty bad thing. It's bad enough people don't
update comments that are right next to the code in question; when it
comes to docs... I imagine it'd get even worse.

Redoing stuff for the purposes of making things easier for a computer
seems backwards to me. After all, computers are supposed to do work
for /us/ -- they're faster, more consistent, etc. So leave programmers
to do what programmers do, and let the computers worry about
extracting inline POD. :-)

 [5 Support for more software licences]

 I think leaving this to Software::License is a better choice.

 [7 Good integration with the underlying version control system]

 I don't usually add everything in the repo to MANIFEST. So, unwanted stuff 
 does not goto CPAN until I want them to. I also don't add revision comments 
 automatically to the Changes file since it'll include a lot of unnecessary 
 noise.

 I cover there the introduction of Dist-Man (short for Dist-Manager),
 some
 ideas I have for its future, and request further ideas and insights.

 I should note that Dist-Man takes a different approach from Dist-Zilla.
 While
 Dist-Zilla removes redundant code from the ultimate sources and inserts
 it
 automatically before the release (which may skew the line numbers),
 Dist-Man
 will manipulate actual lib/**/*.pm, etc. sources in-place, which will
 allow
 them to be modified and corrected.

 Have a nice weekend.

 Regards,

       Shlomi Fish

 --
 -
 Shlomi Fish       http://www.shlomifish.org/
 Parody on The Fountainhead - http://shlom.in/towtf

 Chuck Norris read the entire English Wikipedia in 24 hours. Twice.




Re: gentle nudgings about upgrading your tools

2009-08-28 Thread Jonathan Yu
On Fri, Aug 28, 2009 at 1:25 PM, Bill Wardb...@wards.net wrote:
 On Fri, Aug 28, 2009 at 9:02 AM, Eric Wilhelmenoba...@gmail.com wrote:
 # from David Cantrell
 # on Friday 28 August 2009 04:10:

 I guess maybe.  It still seems arbitrary, and my point was that it
 is a workaround to the fact that it's currently difficult for a
 module to do the right thing to even compare its version against
 the index.

I'd restrict it to only those modules that are needed to install
 stuff:

  CPAN.pm
  ExtUtils::MakeMaker
  Module::Build
  CPANPLUS

 We've already solved the 'install-side' need with configure_requires.

 I was talking about the `./Build dist` (author) side of things (from the
 observation that the OP had run into a bug which would have been
 avoided by upgrading M::B before rolling a dist.)

 Authors using an old Module::Build won't be releasing dists with M::B in
 configure_requires until they upgrade.  That might happen automatically
 if they install some new code from the CPAN which has M::B in its
 configure_requires, but that's a combination of happy accidents.

 And, if we were to pretend that M::B author tools were split off into a
 separate distribution, having CPAN.pm warn you about a new M::B
 wouldn't do any good, plus people would be confused when their `./Build
 dist` suddenly started complaining about needing to install something
 extra (which brings me back to the bit about users setting a preference
 about which CPAN client to use.)

 Can PAUSE detect the version of M::B or EU::MM that the author used,
 and warn accordingly or even reject it?
Well, one can use the META.yml information about what software
generated it, but it's not entirely accurate. I believe this was
something Eric and others took a look at, in order to notify authors
with old toolchains.



Scope::Guard and End

2009-08-22 Thread Jonathan Yu
[Cc'ing module-authors in case other CPAN developers or users of these
packages are interested.]

Hi:

I've noticed these two modules (Scope::Guard and End) are similar in
their nature, purpose and inner workings. I'm curious whether each of you
knew about the other's module before creating your own. Since the end
goal looks the same, I'd say combined maintenance of a single package
would probably be more effective, but of course that's up to you.

As someone who maintains both packages in Debian, I can say it would
definitely be easier for us to cut down on packages, especially ones
with such similar functionality. As a CPAN author as well, I think
it would be useful in general to have shared/reusable code rather than
duplicated effort, but that of course is up to both of you, and
subject to other factors like deciding on a mutually acceptable
license.

Thanks for working on such neat modules and contributing them to CPAN.

Cheers,

Jonathan


Re: Failed: PAUSE indexer report KTHAKORE/SDL_Perl-v2.2.1.tar.gz

2009-08-21 Thread Jonathan Yu
Hi Kartik:

Keep in mind that all modules are tracked separately. So, while you
might have rights to the SDL.pm namespace, the original author needs
to give you the co-maintain bit on all of the other ones in the dist.
This is what leads to what is called unauthorized releases -- your
package was accepted and is being actively mirrored on CPAN, but it is
not downloadable via the CPAN shell. (Well, it is, but you'd need to
specify your userid like: install K/KTHAKORE/Dist-Name.tar.gz)

Please read this page
(https://pause.perl.org/pause/authenquery?ACTION=pause_04about), which
is an FAQ list of common gotchas when it comes to authoring Perl
modules. Causing lots of noise on the authors list like this while
clearly not having read the appropriate documentation beforehand, and
doing so in such an unprofessional manner, is a rude thing to do and
unlikely to get you the result you are looking for.

Things are tracked per-module-name rather than per-package because
packages often contain many things. The first person to upload a
package with a specific name has dibs on it, which is useful for a
variety of reasons discussed on the FAQ page. Again, Read The Fine
Manual. Ask the author for co-maint bits on the remaining dist
packages. You can easily see what the author has permissions on here:
https://pause.perl.org/pause/authenquery?pause99_peek_perms_by=a&pause99_peek_perms_query=DGOEHRIG&pause99_peek_perms_sub=Submit

Cheers,

Jonathan

On Fri, Aug 21, 2009 at 10:38 PM, Kartik
Thakorethakore.kar...@gmail.com wrote:
 What the hell? I thought I had co-maintain permissions for SDL_perl. Only
 SDL.pm was allowed up ? What is going on?

 On Fri, Aug 21, 2009 at 9:48 PM, PAUSE upl...@pause.perl.org wrote:

 The following report has been written by the PAUSE namespace indexer.
 Please contact modu...@perl.org if there are any open questions.
  Id

               User: KTHAKORE (Kartik Thakore)
  Distribution file: SDL_Perl-v2.2.1.tar.gz
    Number of files: 136
         *.pm files: 39
             README: SDL_Perl-v2.2.1/README
           META.yml: SDL_Perl-v2.2.1/META.yml
        YAML-Parser: YAML::XS 0.32
  META-driven index: yes
  Timestamp of file: Sat Aug 22 01:47:07 2009 UTC
   Time of this run: Sat Aug 22 01:48:36 2009 UTC

 Status of this distro: Permission missing
 =

 The following packages (grouped by status) have been found in the distro:

 Status: Permission missing
 ==

     module: SDL::App
    version:
    in file: lib/SDL/App.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Cdrom
    version:
    in file: lib/SDL/Cdrom.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Color
    version:
    in file: lib/SDL/Color.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Constants
    version:
    in file: lib/SDL/Constants.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Cursor
    version:
    in file: lib/SDL/Cursor.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Event
    version:
    in file: lib/SDL/Event.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Font
    version:
    in file: lib/SDL/Font.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::MPEG
    version:
    in file: lib/SDL/MPEG.pm
     status: Not indexed because permission missing. Current registered
             primary maintainer is DGOEHRIG. Hint: you can always find
             the legitimate maintainer(s) on PAUSE under View
             Permissions.

     module: SDL::Mixer
    version:
    in file: lib/SDL/Mixer.pm
     status: Not indexed because permission missing. Current 

Re: Access to bug queue

2009-08-18 Thread Jonathan Yu
Hi:

On Tue, Aug 18, 2009 at 4:57 PM, Kartik Thakorethakore.kar...@gmail.com wrote:

 Hi people how do I get access to bug queue if I cannot find the contact of
 the person who set it up. This is the rt on the cpan module.
Access is given to the PAUSE ID of the person that owns the particular
module, either on a first-come basis or based on who owns the
namespace (ie via registration in the module list).

I believe you need an account on the RT, using your email address. I
can't remember how I set that up for my dists. I use a Bitcard account
with my CPAN ID (freque...@cpan.org) to log in.

From the main page of http://rt.cpan.org:
... manage bugs in your distributions?

To work with bugs, every module author with a PAUSE account can log
into rt.cpan.org with their PAUSE userid and password. If you can't
log in or distributions you maintain aren't listed, please write to
rt-cpan-ad...@bestpractical.com at your earliest convenience.

You can't get access to the queue unless you own the module, so if
someone has gone MIA and you'd like to take over the namespace then
e-mail the author and Cc modu...@perl.org to make your plea with the
PAUSE admins.

Good luck, I hope this helps.

Cheers,

Jonathan
 Kartik Thakore
 Begin forwarded message:

 From: David Goehrig d...@nexttolast.com
 Date: August 18, 2009 3:50:53 PM GMT-04:00
 To: Kartik Thakore thakore.kar...@gmail.com
 Subject: Re: Access to bug queue

 hrmmm... Wayne was the one who set that up so I don't even know how :)

 Dave

 On Tue, Aug 18, 2009 at 1:30 PM, Kartik Thakorethakore.kar...@gmail.com
 wrote:

 Can I get access to the sdl bug queue so that I can close and update

 information on them.

 Kartik Thakore




 --
 -=-=-=-=-=-=-=-=-=-=- http://blog.dloh.org/



CPAN Search/BACKPAN Diff Tool

2009-08-03 Thread Jonathan Yu
Hi:

I'm not sure where else to send this, so I'm sending it here to get
some discussion from other module authors/CPAN users.

I package Perl modules for Debian, so I use the CPAN Search Diff Tool
a lot to figure out what has changed between releases. However,
authors are (rightfully so) cleaning up their directories and removing
old modules that aren't useful anymore. While that's great for the
mirrors, it means I cannot use the CPAN Search Diff Tool to get diffs
against those removed releases.

Another limitation I've come across is that the CPAN Search Diff tool
sometimes aborts for long diffs (probably to avoid a denial of service
issue where someone runs really long diffs).

I would ideally like to implement such a tool on the entire BACKPAN,
so that we can get diffs of any packages. Since it would be presumably
lower traffic than the normal CPAN Search tool, the CPU time limits
could be bumped up and thus allow for the huge diffs I sometimes see.

Does anyone know who currently maintains the CPAN Search stuff? (I
heard Graham Barr but I'm not sure) Is it something we can implement
on BACKPAN? Would these be good ideas?

Cheers,

Jonathan


Re: CPAN Search/BACKPAN Diff Tool

2009-08-03 Thread Jonathan Yu
On Mon, Aug 3, 2009 at 7:01 PM, David Cantrellda...@cantrell.org.uk wrote:
 On Mon, Aug 03, 2009 at 12:49:41PM -0400, Jonathan Yu wrote:

 ... CPAN Search Diff tool ...

 I would ideally like to implement such a tool on the entire BACKPAN,

 Why not just take a backpan mirror, untar stuff, and use diff -r?
This works, but is less than ideal, and I don't end up with nice
graphics as I do with the web interface. The best thing about the web
interface is I get the same sort of output I would expect with a
graphical diff tool, but in my browser. I usually have a Firefox tab
open to that diff while working on the related package in an ssh
session.

 --
 David Cantrell | Nth greatest programmer in the world

 PERL: Politely Expressed Racoon Love



Re: Getting ready for CPAN

2009-08-02 Thread Jonathan Yu
Hi there:

On Sun, Aug 2, 2009 at 9:49 AM, Shawn H. Coreyshawnhco...@gmail.com wrote:
 Hi,

 I have a module, Sub::Starter, which I have prepared for CPAN.  I think I
 have all the right goodies in the right places.  But I am worried that I
 might be missing something important.  And yes, I read the documentation,
 that is, what I could find.  The thing is that documentation isn't always
 kept up to date.  So I thought I'd ask the experts.

 Here is a copy of my MANIFEST.  Is there anything missing?

 Build.PL
 Changes
 MANIFEST
 README
 lib/Sub/Starter.pm
 script/substarter
 t/00-load.t
 t/01-parse_usage.t
 t/02-usage.t
 t/03-sub.t
 t/04-pod.t
 t/pod-coverage.t
 t/pod.t

Why is there both a 04-pod.t and a t/pod.t? Are they the same test?

Also, you may want to model those tests after what Alias does with his
distributions (RELEASE_TESTING and AUTOMATED_TESTING) unless you have
already done so. Module::Install is said to be gaining a feature (at
some point in the future) which will help write author tests like
pod-coverage, the Test::MinimumVersion test (to detect the minimum
perl version required to run your code), etc.
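
For example, an author-only POD test guarded in that style might look
roughly like this (a sketch; the Test::Pod minimum version is illustrative):

    use strict;
    use warnings;
    use Test::More;

    plan skip_all => 'Author tests not required for installation'
        unless $ENV{RELEASE_TESTING} or $ENV{AUTOMATED_TESTING};

    eval 'use Test::Pod 1.00';
    plan skip_all => 'Test::Pod 1.00 required for testing POD' if $@;

    all_pod_files_ok();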

You may also wish to supply a Makefile.PL (via Module::Build::Compat)
to support older perls/CPAN.pm's. There is also a list of criteria on
CPANTS (http://cpants.perl.org)
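
A Build.PL sketch of that compatibility shim (the license and perl version
values are illustrative; 'passthrough' is the other common choice for
create_makefile_pl):

    use strict;
    use warnings;
    use Module::Build;

    Module::Build->new(
        module_name        => 'Sub::Starter',
        license            => 'perl',                 # illustrative
        create_makefile_pl => 'traditional',          # emits a Makefile.PL
        requires           => { 'perl' => '5.006' },  # illustrative
    )->create_build_script;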

I would definitely suggest replicating Alias' test code, see:
http://cpansearch.perl.org/src/ADAMK/ORDB-CPANMeta-0.10/t/

In particular, the 9*.t series of tests are important/useful.


 --
 Just my 0.0002 million dollars worth,
  Shawn

 Programming is as much about organization and communication
 as it is about coding.

 My favourite four-letter word is Done!



Re: Getting ready for CPAN

2009-08-02 Thread Jonathan Yu
Just the .pm files and the tarball.

On 8/2/09, Shawn H. Corey shawnhco...@gmail.com wrote:
 Jonathan Yu wrote:
 On Sun, Aug 2, 2009 at 9:49 AM, Shawn H. Coreyshawnhco...@gmail.com
 wrote:
 Here is a copy of my MANIFEST.  Is there anything missing?

 Build.PL
 Changes
 MANIFEST
 README
 lib/Sub/Starter.pm
 script/substarter
 t/00-load.t
 t/01-parse_usage.t
 t/02-usage.t
 t/03-sub.t
 t/04-pod.t
 t/pod-coverage.t
 t/pod.t

 Why is there both a 04-pod.t and a t/pod.t? Are they the same test?

 No.  t/00-load.t t/pod-coverage.t and t/pod.t all came from
 module-starter aka Module::Starter

 I shall rename my tests so that they are more descriptive.


 Also, you may want to model those tests after what Alias does with his
 distributions (RELEASE_TESTING and AUTOMATED_TESTING) unless you have
 already done so. Module::Install is said to have a feature implemented
 (at some point in the future) which will help write author tests like
 pod-coverage, the Test::MinimumVersion test (to detect the minimum
 perl version required to run your code), etc.

 You may also wish to supply a Makefile.PL (via Module::Build::Compat)
 to support older perls/CPAN.pm's. There is also a list of criteria on
 CPANTS (http://cpants.perl.org)

 I would definitely suggest replicating Alias' test code, see:
 http://cpansearch.perl.org/src/ADAMK/ORDB-CPANMeta-0.10/t/

 In particular, the 9*.t series of tests are important/useful.

 Sorry, I don't use un-documented code.  Here's why:

 On https://pause.perl.org/pause/authenquery?ACTION=pause_04about it states:

 There's only one thing you need to know as soon as possible:

  Please, make sure the filename you choose contains a version number.

 OK, does that mean every file needs a version number or just the tarball?



 --
 Just my 0.0002 million dollars worth,
Shawn

 Programming is as much about organization and communication
 as it is about coding.

 My favourite four-letter word is Done!


-- 
Sent from Gmail for mobile | mobile.google.com


Test Failures - XS, does not match bootstrap parameter and version objects

2009-07-28 Thread Jonathan Yu
Hi:

I seek the wisdom of any other module authors that might have come
across this problem.

Recently, I uploaded a new version of Math::Random::ISAAC::XS and ran
into a *lot* of regressions. I've looked at the diff and I didn't
really change all that much, except for removing some things from
Recommends. The smokers nonetheless output something that I can't
reproduce, and I'm not sure if it has to do with my use of the
'version' pragma, or if the systems in question are using an older
version than I test with.

In at least one report, the version seems to be recent, so I'm not
sure if it's a new issue:     version            0     0.76_06

I get plenty of failing tests:
http://www.cpantesters.org/distro/M/Math-Random-ISAAC-XS.html#Math-Random-ISAAC-XS-1.0.6

Here is the diff between my last (100% PASS) version, 1.05, and the
latest version, which has a lot more failures than I'd like:
http://search.cpan.org/diff?from=Math-Random-ISAAC-XS-1.0.5to=Math-Random-ISAAC-XS-1.0.6

Any insight that the module-authors can provide would be greatly
appreciated. Does this have to do with the latest version pragma?
Maybe I should also agree that it's considered a Bad Thing and move to
using the older, more Perlish version numbers :(

Cheers,

Jonathan


Re: Test Failures - XS, does not match bootstrap parameter and version objects

2009-07-28 Thread Jonathan Yu
Jerry:

Thanks for your response.

On Tue, Jul 28, 2009 at 1:14 PM, Jerry D. Heddenje...@hedden.us wrote:
 The error message says it all:

  XS object version v1.0.6 does not match bootstrap parameter 1.0.6

 Note the 'v' in the first version statement and the lack in the second.
Well, the thing that's strange about this (as you can see from the
diff) is that I didn't change any code that has anything to do with
the version numbering stuff. It might be an issue with Module::Build,
but I can't reproduce it with the latest M::B, latest version.pm, etc.

 This is just one reason I don't use anything but single decimal point
 versions (e.g., 1.23), and never use version objects.  In fact, I even
 make sure my versions don't end in 0 either - i.e., I go from 1.09 to
 1.11.
Yeah, I think that's what I'm going to end up doing. I really liked my
version numbering scheme though. Also, some of my modules are packaged
in Debian so I don't want to just change the scheme now, or it'll
require a new epoch :(
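
For reference, the plain single-decimal style Jerry describes is just a
literal string in the module (module name below is made up):

  package Math::Random::Example;
  our $VERSION = '1.07';   # plain string; no v-strings, no version.pm objects
  1;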

 I know there's a right way to probably do all this, but it always
 seems there a catch somewhere with older perls, CPAN or something else.
 Therefore, I just avoid all the headaches and hassles with the above
 scheme.

 On Tue, Jul 28, 2009 at 12:37 PM, Jonathan Yujonathan.i...@gmail.com wrote:
 Hi:

 I seek the wisdom of any other module authors that might have come
 across this problem.

 Recently, I uploaded a new version of Math::Random::ISAAC::XS and ran
 into a *lot* of regressions. I've looked at the diff and I didn't
 really change all that much, except for removing some things from
 Recommends. The smokers nonetheless output something that I can't
 reproduce, and I'm not sure if it has to do with my use of the
 'version' pragma, or if the systems in question are using an older
 version than I test with.

 In at least one report, the version seems to be recent, so I'm not
 sure if it's a new issue:     version            0     0.76_06

 I get plenty of failing tests:
 http://www.cpantesters.org/distro/M/Math-Random-ISAAC-XS.html#Math-Random-ISAAC-XS-1.0.6

 Here is the diff between my last (100% PASS) version, 1.05, and the
 latest version, which has a lot more failures than I'd like:
 http://search.cpan.org/diff?from=Math-Random-ISAAC-XS-1.0.5to=Math-Random-ISAAC-XS-1.0.6

 Any insight that the module-authors can provide would be greatly
 appreciated. Does this have to do with the latest version pragma?
 Maybe I should also agree that it's considered a Bad Thing and move to
 using the older, more Perlish version numbers :(



Re: Test Failures - XS, does not match bootstrap parameter and version objects

2009-07-28 Thread Jonathan Yu
Martin:

On Tue, Jul 28, 2009 at 1:57 PM, Martin J.
Evansmartin.ev...@easysoft.com wrote:
 Jonathan Yu wrote:
 Hi:

 I seek the wisdom of any other module authors that might have come
 across this problem.

 Recently, I uploaded a new version of Math::Random::ISAAC::XS and ran
 into a *lot* of regressions. I've looked at the diff and I didn't
 really change all that much, except for removing some things from
 Recommends. The smokers nonetheless output something that I can't
 reproduce, and I'm not sure if it has to do with my use of the
 'version' pragma, or if the systems in question are using an older
 version than I test with.

 In at least one report, the version seems to be recent, so I'm not
 sure if it's a new issue:     version            0     0.76_06

 I get plenty of failing tests:
 http://www.cpantesters.org/distro/M/Math-Random-ISAAC-XS.html#Math-Random-ISAAC-XS-1.0.6

 Here is the diff between my last (100% PASS) version, 1.05, and the
 latest version, which has a lot more failures than I'd like:
 http://search.cpan.org/diff?from=Math-Random-ISAAC-XS-1.0.5to=Math-Random-ISAAC-XS-1.0.6

 Any insight that the module-authors can provide would be greatly
 appreciated. Does this have to do with the latest version pragma?
 Maybe I should also agree that it's considered a Bad Thing and move to
 using the older, more Perlish version numbers :(

 Cheers,

 Jonathan



 I think a new version of version.pm was made available a few days ago by
 David Golden (0.77). Your failing results seem to mention it (0.77).
 Perhaps this is the cause. I've cc'ed David just in case (apologies if
 I'm off the mark on this David).

On further analysis I don't think version 0.77 is the culprit -- I've
installed that version inside a clean chroot and everything still
builds per normal. I'm not sure if it was something related to that
version somehow though.

I suppose part of the issue is that CPAN Smokers don't provide a full
build log, but only a report of the test part, so that might be
hampering the investigation.

And of course, until I can figure out how to reproduce the bug, I can't fix it.

 Martin



Re: Getopt::Auto improvements

2009-06-29 Thread Jonathan Yu
Generally I think the old wisdom is to just assume all input is some
sort of string, and to perform any validation you need manually or
with utilities like Scalar::Util's looks_like_number function
(Scalar::Util is core anyway).
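
A minimal sketch of what I mean (the option name and the surrounding
plumbing are made up):

  use Scalar::Util qw(looks_like_number);

  # Hypothetical option value, e.g. pulled off @ARGV by the option parser
  my $value = @ARGV ? shift @ARGV : '';

  die "--count expects a number, got '$value'\n"
      unless looks_like_number($value);

  # A type check alone rarely suffices; domain-specific checks still apply
  die "--count must be 1, 2 or 3\n"
      unless grep { $value == $_ } 1, 2, 3;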

However, it might be convenient to have a way to verify that input is
an integer, a floating point number, or what have you. I'm all for
adding a feature to do this, as long as it's strictly optional -- that
is, a new Getopt::Auto should not break old scripts that expect the
old behaviour.

One problem with simple type validation like this is that it won't
catch everything on its own, so it will probably have to be combined
with other checks anyway.

For example, if you have an --option which can only have the values 1,
2, 3 -- then you can require the field to be an int, but you're still
going to have to check that it matches one of the available numbers
anyway. So in practice I think any moderately robust module will
implement its own sanity checking.

Cheers,

Jonathan

On Mon, Jun 29, 2009 at 1:39 PM, Geoffrey Leachge...@hughes.net wrote:
 I'd appreciate some advice here.

 Getopt::Auto was conceived by Simon Cozens. I've recently adopted it.

 The idea with Getopt::Auto is that it scans your POD looking for =heads
 or =item that have the format: 'foo - what this does is bar', the
 single word followed by a dash being the critical parts, and constructs
 an option-recognition scheme for foo.

 All this is well and good, but it seems to me that there's a flaw,
 which is that there's no way to say that foo takes a string, or int, or
 ...

 So that needs to be specified, and while 'foo - ' is probably
 acceptable to the POD writer, 'foo int - ...' might be less so, taking
 into account that all of that appears in your POD's paragraph headings.

 Or am I wrong? Perhaps there's a better way?

 Thanks.







Re: Module::Build + documentation woes

2009-06-12 Thread Jonathan Yu
I have *never* used h2xs to start a module, though I have copied the
basic scripts/structure/etc that other packages have (which likely
began with h2xs at one point or another)...

I prefer doing it on my own by hand, though I guess that's not an
option for people new to Perl packaging.

On Fri, Jun 12, 2009 at 6:36 AM, Paul Johnsonp...@pjcj.net wrote:
 On Thu, Jun 11, 2009 at 04:12:32PM -0700, Bill Ward wrote:

 If you really want people to adopt Module::Build, change the h2xs script so
 it generates something like the above... (kidding - does anyone still use
 h2xs to start a non-xs module?)

        Always begin with h2xs.
        Always begin with h2xs!
        ALWAYS BEGIN WITH H2XS!

 --
 Paul Johnson - p...@pjcj.net
 http://www.pjcj.net



Re: Process for Removing Qt Module from CPAN

2009-05-26 Thread Jonathan Yu
This may solve both of your dilemmas:

http://search.cpan.org/~ovid/aliased-0.22/lib/aliased.pm

:-)

We packaged that for Debian not too long ago.

On Tue, May 26, 2009 at 11:38 AM, Eric Wilhelm enoba...@gmail.com wrote:
 Hi Chris,

 # from Chris Burel
 # on Tuesday 26 May 2009:

One thing I thought of doing was calling the module Qt4, but that would
populate the Qt namespace.

 That is going to cause you headaches with the PAUSE indexer.  It finds
 all of your package statements and will flag your dist as unauthorized
 if you use an existing package.

 But actually, this sort of thing is rather clunky and unusual for Perl
 code:

  use Qt4;
  my $app = Qt4::Application(\@ARGV);

 That is, if you're worried about the repetitive typing, I think the ::
 is a bigger source of tedium.

 If Application() is going to be a constructor function -- as opposed to
 the more typical new() -- it may as well be exported as QApplication().

 And really, what's wrong with Qt4::Application->new()?

 --Eric
 --
 It works better if you plug it in!
 --Sattinger's Law
 ---
    http://scratchcomputing.com
 ---



Re: Process for Removing Qt Module from CPAN

2009-05-26 Thread Jonathan Yu
Chris:

I'm not sure if that's the most desirable behaviour, as it differs
from the rest of the Perl world... Also, one useful thing is that if
you want to create an object of something in Perl you could do:

my $foo = Foo::Bar->new();
my $bar = $foo->new();

Which would create a $bar of the same type as $foo. You lose this by
dropping the ->new part.
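
That trick relies on the usual class-or-instance constructor idiom,
something like this (class name is made up):

  package Widget;

  sub new {
      my $proto = shift;
      my $class = ref($proto) || $proto;   # works for Widget->new() and $obj->new()
      my $self  = { @_ };
      return bless $self, $class;
  }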

I'm sure there's other very good reasons as to why those sorts of
constructors are a bad idea.

Jonathan

On Tue, May 26, 2009 at 2:20 PM, Chris Burel chrisbu...@gmail.com wrote:
 No,
 Check out this document from Germain Garand wrote for PerlQt3:
 http://web.mit.edu/perlqt_v3.009/www/index.html#anatomy_of_perlqt

 Syntax elements summary :
   1. All Qt classes are accessed through the prefix Qt::, which
 replaces the initial Q of Qt classes. When browsing the Qt
 documentation, you simply need to change the name of classes so that
 QFoo reads Qt::Foo.
   2. An object is created by calling the constructor of the class. It
 has the same name as the class itself.
      You don't need to say new Qt::Foo or Qt::Foo->new() as most Perl
 programmers would have expected.
      Instead, you just say :
       my $object = Qt::classname(arg_1, ..., arg_n);
      If you don't need to pass any argument to the constructor, simply say :
       my $object = Qt::classname;

 On Tue, May 26, 2009 at 11:13 AM, Jonathan Yu jonathan.i...@gmail.com wrote:
 On Tue, May 26, 2009 at 2:08 PM, Chris Burel chrisbu...@gmail.com wrote:
 It's currently neither.  Right now it looks like this:
 use Qt;
 my $app = Qt::Application(\@ARGV);
 my $hello = Qt::PushButton("Hello world!");
 I'm guessing you meant to say Qt::PushButton->new(...) :-)
 $hello-show();
 etc.
 Which I realize is a problem.

 On Tue, May 26, 2009 at 11:00 AM, Jonathan Yu jonathan.i...@gmail.com 
 wrote:
 Chris:

 Is it Qt4::Application or QApplication (as it was in Qt - ie version 1?)

 On Tue, May 26, 2009 at 1:59 PM, Chris Burel chrisbu...@gmail.com wrote:
 And really, what's wrong with Qt4::Application->new()?

 I've been modeling the Qt4 bindings off the Qt3 ones that Ashley and
 Germain wrote.  And that's how it works in 3, so I kept it.







Process for Removing a Module from CPAN

2009-05-25 Thread Jonathan Yu
Hi:

I'd like to remove the Qt module from CPAN, or be able to take it over.

I'm working with someone else on perlqt4 bindings for my Google Summer
of Code project, and the currently available version of Qt is from
1997 and of little use to anybody. (See: code.google.com/p/perlqt4)

We'd like to be able to push the module through CPAN once we're done.
In order to do that, we need to remove the existing version and
register the namespace to one of us.

I think the top-level namespace is appropriate since that's exactly
what other packages (ie, Wx) do.

So, what's the process for having a module removed from CPAN or at
least having the namespace transferred to someone else, so that we can
create a new package and have that one installed via the CPAN shell?

Cheers,

Jonathan


Re: Help needed testing security of login module

2009-05-21 Thread Jonathan Yu
It's my understanding that the margin by which storing a hashed
password without a salt is better than cleartext is related to the
hash's length. It's harder to calculate and store SHA-512 hashes than
SHA-1 ones, right? I mean, it takes a lot more time and space to
construct the rainbow tables, which could make them infeasible to
generate.

On the other hand, criminals and governments that wish to crack data
would potentially have access to lots of resources, like lots of disk
space and processing power, so that point is moot.

Interesting idea though, using Google to reverse hashes... in that
case you wouldn't even need to know the algorithm used to hash it!

On Thu, May 21, 2009 at 5:44 AM, Aaron Crane p...@aaroncrane.co.uk wrote:
 Bill Ward writes:
 I didn't think that a salt was necessary with a one-way hash.

 Google makes even the best hash functions reversible for some inputs:

 http://www.google.com/search?q=5d41402abc4b2a76b9719d911017c592
 http://www.google.com/search?q=aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
 http://www.google.com/search?q=2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
 http://www.google.com/search?q=9b71d224bd62f3785d96d46ad3ea3d73319bfbc2890caadae2dff72519673ca72323c3d99ba5c11d7c7acc6e14b8c5da0c4663475c2e5c3adef46f73bcdec043

 Storing a hashed password without a salt is only marginally better
 than storing a cleartext password.

 --
 Aaron Crane ** http://aaroncrane.co.uk/



Re: Help needed testing security of login module

2009-05-20 Thread Jonathan Yu
Hi:

Well, some things I can think of are:

1. Use SHA-256 instead of MD5. Even SHA-1 is thought to be possibly
weak, and there have been collisions detected against MD5, worse for
MD's predecessors like MD4. If not SHA, then there are lots of other
great algorithms like WHIRLPOOL that are worth looking at. By and
large, though, I think support/speed/testing for SHA-256 is in good
balance, making it a good choice. If you're really paranoid and want
to future-proof your software, then SHA-512 is good too. If you have
it so that algorithms for saving passwords can be changed by loading a
different module, that would be a useful feature too.

2. Make sure to have a salt value, as it prevents the use of rainbow
tables to get a password. So you have the hash and a known salt kept
separately (the salt is plaintext), and when you check the password
you check: sha256(passphrase + salt) == sha256(passphrase_entered +
salt)
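
A minimal sketch of that check, using the core Digest::SHA module (the
helper names are my own):

  use Digest::SHA qw(sha256_hex);

  # Store hash_password($passphrase, $salt) alongside the plaintext salt
  sub hash_password {
      my ($passphrase, $salt) = @_;
      return sha256_hex($passphrase . $salt);
  }

  sub check_password {
      my ($entered, $salt, $stored_hash) = @_;
      return hash_password($entered, $salt) eq $stored_hash;
  }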

I think there are some modules that do this sort of thing
transparently using mod_perl's authen hook, which means it can be used
to provide login using WWW-Basic-Authentication (though that one is a
bit insecure, even if you use the MD5-digest form).

All in all, it feels to me like you're reinventing the wheel here.
CPAN can be a great resource for these tools.

Cheers,

Jonathan

On Wed, May 20, 2009 at 5:24 PM, Bill Ward b...@wards.net wrote:
 Over the years I've developed my own private Perl web login module.  It
 takes a username or email address and password, checks it against the
 database, and creates the cookies.  It has a 'forgot my password' option
 which is reasonably secure (of course it assumes that the email address of
 record is secure, but that's unavoidable).  It uses MD5 to store passwords
 so there's no plaintext option, and I think it's secure enough for most
 Web apps.  I wrote the initial code many years ago and have been tweaking it
 and adapting it but never released it as its own module, which I'd like to
 finally get around to doing.

 But I'm afraid I may have missed a spot security-wise and would like
 someone who's a little more of an expert in that area to see if they can
 find any holes in its design or implementation that would be unacceptable.

 Any takers?



Re: Help needed testing security of login module

2009-05-20 Thread Jonathan Yu
Bill:

To clarify why a salt is necessary, consider the classic time-space
tradeoff. Let's say I know that your password is exactly 8 characters
long and I know all of the possible characters it could be. So let's
say it's alphanumeric (a-z, A-Z, 0-9, hyphen, period, underscore) -
that's 26+26+10+3 = 65 possible characters in each position.

Then you'd only have to generate 65^8 = 318,644,812,890,625 hashes,
which is feasible on modern hardware but still takes a lot of time, so
an attacker computes them once and stores them all in a database (ie, a
rainbow table). So if you map a bunch of arbitrary plaintexts and
calculate their hash, you can look up the hash and figure out what
text was used to generate that hash. Thus, you've either figured out
the password or an MD5 collision thereof; in either case, you'll be
able to log in.

There are web sites that specialize in that sort of thing. So having a
2-byte salt can really help stop those attacks, or at least make the
amount of space needed infeasible (since every different 2 character
salt will require you to generate an entirely different rainbow
table).
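
To put rough numbers on that (assuming the salt draws from the same
65-character set):

  my $chars      = 26 + 26 + 10 + 3;   # 65 possible characters per position
  my $passwords  = $chars ** 8;        # 318,644,812,890,625 candidate passwords
  my $salt_count = $chars ** 2;        # 4,225 distinct two-character salts
  printf "%.0f candidates; a 2-char salt multiplies table storage by %d\n",
         $passwords, $salt_count;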

For most uses it's probably unnecessary; still, if you can harden
security with just a few extra lines of code, why not?

Cheers,

Jonathan

On Wed, May 20, 2009 at 5:45 PM, Bill Ward b...@wards.net wrote:


 On Wed, May 20, 2009 at 2:39 PM, Jonathan Yu jonathan.i...@gmail.com
 wrote:

 Hi:

 Well, some things I can think of are:

 1. Use SHA-256 instead of MD5. Even SHA-1 is thought to be possibly
 weak, and there have been collisions detected against MD5, worse for
 MD's predecessors like MD4. If not SHA, then there are lots of other
 great algorithms like WHIRLPOOL that are worth looking at. By and
 large, though, I think support/speed/testing for SHA-256 is in good
 balance, making it a good choice. If you're really paranoid and want
 to future-proof your software, then SHA-512 is good too. If you have
 it so that algorithms for saving passwords can be changed by loading a
 different module, that would be a useful feature too.


 Maybe I can make it so that the user can specify whatever cryptodigest they
 like, since opinions vary a lot on this.  MD5 is nice because everyone has
 it, and although there are chinks in its armor as you mention it's still
 pretty widely used and respected.  I don't want to force people to adopt a
 new module if I can avoid it, but it would be good to support something like
 SHA-256 or 512 for those who are more concerned about security.

 2. Make sure to have a salt value, as it prevents the use of rainbow
 tables to get a password. So you have the hash and a known salt kept
 separately (the salt is plaintext), and when you check the password
 you check: sha256(passphrase + salt) == sha256(passphrase_entered +
 salt)

 I'm not doing that, but that wouldn't be hard to add.  I didn't think that a
 salt was necessary with a one-way hash.


 I think there are some modules that do this sort of thing
 transparently using mod_perl's authen hook, which means it can be used
 to provide login using WWW-Basic-Authentication (though that one is a
 bit insecure, even if you use the MD5-digest form).


 Well, some of the apps I have using this are running as CGI rather than
 mod_perl, so right there that rules that one out.  Also, mine is not
 specifically tied to the web; it could be used for other kinds of apps as
 long as there was a suitable translation for the concept of a cookie.

 All in all, it feels to me like you're reinventing the wheel here.
 CPAN can be a great resource for these tools.

 Well, I never found anything on CPAN that did quite the same thing mine
 does.  And I wrote it originally about ten years ago when there was no such
 thing on CPAN for sure.



Re: Help needed testing security of login module

2009-05-20 Thread Jonathan Yu
A few minor points.

On Wed, May 20, 2009 at 6:00 PM, Arthur Corliss
corl...@digitalmages.com wrote:
 On Wed, 20 May 2009, Bill Ward wrote:

 2. Make sure to have a salt value, as it prevents the use of rainbow

 tables to get a password. So you have the hash and a known salt kept
 separately (the salt is plaintext), and when you check the password
 you check: sha256(passphrase + salt) == sha256(passphrase_entered +
 salt)

 I'm not doing that, but that wouldn't be hard to add.  I didn't think that
 a
 salt was necessary with a one-way hash.

 Salts are a way of combating the use of rainbow tables, which is a database
 of precomputed values within certain bounds.  Makes brute force attacks
 virtually painless, because now it's just a lookup.  But don't add a static
 salt, that's almost as pointless as not using one at all.  If you're going
 to use salts make sure you generate a new one every time, preferably
 pulling a few bytes from /dev/u?random or similar.
Not totally pointless, of course, because it would still require
regenerating a rainbow table versus downloading one of them already
available. On the other hand, depending how popular your application
gets, this can be dangerous -- take for example Microsoft's Lan
Manager Hash algorithm, LMHash. Even though it is a specialized
algorithm, it became popular enough to make it feasible/useful to
create and distribute rainbow tables for. So your point is valid in
that case, and it never hurts security nor is it a big deal on
performance.

And /dev/random can be slow, so urandom is a better suggestion, or
even better, using /dev/random to seed a random number generator
algorithm like the Mersenne Twister (which is essentially what
/dev/urandom does)

 If you're really paranoid you'll also do key strengthening, similar to what
 most system authentication does.  Hash with a salt, then hash the result
 with the salt, repeat a few thousand times.

        --Arthur Corliss
          Live Free or Die



Re: Help needed testing security of login module

2009-05-20 Thread Jonathan Yu
On Wed, May 20, 2009 at 6:13 PM, Arthur Corliss
corl...@digitalmages.com wrote:
 On Wed, 20 May 2009, Jonathan Yu wrote:

 Not totally pointless, of course, because it would still require
 regenerating a rainbow table versus downloading one of them already
 available. On the other hand, depending how popular your application
 gets, this can be dangerous -- take for example Microsoft's Lan
 Manager Hash algorithm, LMHash. Even though it is a specialized
 algorithm, it became popular enough to make it feasible/useful to
 create and distribute rainbow tables for. So your point is valid in
 that case, and it never hurts security nor is it a big deal on
 performance.

 I would suggest that the benefit of a static salt is marginal at best since
 many of these hash algorithms aren't exactly computationally intensive on
 today's hardware.  If you have a guy trying to crack passwords from a shadow
 file he's only got to generate one table for all of them, versus a table per
 account.  It's an order of magnitude more difficult in that regard,
 especially if you expand the scope to all users of an application
 everywhere.
That's a pretty valid point. If it's a simple auth system as I
understand it, though, then the users don't have different
permissions, so there's really no point in cracking *all* of the
passwords if you can download all the data with one.

Still, good point. Thanks for that :-)

 And /dev/random can be slow, so urandom is a better suggestion, or
 even better, using /dev/random to seed a random number generator
 algorithm like the Mersenne Twister (which is essentially what
 /dev/urandom does)

 Which was why I included urandom as a suggestion.
Indeed, I was just clarifying for the benefit of others reading this message.
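
Something like this, for instance -- a rough sketch of both suggestions,
a fresh per-password salt from /dev/urandom plus simple key
strengthening (the helper names, salt length and iteration count are
all arbitrary):

  use Digest::SHA qw(sha256);

  # Read a few random bytes for a per-password salt (Unix-ish systems only)
  sub new_salt {
      open my $fh, '<', '/dev/urandom' or die "urandom: $!";
      read $fh, my $salt, 8 or die "short read: $!";
      close $fh;
      return $salt;
  }

  # Hash, then keep re-hashing with the salt a few thousand times
  sub strengthened_hash {
      my ($passphrase, $salt, $rounds) = @_;
      my $digest = sha256($passphrase . $salt);
      $digest = sha256($digest . $salt) for 1 .. ($rounds || 5000);
      return unpack 'H*', $digest;
  }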

        --Arthur Corliss
          Live Free or Die



Re: Dual-Build Modules (What to do if both Makefile.PL and Build.PL exist)

2009-05-16 Thread Jonathan Yu
Hi all:

I just wanted to take a few minutes to thank everyone for their
discussion, particularly Michael Schwern (maintainer of
ExtUtils::MakeMaker among others) and Adam Kennedy (maintainer of
Module::Install among others).

I know this question comes up time and time again, and I had
personally advocated for preferring Build.PL over Makefile.PL, if both
are included in a distribution. I'd like to see a note in
Module::Build or Module::Build::Compat perldocs discussing this, but I
suppose their mailing list/bug reporting mechanism is best for that.

To that end, I have managed to convince the Debian Perl Packagers team
that it is beneficial to switch. However, anecdotal evidence from
upstream suggests that preferring Build.PL over Makefile.PL may break
the build process for our existing packages (some of which have had
patches made against Makefile.PL, I suppose).

What is left is testing the 202+ packages in our control that have
both a Makefile.PL and Build.PL to make sure nothing breaks. If we're
successful in our testing then I don't see any reason why our build
process won't be switched over, particularly given the advice of
everyone here.

Thanks again,

Jonathan Yu
On behalf of the pkg-perl team

On Fri, May 8, 2009 at 4:24 PM, Michael G Schwern schw...@pobox.com wrote:
 Adam Kennedy wrote:
 2009/5/6 Jonathan Yu jonathan.i...@gmail.com:
 The real question at hand here is: for modules that provide both a
 Makefile.PL and Build.PL, which should be preferred? More than that,
 from the perspective of CPAN authors, is it even useful to provide
 both? Now that Module::Build is a core module, maybe only a Build.PL
 should be included.

 When both Makefile.PL and Build.PL exist, you should ALWAYS run the
 Module::Build installation process ( perl Build.PL; perl Build; perl
 Build test; and so on...) and ignore Makefile.PL.

 I concur.  If you can run the Build.PL, run it.

 The Makefile.PL exists for compatibility only.  It is not perfect, never was
 intended to be and never will be.  It may contain only a subset of the logic
 in the Build.PL.  It may not fully emulate a real Makefile.PL.

 You can consider this definitive.


 --
 You know what the chain of command is? It's the chain I go get and beat you
 with 'til you understand who's in ruttin' command here.
        -- Jayne Cobb, Firefly



Re: Module::Build + automatic README / LICENSE

2009-05-12 Thread Jonathan Yu
Hi:

On Tue, May 12, 2009 at 7:40 AM, David Cantrell da...@cantrell.org.uk wrote:
 On Mon, May 11, 2009 at 08:07:36PM +0100, Paul LeoNerd Evans wrote:

 I was recently pointed in the direction of my kwalitee tests:
   http://cpants.perl.org/author/PEVANS
 They all fail for not having a README file or a LICENSE.

 After a few months, I came to the conclusion that the CPANTS game isn't
 worth bothering with.  My time is better spent writing code, improving
 my test coverage, improving my portability, and drinking beer than it is
 with silly things like that.

While I agree that you shouldn't take those metrics *too* seriously, I
do think that they have a place and that they are useful in helping
ensure overall software quality in practice (and not just kwalitee as
promised).

And actually looking at your ratings
(http://cpants.perl.org/author/DCANTRELL) it looks like you're
generating META.yml using an old Module::Build or by hand. In either
case, you're missing a license reference in META.yml, which might not
matter to you, but it does save some time for those interested in
programmatically determining what license terms you set.

Obviously nothing worth cutting a new release over, but maybe
something to fold into a future bug-fix release.

I think a lot of the metrics (the non-experimental ones, at least) are
covered by good package maintainers anyway. Certainly, the fact that
your packages pass the majority of tests despite your thinking that
Kwalitee is of little importance is a good sign. It means that these
are often best practices that module maintainers follow anyway, and the
metrics help enforce them, particularly for newer developers.

It's a convenient tool. Not the be-all and end-all of course, but
useful nonetheless. Possibly a lot of the things mentioned by the
Kwalitee reporter are wishlist issues and not real bugs, but y'know,
if you have some free time and you're bored... :-)

 I was wondering, since I have e.g. the following Build.PL:

     module_name => 'IO::Async',
     ...
     license => 'perl',

 surely Module::Build ought to be able to automatically satisfy both
 these conditions?

  * Take the module given by module_name (or some other named one), run it
    through pod2txt > README

 I don't think that's what README is for.  There's no sense in
 duplicating the documentation.  README is where you put the instructions
 on how to build and install your software.

I have to say I agree with this, though I myself just run pod2text like
that. Maybe installation instructions should go in a separate file, ie,
INSTALLATION. Generally my READMEs are (admittedly) just a way to
satisfy the needs of CPANTS.
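
For what it's worth, regenerating such a README from the main module's
POD only takes a couple of lines with the core Pod::Text module (the
paths here are made up):

  use Pod::Text;

  my $parser = Pod::Text->new(sentence => 0, width => 78);
  $parser->parse_from_file('lib/Foo/Bar.pm', 'README');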

  * Look up the named license from a standard set of default ones and
    print it to LICENSE
 Are either of those doable automatically?

 That could be a reasonable thing to do, but even then you still have
 problems with how to represent dual-licenced code.
Indeed. Personally I just copy the license text I mean into the file
itself. I think most of us have one or two preferred licenses (I'm
thinking of GPL and MIT/BSD) for all(-ish) of our code, so it's not a
huge problem to do so manually.

One thing LICENSE files are helpful for is providing clarity for
downstream people packaging modules, ie, for Debian and Fedora. All too
often, module authors completely forget things like a copyright year
(in most boilerplate license reference texts) or to specify exactly
which license version applies.

 --
 David Cantrell | A machine for turning tea into grumpiness

          All praise the Sun God
          For He is a Fun God
          Ra Ra Ra!



Dual-Build Modules (What to do if both Makefile.PL and Build.PL exist)

2009-05-05 Thread Jonathan Yu
Hi wise Perl authors:

I've been building some Perl packages for Debian. I've noticed in the
course of this that dh-make-perl (our preferred script for
transforming Perl distributions into Debian packages) prefers
Makefile.PL over Build.PL.

One problem this has caused is that a Makefile is created which is not
removed when 'perl Build clean' is run. Now, Makefile.PL via
Module::Build::Compat actually runs Build.PL the first time, so the
Makefile expects 'Build' to already exist. The next time the module is
built, 'make' is run, which in turn triggers 'perl Build', but this no
longer works since Build.PL has not been run yet (so there is no
Build).

The real question at hand here is: for modules that provide both a
Makefile.PL and Build.PL, which should be preferred? More than that,
from the perspective of CPAN authors, is it even useful to provide
both? Now that Module::Build is a core module, maybe only a Build.PL
should be included.

Add to this some complication from Module::Install, which also uses
Makefile.PL. So in that case maybe Makefile.PL is preferred (for
Module::Install to do its thing) rather than Build.PL. (On the other
hand, I don't think I've seen modules that mix both M::I and M::B, so
in the wild this will probably not be a problem)

What does everyone else think?

I look forward to reading what other authors have to say about this.

Cheers,

Jonathan


Re: Naming advice for retry manager module

2009-04-21 Thread Jonathan Yu
Hi:

What about http://search.cpan.org/~dlo/Proc-BackOff-0.02/lib/Proc/BackOff.pm

Proc::BackOff. It seems to implement a function similar to TCP packet
retry backoff...

The idea is that after the first failure you wait X before the next
request; after the next failure, 2X; and so on. There is also an
exponential backoff variant.
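
A bare-bones version of that pattern in plain Perl (this is a sketch,
not Proc::BackOff's actual API) looks something like:

  # Retry a coderef with linearly growing delays: X, 2X, 3X, ...
  sub retry_with_backoff {
      my ($code, %opt) = @_;
      my $max   = $opt{attempts} || 5;
      my $delay = $opt{delay}    || 1;    # seconds

      for my $attempt (1 .. $max) {
          my $result = eval { $code->() };
          return $result unless $@;
          die $@ if $attempt == $max;     # out of retries: re-throw
          sleep $delay * $attempt;        # back off a little more each time
      }
  }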

Hope this helps. I haven't read the module description thoroughly but
it deals with doing retries in a way that doesn't totally hammer a
system and bring it to its knees.

Cheers,

Jonathan

On Tue, Apr 21, 2009 at 4:11 PM, Bill Ward b...@wards.net wrote:
 I am planning to write a new module that would manage retries.  Let's say
 you want to talk to some network service that might have errors or be
 offline, and if you get certain kinds of errors (e.g. the host is being
 rebooted, so it's not responding, but will shortly) you want to try again
 after some set interval.  But you don't want to retry forever - eventually
 it should be a hard error.

 We already have this kind of logic embedded in one place but I want to write
 a generic object that would basically hold the retry parameters
 (RETRY_COUNT, RETRY_DIE, RETRY_SLEEP, RETRY_SLEEP_MULTIPLIER,
 RETRY_SLEEP_LIMIT) and respond to queries like:

 - Something failed - should I retry or die? (if number of retries so far is
 less than RETRY_COUNT)
 - How long should I sleep for / wake me up when the sleep time has passed
 - etc.

 I haven't seen anything on CPAN that does this - a quick search for retry
 on CPAN yields tons of results but they all appear to be very
 domain-specific or just a mention in the documentation of some particular
 module.

 Something like Object::Retry maybe?  Then things can inherit from it?



Re: On 'unpack()' - How much did I eat?

2009-04-21 Thread Jonathan Yu
If a Perl patch can be made available for this purpose, then why not
an XS/C module?

On Tue, Apr 21, 2009 at 8:17 PM, Peter Pentchev r...@ringlet.net wrote:
 On Tue, Apr 21, 2009 at 10:24:10AM +0100, Paul LeoNerd Evans wrote:
 I find lately I've been doing a lot of binary protocol work, taking
 messages that live in TCP streams or files or similar, and doing lots of
 pack()/unpack() on them.
 [snip]

 Is there some neater way to do this? Can I either:

  a: Get unpack() to consume bytes from the LVALUE

  b: Ask unpack() how much it ate in a generic fashion?

 Brief answer:
 - it's possible by patching the Perl source, see the last paragraph
  for an explanation about a possible patch;
 - it could be done as an external module that must be kept in sync
  with Perl's pp_pack.c.

 From a quick look at pp_pack.c in both Perl 5.8.9 and Perl 5.10.0,
 both options are possible in theory, but both of them require
 modifying the Perl source in some way :(

 The least intrusive way IMHO would be to un-staticize the unpack_rec()
 routine, so either another part of core Perl or a module that
 promises to stay in sync with pp_pack's unpackstr() routine actually
 *can* make use of unpack_rec's last argument.

 Of course, it is still possible for an external module to duplicate
 the whole of pp_pack.c and take great pains to stay in sync with core,
 but I'm not sure anyone would actually *want* to do that :  Although,
 actually, I hereby volunteer to do it - to try to make a separate Perl
 module out of a copy of 5.10.0's pp_pack.c, and then follow Perl's
 development and update it as needed - that is, if there's anyone who
 actually thinks this would be a good idea and if there's no-one who
 thinks it would be a very bad idea.   I can see why it could be a bad
 idea, but here's a call for opinions / votes / whatever :)  If people
 want it, I can try.

 An easier (from a programmer's point of view) and nightmarish
 (from a module writer's point of view) solution would be a patch to
 Perl that adds another function (say, lunpack()) to pp_pack.c, that
 does pretty much the same as the current unpack(), but also returns
 (or stores somewhere) the number of bytes consumed.  It would only
 be useful if it is actually accepted into the core Perl and makes it
 into a release that you can require.  I think I could write a patch
 like that tomorrow after I've actually had some sleep :)

 G'luck,
 Peter

 --
 Peter Pentchev  r...@ringlet.net    r...@space.bg    r...@freebsd.org
 PGP key:        http://people.FreeBSD.org/~roam/roam.key.asc
 Key fingerprint FDBA FD79 C26F 3C51 C95E  DF9E ED18 B68D 1619 4553
 I am the meaning of this sentence.



Re: a lot of silliness about Module::Build

2009-04-10 Thread Jonathan Yu
Hi Burak:

2009/4/10 Burak Gürsoy burakgur...@gmx.net:
 -Original Message-
 From: Paul LeoNerd Evans [mailto:leon...@leonerd.org.uk]
 Sent: Friday, April 10, 2009 6:17 PM
 To: Ovid
 Cc: module-authors@perl.org
 Subject: Re: a lot of silliness about Module::Build

 I find this too. Of all my modules, any of them that don't have XS code
 in them simply provide a dual Build.PL / Makefile.PL as written by
 M::B's create_makefile_pl => 'traditional' setting.

 I'm bundling a normal Makefile.PL for now but I think that I'll eventually
 use that option. But I always use M::B to build distributions and never
 duplicate that part in Makefile.PL. It's only there for compatibility,
 nothing else.

 It only becomes even vaguely complicated on a few of my XS ones, where
 M::B expects to find lib/Foo/Bar.xs whereas EU::MM wants only Bar.xs

 This random inconsistency annoys me - IMHO M::B's behaviour here is
 much more preferable, for reasons of being able to find the code, of
 making the file unique in case I want more than one,...

 I've experienced the same issue in one of my modules, but that's easy to fix
 by adding this:

     xs_files => { 'Bar.xs' => 'lib/Foo/Bar.xs' },

 then you can use a regular Makefile.PL too.

When developing an XS module, I ran into the same problem, but then I
just gave up on MakeMaker and instead made the Makefile.PL a passthrough
for M::B.

I chose to do this so that I could put all my C code into its own src/
folder and install it from there.
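
A rough sketch of that kind of Build.PL (the module name and versions
are made up):

  use strict;
  use warnings;
  use Module::Build;

  my $build = Module::Build->new(
      module_name        => 'Foo::Bar::XS',
      license            => 'perl',
      c_source           => 'src',           # C sources live in src/
      create_makefile_pl => 'passthrough',   # generated Makefile.PL defers to Build.PL
      requires           => { perl => '5.006' },
  );
  $build->create_build_script;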





I think EUMM is okay to support for simple Perl-only modules, but
things like recommends in Module::Build make it really attractive to
me as a CPAN developer. So I use simple EUMM modules where possible,
but otherwise if I need to have a slightly more complex build, I
switch to Build.PL

I think we should work to slowly phase out EUMM, since supporting it
is undoubtedly a nightmare (writing Makefiles and calling those?!).
M::B lets it all be done from Perl itself, and seems to be reaching a
point where it's mature/stable enough for real widespread use.

Plus being able to subclass it is tres cool.

Just my two cents.

Cheers,

Jonathan
(PAUSE: FREQUENCY)


Re: Module name suggestions? - automatic per-OS subclass

2009-03-25 Thread Jonathan Yu
Hi Paul:

This sounds like a great idea. However, I would recommend that when
you do write your module, you include some way of determining which
implementation is currently in use.

For example, it's difficult to detect which flavour of File::Spec is
in use, because it's set up so that it simply does @ISA =
('File::Spec::Unix'); other classes, including the Win32 one, subclass
the Unix class to get its basic functionality too, so
File::Spec->isa('File::Spec::Unix') doesn't work properly.

For reasons related to module naming, I wouldn't simply do:
File::Spec::$^O. Consider the case where Linux and OS/2 both use the
same class, Unix (as happens with File::Spec). Then simply appending
$^O will mean you need to copy the class for both 'linux' and 'OS2'.

File::Spec handles it internally by using a hash mapping to the
different modules where appropriate. But this might not work in
general (as your class is designed for), since the behaviours of
different operating systems are different. File::Spec does it because
filehandling on OS2 and Linux are similar, but other behaviours may be
different.
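
Something along these lines, perhaps (the class names are made up; the
point is just that the mapping stays queryable later):

  # Map $^O values to implementation classes, with a sane default
  my %class_for_os = (
      MSWin32 => 'My::Thing::Win32',
      darwin  => 'My::Thing::Unix',
      linux   => 'My::Thing::Unix',
  );

  my $impl = $class_for_os{$^O} || 'My::Thing::Unix';

  (my $file = "$impl.pm") =~ s{::}{/}g;
  require $file;

  # Expose which implementation was chosen so callers can check it
  our $IMPLEMENTATION = $impl;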

So, I guess this is just my long-winded way of saying that you should
include a method of mapping operating systems to class names, but in a
portable manner, so that you can later determine which one is
currently in use. I ran into this problem with one of my modules that
used File::Spec under Win32, where my program was expecting Unix-style
path names. So I needed a way to detect if File::Spec was running
under Unix. If not, then the module was to convert the path stuff to
Unix-like, which obviously isn't necessary if we're already running
under Unix. (This might not be possible under my current understanding
of how your module is supposed to work, since it appears to be as
zero-configuration as possible. Hopefully there is a way to pass
parameters, before the call to your module's subclass-finding magic,
to tell it about such aliases, since giving every possible $^O string
its own class might not be strictly necessary.)

As for what your module should be named, no good ideas really come to
mind. If you're going to leave out the Magic class names (personally I
don't see anything wrong with them, and think Class::OSMagic might be
appropriate), then my next choice would be Class::ForSystem, since it's
not too long and conveys what the module is designed to do.

Doing stuff like this can get pretty complicated, so I'm glad somebody
is working on something for it :-). Good work

Cheers,

Jonathan
(PAUSE: FREQUENCY)

On Tue, Mar 24, 2009 at 6:30 AM, Paul LeoNerd Evans
leon...@leonerd.org.uk wrote:
 I find a number of times, that I've wanted to make some code that
 certain OSes would have better implementations of than others, or just
 plain implement in a different way.

 There's some existing examples of code that does this; File::Spec comes
 to mind. It chooses a specific subclass at runtime, based on the OS
 choice ($^O), or falling back on a list of decreasing specificness if
 that isn't found. Another example here would be a Linux-specific
 subclass of File::Tail that can use Inotify rather than polling for
 stat(). I've had further thoughts on things like /proc-scraping, network
 interface information, etc...

 What all these have in common is that $^O is used as a key to build a
 list of possible module names, and the code then tries to load and use
 the first one in the list it finds. Perhaps in cases like File::Tail
 where the OS-specific class is simply more optimal, this could be an
 optional installation later on.

 I'd like to write a small module for automatically doing this sort of
 magic; I'm just in short of a name for it. I feel it ought to be
 Class-y; something like

   Class::OSMagic
   Class::OSSpecific
   Class::MagicOS
   Class::SystemSpecific
   Class::ForSystem

 As a brief example of code, consider the following hack on File::Tail.

 Two-line change to File/Tail.pm:

  use Class::OSMagic;

  # Rename constructor
  sub __new
  {
     ...
  }

 The 'use' line would automatically import a 'sub new' into the caller,
 which just does something like

  sub new
  {
      Class::OSMagic::find_best_subclass( shift )->__new( @_ );
  }

 The user of File::Tail need not know the difference; the normal

  my $tailer = File::Tail->new( $path );

 constructor is unaffected.

 If one day someone writes an Inotify-aware Linux extension, all they
 have to do is provide

  sub File::Tail::linux;  # note lowercase, to match $^O

  use Linux::Inotify;

  sub __new
  {
     ...
  }

 Now, any code that tries to use File::Tail objects on a Linux machine
 will automatically find this subclass if it is installed.


 There's a suggestion that the word 'magic' should be avoided - other
 ideas on the name?

 --
 Paul LeoNerd Evans

 leon...@leonerd.org.uk     |    CPAN ID: PEVANS


Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Hi all:

I'm working on a module that will be released into the Public Domain.
It contains some code that is, itself, in the public domain by another
author.

There has been a lot of discussion on the implications of Public
Domain software in places that do not have the notion of Public
Domain, particularly on the Debian list about copyright law in
Germany.

What I did to get around this is provide a clause in the module like so:

# All rights to this package are hereby disclaimed and its contents released
# into the public domain by the author. Where this is not possible, you may
# use this file under the same terms as Perl itself.

So it's released into the public domain, but also the Perl license
(Artistic + GPL) to get around this problem. I have chosen thus far to
reflect this in the Build.PL as: license => 'unrestricted' (ie,
unrestricted distribution).

What I am wondering is- is this the most appropriate license clause to
use?  Should I link to the Perl licensing terms for the META.yml (ie,
http://dev.perl.org/licenses/) OR should I keep my current link of a
paper studying public domain software
(http://edwardsamuels.com/copyright/beyond/articles/public.html)

Thanks in advance for your guidance :-)

Cheers,

Jonathan Yu
(PAUSE: FREQUENCY)


Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Scott:

Posting this back to the list. Hope you don't mind; I think the others
on this list could benefit from your reply.

On Thu, Mar 19, 2009 at 12:01 PM, Scott Elcomb pse...@gmail.com wrote:
 On Thu, Mar 19, 2009 at 11:46 AM, Jonathan Yu jonathan.i...@gmail.com wrote:
 Hi all:

 I'm working on a module that will be released into the Public Domain.
 It contains some code that is, itself, in the public domain by another
 author.

 There has been a lot of discussion on the implications of Public
 Domain software in places that do not have the notion of Public
 Domain, particularly on the Debian list about copyright law in
 Germany.

 What I did to get around this is provide a clause in the module like so:

 # All rights to this package are hereby disclaimed and its contents released
 # into the public domain by the author. Where this is not possible, you may
 # use this file under the same terms as Perl itself.

 So it's released into the public domain, but also the Perl license
 (Artistic + GPL) to get around this problem. I have chosen thus far to
 reflect this in the Build.PL as: license => 'unrestricted' (ie,
 unrestricted distribution).

 Hi Jonathan,

 I can't provide any suggestions to this, but it seems (to me anyway)
 that releasing under the Public Domain would automatically preclude
 any other licensing terms.  Is that not the heart (if not point) of
 Public Domain?

The idea is that the module is public domain (do whatever you want
with it, no terms, no copyright), however, in certain jurisdictions
that don't allow public domain, or where an author chooses to do so,
the module may be used under the terms of Artistic/GPL.

Basically this is no different from *just* public domain in countries
that support the idea of it, since, if something is public domain, you
can do whatever you want with it, including putting it under a
different license. That's my understanding of public domain, anyway,
but I'm not a lawyer.

 Anyway, I'm not sure if it's of any value to you however the folks
 behind the Creative Commons licenses recently released a new Public
 Domain Certification called CC0 (CC-Zero).  You can find more
 information here: http://creativecommons.org/about/cc0

I will look into this, but the problem with CC licenses is that they
are not Perl-approved - that is, they do not have fields in
Module::Build's license field, though public domain isn't really a
license, it's an explicit statement that there need be no licensing
because there is no ownership of copyright.

The implications of this are particularly important for Debian
packaging. By saying that the code is licensed under (one of) Public
Domain (no license), Perl Artistic or GNU GPL, it gets around the
restrictions of jurisdictions that do not allow authors to place their
work into the public domain.

The Perl license itself is either Perl Artistic or GNU GPL. The GPL is
incompatible with the Perl Artistic license, but nonetheless, because
of the or clause, Perl's licensing is not contradictory. You can
pick one or the other. If someone decides to fork the Perl code into
something else, they can license it as EITHER Perl Artistic OR GPL, or
continue to license it as both.

 I look forward to reading the paper you linked-to in your original post.

 Take care and thanks for the though provoking question!
 - Scott.

 --
  Scott Elcomb
  http://www.psema4.com/



Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Hi Shlomi,

I've looked into the CC0 license that Scott mentioned, and it looks promising.

I wonder if it is legally permissible to provide use of the code under
several licenses, ie:

1. GPL (should it be GPL 2+ only?)
2. Artistic 2.0+
3. Public Domain
4. CC0
5. MIT
6. BSD

Basically I want this code to be as free as possible, and I don't much
care what people do with it.

Dominique's reference to Wikipedia's Public Domain text might be
useful, too. Is it easier to do that?

And this all still leaves the question, what do I do for META.yml's
license field, and Build.PL's license part?

Jonathan

On Thu, Mar 19, 2009 at 12:13 PM, Shlomi Fish shlo...@iglu.org.il wrote:
 Hi Jonathan!

 On Thursday 19 March 2009 17:46:39 Jonathan Yu wrote:
 Hi all:

 I'm working on a module that will be released into the Public Domain.
 It contains some code that is, itself, in the public domain by another
 author.

 There has been a lot of discussion on the implications of Public
 Domain software in places that do not have the notion of Public
 Domain, particularly on the Debian list about copyright law in
 Germany.

 What I did to get around this is provide a clause in the module like so:

 # All rights to this package are hereby disclaimed and its contents
 released # into the public domain by the author. Where this is not
 possible, you may # use this file under the same terms as Perl itself.


 Well, if you're keen on being faithful to the public-domain nature of the
 code, you may wish to instead say Where this is not possible, you may use
 this file under the terms of the MIT X11 Licence (
 http://www.opensource.org/licenses/mit-license.php ), which is the closes
 licence you can get to PD. (Except for the http://en.wikipedia.org/wiki/WTFPL
 , but it's kinda a joke).

 I had written about why saying This program can be used under the same terms
 as Perl itself is problematic here:
 http://use.perl.org/~Shlomi+Fish/journal/36050 (also read the comments). If
 you still want to licence it under the same terms as Perl, make sure you also
 include the 2.0 version (and later - very important) of the Artistic
 Licence, which is GPL-compatible, allows use by proprietary software,
 and is phrased much more sanely and less ambiguously than the original Artistic
 licence. The default Same terms as Perl includes only the GPLv2 and above
 and only the original Artistic Licence - neither of which are very useful.

 Regards,

        Shlomi Fish


 So it's released into the public domain, but also the Perl license
 (Artistic + GPL) to get around this problem. I have chosen thus far to
 reflect this in the Build.PL as: license => 'unrestricted' (ie,
 unrestricted distribution).

 What I am wondering is- is this the most appropriate license clause to
 use?  Should I link to the Perl licensing terms for the META.yml (ie,
 http://dev.perl.org/licenses/) OR should I keep my current link of a
 paper studying public domain software
 (http://edwardsamuels.com/copyright/beyond/articles/public.html)

 Thanks in advance for your guidance :-)

 Cheers,

 Jonathan Yu
 (PAUSE: FREQUENCY)

 --
 -
 Shlomi Fish       http://www.shlomifish.org/
 Understand what Open Source is - http://xrl.us/bjn82

 God gave us two eyes and ten fingers so we will type five times as much as we
 read.




Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Good point David.

Anyway, for the rest of the list, I've come up with the following
text; hopefully it is legally possible:

Copyleft (C) 2009 by Jonathan Yu freque...@cpan.org. All rights reversed.

ABSTRACT

I, the copyright holder of this package, hereby release the entire contents
therein into the public domain. This applies worldwide, to the extent that
it is permissible by law.

In case this is not legally possible, I grant any entity the right to use
this work for any purpose, without any conditions, unless such conditions
are required by law.

RATIONALE

As the author / copyright holder / intellectual property owner, I want this
codebase to be as free (both as in freedom and free beer) as possible in
your legal jurisdiction. This software and the code contained herein is (to
the best of my knowledge) completely unencumbered by patents, copyright,
licensing restrictions, etc.

Some legal departments of commercial entities may be uncomfortable with
using software without first obtaining a license from the author directly.
If this is the case, then I encourage a representative of your legal
department to contact me directly to discuss a token fee being paid in
exchange for a license to unrestricted use of the code.

DISCLAIMER OF WARRANTIES

The software is provided AS IS, without warranty of any kind, express or
implied, including but not limited to the warranties of merchantability,
fitness for a particular purpose and noninfringement. In no event shall the
authors or copyright holders be liable for any claim, damages or other
liability, whether in an action of contract, tort or otherwise, arising from,
out of or in connection with the software or the use or other dealings in
the software.

LICENSING

If you are legally required to do so, then you may use this file under, at
your option:

1. The MIT/X11 License; or,
2. The BSD License; or,
3. The Perl Artistic License, version 2.0 or later; or,
4. The GNU General Public License, version 2.0 or later; or,
5. The Creative Commons CC0 (CC-Zero) License, version 1.0 or later

Note that points (3) and (4) roughly coincide with the distribution terms of
Perl itself; so if you have considered and accepted the licensing restrictions
posed by Perl 5.10, you have accepted suitable terms to use this software.

The full texts of all these licenses follow.

--

I'm not sure how well this would actually hold up in court, because I
don't think many distributions are multi-licensed to this extent :-)

I just want to make sure anyone that wants to, can use my software if
they so choose, totally unencumbered by legal restrictions.

On Thu, Mar 19, 2009 at 1:31 PM, David Cantrell da...@cantrell.org.uk wrote:
 On Thu, Mar 19, 2009 at 12:09:50PM -0400, Jonathan Yu wrote:
 On Thu, Mar 19, 2009 at 12:01 PM, Scott Elcomb pse...@gmail.com wrote:
  Anyway, I'm not sure if it's of any value to you however the folks
  behind the Creative Commons licenses recently released a new Public
  Domain Certification called CC0 (CC-Zero).  You can find more
  information here: http://creativecommons.org/about/cc0
 I will look into this, but the problem with CC licenses is that they
 are not Perl-approved - that is, they do not have fields in
 Module::Build's license field

 You mean not Module::Build-approved.  Module::Build is obviously buggy
 in this area.

 --
 David Cantrell | Minister for Arbitrary Justice

  Cynical is a word used by the naive to describe the experienced.
      George Hills, in uknot



Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Shlomi:

My reasoning for including all the other ones is that companies' legal
teams may have decided that they can use GPL-licensed code, but  have
not investigated the other licenses. This way, they'd be able to say
that the original copyright holder provided a provision licensing the
code under the GPL, so they're already covered by their prior
research.

What are your thoughts there?

I thought publishing with license => 'unrestricted' would be most
appropriate, since I don't want people to think that they are bound by
the restrictions of the MIT License, when in reality, they are not.
They can do whatever they want with it, it's public domain :-)

I briefly skimmed over your article on the Perl Monks site (I think?)
and it was my understanding that Artistic 2.0+ is the preferred
license, which is considered compatible with GPL?

Your idea is probably safer, though.

Jonathan

On Thu, Mar 19, 2009 at 2:30 PM, Shlomi Fish shlo...@iglu.org.il wrote:
 On Thursday 19 March 2009 18:23:23 Jonathan Yu wrote:
 Hi Shlomi,

 I've looked into the CC0 license that Scott mentioned, and it looks
 promising.

 I wonder if it is legally permissible to provide use of the code under
 several licenses, ie:

 1. GPL (should it be GPL 2+ only?)
 2. Artistic 2.0+

 That should probably be Artistic 1.0 and then Artistic 2.0+. But see
 below.

 3. Public Domain
 4. CC0
 5. MIT
 6. BSD

 Basically I want this code to be as free as possible, and I don't much
 care what people do with it.


 You can license your code under as many licenses as you please. For example,
 jQuery ( http://jquery.com/ ) is dually licensed MITL and GPL, and cURL (
 http://en.wikipedia.org/wiki/CURL ) used to be dually licensed MITL and MPL (=
 Mozilla Public Licence). However, if you already decided to license under the
 MIT/X11 Licence it is completely unnecessary to license it under any other
 licence (except perhaps the Public Domain) because the MITL specifically
 allows sub-licensing. Sub-licensing allows anyone to take your MITL work and
 convert their copy into code of a different licence, free or non-free.

 So my suggestion is to licence your code under Public Domain, CC0 and
 MIT, and avoid the rest of the options, which will only be confusing.

 Dominique's reference to Wikipedia's Public Domain text might be
 useful, too. Is it easier to do that?

 And this all still leaves the question, what do I do for META.yml's
 license field, and Build.PL's license part?


 Just say 'mit'. It's supported by later M::B's.

 Regards,

        Shlomi Fish

 --
 -
 Shlomi Fish       http://www.shlomifish.org/
 My Aphorisms - http://www.shlomifish.org/humour.html

 God gave us two eyes and ten fingers so we will type five times as much as we
 read.
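
For concreteness, the Build.PL side of the 'mit' suggestion above looks
roughly like this (the module name and prerequisites are illustrative, not
taken from any real distribution):

use strict;
use warnings;
use Module::Build;

my $builder = Module::Build->new(
    module_name => 'Some::Module',    # illustrative
    license     => 'mit',             # what ends up in META.yml's license field
    requires    => { 'perl' => '5.006' },
);
$builder->create_build_script;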




Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
David:

Interesting idea. I'll add that in as option 6. I don't want to
replace the others though in case the web site disappears at some
point in the future, making the license pretty ambiguous.

I just wonder if a package can really be licensed under 5-6 different
licenses...

Cheers,

Jonathan

On Thu, Mar 19, 2009 at 2:27 PM, David Golden da...@hyperbolic.net wrote:
 On Thu, Mar 19, 2009 at 2:03 PM, Jonathan Yu jonathan.i...@gmail.com wrote:
 If you are legally required to do so, then you may use this file under, at
 your option:

 1. The MIT/X11 License; or,
 2. The BSD License; or,
 3. The Perl Artistic License, version 2.0 or later; or,
 4. The GNU General Public License, version 2.0 or later; or,
 5. The Creative Commons CC0 (CC-Zero) License, version 1.0 or later

 If you're going this way, you could even go so far as to say any OSI
 approved license or the CC0 is allowed.  And reference this URL:
 http://www.opensource.org/licenses/category

 -- David



Re: Public Domain - License Text?

2009-03-19 Thread Jonathan Yu
Eric-

On Thu, Mar 19, 2009 at 2:48 PM, Eric Wilhelm enoba...@gmail.com wrote:
 # from David Golden
 # on Thursday 19 March 2009 11:32:

 I will look into this, but the problem with CC licenses is that
 they are not Perl-approved - that is, they do not have fields in
 Module::Build's license field

 You mean not Module::Build-approved.  Module::Build is obviously
 buggy in this area.
 ... I think that the latest Module::Build relies upon Software::License

 Indeed, it does.

 , so someone could probably send some CC licenses to
 RJBS and lobby him to include them.  Then they would be
 Perl-approved -- at least to the extent they are encoded into the
 toolchain.

 As far as rjbs and I know, Module::Build is the only build tool using
 Software::License.  Module::Install seems to have its own shortlist,
 and ExtUtils::MakeMaker says you should see Module::Build::API ;-)  (I
 guess that would mean it copied and pasted a list from there at one
 point -- but I'll let someone else look for an answer there.)

 Note though that you don't really need toolchain support for the
 license -- just support in your authoring tool (because this is all
 static data by the time it gets to the target machine.)

The problem is that the META-spec is based on the licenses in
Module::Build - so you can't just put anything you want in META.yml,
or your META file will no longer match the specification.

 --Eric
 --
 Don't worry about what anybody else is going to do. The best way to
 predict the future is to invent it.
 --Alan Kay
 ---
    http://scratchcomputing.com
 ---
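
As a rough sketch of the Software::License interface discussed above (the
holder and year are illustrative; treat the method names as approximate and
check the Software::License documentation):

use strict;
use warnings;
use Software::License::MIT;

my $license = Software::License::MIT->new({
    holder => 'A. U. Thor',    # illustrative
    year   => 2009,
});

print $license->notice, "\n";    # short copyright/permission notice
# $license->fulltext returns the complete license text, e.g. for a LICENSE file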



Re: Net::Vimeo::API - namespace question

2009-03-10 Thread Jonathan Yu
Chris:

I'm not sure of the best answer here, though I do know that top-level
namespaces (like Vimeo::API) are generally frowned upon.

A cursory search of "API" turns up:

- WWW::Facebook::API
- WWW::Bebo::API

as the first two results. On the other hand, some people have used the
namespace WebService:: for this sort of thing. Personally I'd go for
WWW::Vimeo.

Oh, as an aside, I think the general recommendation is to instantiate
objects as WWW::Vimeo->new rather than with the new keyword (indirect
object notation). I think this has to do with making the call less
ambiguous, since you are making it obvious that you are calling a class
method rather than a subroutine with a bareword as a parameter. There's
probably something about that in Perl Best Practices :-)
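
To illustrate the difference between the two calling styles (the package
below is only a stand-in for whatever the module ends up being called):

package WWW::Vimeo;    # stand-in; the real name is only a suggestion here
sub new { my ($class, %args) = @_; return bless { %args }, $class }

package main;

# Direct method call: unambiguously a class method
my $api = WWW::Vimeo->new( key => 'k', secret => 's' );

# Indirect object notation: "new WWW::Vimeo(...)" can be parsed as a call
# to a subroutine named new() with a bareword argument, which is why it
# tends to be discouraged
my $api_too = new WWW::Vimeo( key => 'k', secret => 's' );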

Hope this helps.

Cheers,

Jonathan

On Tue, Mar 10, 2009 at 9:57 AM, Chris Vertonghen c...@dimolto.com wrote:
 Hello all,

 I'm writing a module that I dubbed Net::Vimeo::API for now. It's
 rather self-explanatory, but just to make sure: it's supposed to be a
 perl module for interfacing with the full Vimeo API (www.vimeo.com). I
 am thinking about publishing it on CPAN.

 It roughly goes something like this:

 use Net::Vimeo::API;

 my $api = new Net::Vimeo::API({'key'    => 'your_api_key',
                                'secret' => 'your_shared_secret'});

 my $response = $api->execute_method('vimeo.test.echo', {
  'foo' => 'bar',
  'baz' => 'quux',
 });

 Now, I am writing to the list to ask about the namespace. Is it ok if
 I name it Net::Vimeo::API or would it be more appropriate to name it
 something else like Vimeo::API or WWW::Vimeo::API?

 Your input is much appreciated.

 Best,
 Chris.

 --
 Chris Vertonghen
 http://friendfeed.com/cvertonghen



Re: Perl Critic and (honest) hash references

2009-03-03 Thread Jonathan Yu
This is addressed to the participants of this conversation in general:

I think getting to discussions of O(1) or whatever is a bit much for a
language like Perl. It's not designed for speed, though speed is
certainly nice to have. It's also got a fantastic (well, it's got a
bit of a learning curve) interface to achieve really blisteringly fast
code - XS, and any number of the foreign function interfaces people
have been contributing to CPAN over the years.

Choosing array-based parameterization instead of hashes seems to be a
bad idea to me, because you could potentially end up with lots of
cases of sparse arrays. And while it's nice to have typo detection, it
does limit your extensibility, as was conceded earlier.
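
For illustration, hash-based named parameters with a cheap typo check look
something like this (the parameter names are made up):

sub render {
    my %args = @_;

    # Optional sanity check: catch misspelled argument names early
    my %known = map { $_ => 1 } qw( width height title );
    for my $key ( keys %args ) {
        warn "Unknown argument '$key'\n" unless $known{$key};
    }

    my $width  = defined $args{width}  ? $args{width}  : 80;
    my $height = defined $args{height} ? $args{height} : 24;
    return "$args{title}: ${width}x${height}";
}

print render( title => 'demo', width => 100 ), "\n";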

So, if you really need these types of things - like strict typing or
checking of each hash field name to ensure you're doing the right
thing - perhaps you are using the wrong language. Then again, I can
definitely see the usefulness of a module like Tie::Hash::Autovivify
when trying to track down bugs, but personally I wouldn't want the
added overhead of that sort of checking on each hash call (especially
since TIEd interfaces are known to be slow, or at least widely
believed thus).

@David, I had always thought that if you do something like:

if ($ENV{TEST_HASH_KEYS}) {
  require Tie::Hash::Autovivify;
  tie %hash, 'Tie::Hash::Autovivify';  # further tie arguments elided in the original
}
else {
  %hash = ();
} # for accomplishing some sort of end-user debug mode

Then everything should just work, and you wouldn't be losing
performance. You could also just use the Autovivify module (if it's
available) as part of your functionality testing, which would be in
your test .t files if you are distributing on CPAN (and potentially
also if you are not).

Anyway, my point is: to each their own, and profiling is more
important than Big Oh notation. This is just one of the many things
whose treatment in my Computer Science program has upset me--it's way
too academic, and not applied enough, but I suppose
that's University in general.

When it comes down to it, everything in software (and really, in most
fields) comes down to some sort of trade-off. You've got the
classic time-space tradeoff, and then the programmer time vs
application runtime tradeoff, and so on and so forth. Perl has always
seemed to me a language where you can quickly hack things together to
get the job done, and refine the code if and when it becomes necessary
to do so. To that end, there are some nice modules to test code
coverage, profiling, memory leaks, etc.

Cheers,

Jonathan Yu

On Tue, Mar 3, 2009 at 7:41 AM, David Cantrell da...@cantrell.org.uk wrote:
 On Mon, Mar 02, 2009 at 08:22:24PM +, Nicholas Clark wrote:

 Hash lookup should be O(1), independent of number of keys. Of course, a hash
 with more keys uses more memory, but so does an array with more elements.

 But that's a bigger value of 1 than that required for an array lookup.

 --
 David Cantrell | Enforcer, South London Linguistic Massive

  When one has bathed in Christ there is no need to bathe a second time
      -- St. Jerome, on why washing is a vile pagan practice
         in a letter to Heliodorus, 373 or 374 AD



Re: Perl Critic and (honest) hash references

2009-03-03 Thread Jonathan Yu
Jonathan:

On Tue, Mar 3, 2009 at 1:16 PM, Jonathan Rockway j...@jrock.us wrote:
 * On Tue, Mar 03 2009, Jonathan Yu wrote:
 Choosing array-based parameterization instead of hashes seems to be a
 bad idea to me, because you could potentially end up with lots of
 cases of sparse arrays.

 ...
I just don't see anything wrong with hashes for passing around
parameters, and performance really isn't an issue to me in Perl
programs as much as my ability to quickly write them. If I need things
blisteringly fast, I can write it in C and inline it. :-)
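
For what it's worth, inlining C is about this much work when Inline::C is
installed (the function is a toy example):

use strict;
use warnings;
use Inline C => <<'END_C';
int fast_add(int x, int y) {
    return x + y;
}
END_C

print fast_add(2, 3), "\n";    # prints 5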

 Personally I wouldn't want the added overhead of that sort of checking
 on each hash call (especially since TIEd interfaces are known to be
 slow, or at least widely believed thus).

 ...

 Anyway, my point is: to each their own, and profiling is more
 important than Big Oh notation.

 The impression I get from your post is that big-O notation upsets you,
 and you say to measure instead.  OK.  But instead of analyzing
 algorithms or doing measurements, you just make stuff up.  Do you really
 think that speculation is better than mathematical reasoning?

I'm not saying Big-O notation doesn't have its uses, but I'm more an
Engineer than a Computer Science student -- I'm very pragmatic. I do
what works, and if there's a problem, I'll go back and try to fix it
by using different algorithms or something. And that's why profiling
is important.

There are many people smarter than me that do things like figure out
big O time of algorithms, and write the generic algorithms that
everyone uses. It doesn't take knowledge of the internals of Red-Black
trees to be able to benefit from them, but it is important to
understand the algorithm from a conceptual level, so as to know its
advantages as well as its limitations.

There are other issues to optimization than Big O time - namely, cache
affinity. Again, people smarter than me are tackling this problem,
with the Judy array, for example.

Programming/computer science/software engineering, to me, is just
about solving problems using computers. Like any project, you'll have
to focus your time on the constraints that are important to you -
usability, speed, efficiency, memory use, etc. And the argument seems
to be that programmer time costs more than CPU time and memory, so it
makes more sense at first to spend more time creating and less time
thinking about it all in gory depth.

On the other hand, reuse of existing algorithms is what makes it
possible for people to do things rather quickly and efficiently
without having a total understanding of the guts of things.

 This is just one of the many things I have been upset with the
 treatment of in my Computer Science program--it's way too academic,
 and not applied enough, but I suppose that's University in general.

 Well, sort of.  Most CS programs don't cover anything academic either.
 This is why you end up with reimplementations of bubble sort and parsers
 built from hackish regular expressions.  I think we can all agree that
 that kind of lack of understanding makes software hard to maintain (and
 it makes it perform poorly too).

 (Oh, and don't get me started on the widely-held belief that relational
 databases are built from magic pixie dust rather than simple data
 structures.  That one really brings out the wackos.)

 Regards,
 Jonathan Rockway

 --
 print just => another => perl => hacker => if $,=$"



Re: Delivery Status Notification (Failure)

2009-02-14 Thread Jonathan Yu
Hi:

What about the other modules in the "See Also" section of that module?

HTML::Sanitizer, HTML::Scrubber, HTML::StripScripts, HTML::Parser
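
For example, HTML::Scrubber is typically used along these lines (the tag
whitelist here is purely illustrative):

use strict;
use warnings;
use HTML::Scrubber;

# Whitelist a few harmless tags; disallowed markup gets stripped
my $scrubber = HTML::Scrubber->new( allow => [ qw( p b i u em strong ) ] );

my $untrusted = '<p>hello <script>alert(1)</script><b>world</b></p>';
print $scrubber->scrub($untrusted), "\n";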

Cheers,

Jonathan

On Fri, Feb 13, 2009 at 5:53 PM, Bill Ward b...@wards.net wrote:
 I sent mail to the author of HTML::Detoxifier but it bounced.  Does anyone
 here have any suggestions for XSS-killers in Perl?

 -- Forwarded message --
 From: Mail Delivery Subsystem mailer-dae...@googlemail.com
 Date: Fri, Feb 13, 2009 at 2:13 PM
 Subject: Delivery Status Notification (Failure)
 To: william.w...@gmail.com


 This is an automatically generated Delivery Status Notification

 Delivery to the following recipient failed permanently:

 pwal...@metajournal.net

 Technical details of permanent failure:
 Google tried to deliver your message, but it was rejected by the recipient
 domain. We recommend contacting the other email provider for further
 information about the cause of this error. The error that the other server
 returned was: 554 554 pwal...@metajournal.net: Relay access denied (state
 14).

   - Original message -

 MIME-Version: 1.0
 Sender: william.w...@gmail.com
 Received: by 10.223.105.208 with SMTP id u16mr423166fao.14.1234563201184;
 Fri,
13 Feb 2009 14:13:21 -0800 (PST)
 Date: Fri, 13 Feb 2009 14:13:20 -0800
 X-Google-Sender-Auth: 64c746ed7ffc96c9
 Message-ID: 3d2fe1780902131413s2d0c1a62y1f43df84c4d3e...@mail.gmail.com
 Subject: HTML::Detoxifier
 From: Bill Ward b...@wards.net
 To: Patrick Walton pwal...@metajournal.net
 Content-Type: multipart/alternative; boundary=001636c599f30f35870462d42521

 --001636c599f30f35870462d42521
 Content-Type: text/plain; charset=ISO-8859-1
 Content-Transfer-Encoding: 7bit

 I noticed you have posted HTML::Detoxifier on CPAN, but it's version 0.01
 only, and hasn't been updated since 2004.  What's the current status?  I
 have need of something that does what this purports to do, but I'm dubious
 about the fact that it hasn't been updated.

 --001636c599f30f35870462d42521

   - Message truncated -





DBD::SQLite Module Abandoned?

2009-02-13 Thread Jonathan Yu
Hi all:

I write because one of the modules I use in one of my distributions,
DBD::SQLite, seems to be abandoned by its maintainer. I am wondering
what should be done in such a case, because the backlog of requests in
the RT is getting pretty big, and there are some outstanding patches
in the downstream Debian package.

I have tried emailing the author; the message did not bounce, but I
have had no response from him yet, and I sent that message a week or
so ago.

So, I ask you all, what should be done about this? Is there indeed
anything I can do? Is there a way to request maintainership of a
module?

Cheers,

Jonathan
(PAUSE: frequency)


Re: DBD::SQLite Module Abandoned?

2009-02-13 Thread Jonathan Yu
Hi David:

Thanks for your reply. Conveniently, I already have access to Adam's repository.

However, it appears that the module author is Matt Sergeant - not Adam
- so I don't know if he has access to upload modules yet (or they will
be flagged as unauthorized). So hopefully he is given comaintainership
or just takes over maintainership of that module.

I really wish there were some sort of standard process drafted that
would allow CPAN module developers to take over maintainership of
another (abandoned) module. Perhaps there is, and I'm just not aware
of it yet?

Cheers,

Jonathan

On Fri, Feb 13, 2009 at 11:13 PM, David Golden da...@hyperbolic.net wrote:
 On Fri, Feb 13, 2009 at 11:04 PM, Jonathan Yu jonathan.i...@gmail.com wrote:
 Hi all:

 I write because one of the modules I use in one of my distributions,
 DBD::SQLite, seems to be abandoned by its maintainer. I am wondering
 what should be done in such a case, because the backlog of requests in
 the RT is getting pretty big, and there are some outstanding patches
 in the downstream Debian package.

 Hi, Jonathan.  DBD::SQLite is a pretty important module.  It has
 relatively recently been adopted into Adam Kennedy's open repository
 to allow a more community approach to maintaining it:

 http://svn.ali.as/cpan/trunk/DBD-SQLite/

 If you're interested in working on it, please email ad...@cpan.org
 with your PAUSE ID and ask for a commit bit.  I'm not sure who else is
 working on it, but you can probably find out from the subversion blame
 logs.

 Regards,
 David Golden



Re: Name for barcode-reading module?

2009-02-11 Thread Jonathan Yu
Hi Keith:

I have to say, I am pleased that people are working on enriching
CPAN's offerings. Currently it does lack Barcode handling support,
from what I can tell.

I would be totally behind a project that would bring everything under
a consistent Barcode:: interface - that is, the processing/input,
creation, checking, etc. of barcodes.

So, while new namespaces are generally a bad idea, I think that
Barcode processing support can't realistically exist under anything
else. So, go ahead and register the Barcode namespace with PAUSE, and
commit away.

As a side project, I think it would be beneficial to contact the
authors of the other Barcode-related modules and see if you can bring
them under the Barcode:: namespace.

Please do feel free to contact me if you have more specific questions,
and thanks for consulting module-authors :-)

Unfortunately I don't personally have a whole lot of experience
working with barcodes, but I definitely think this sort of thing is
something others will find extremely useful!

Cheers,

Jonathan
(PAUSE: frequency)

On Wed, Feb 11, 2009 at 3:10 PM, Keith Ivey ke...@iveys.org wrote:
 I've written some code to recognize and read barcodes in images that I want
 to clean up and release as a module.  I haven't found any module that
 duplicates the functionality.

 The idea is that you give it an image (JPEG, PNG, or anything else handled
 by GD) that contains a barcode and it returns the string that the barcode
 decodes to.  Currently I handle only EAN-13 (including UPC) barcodes, but I
 may eventually extend it.

 There are modules for generating barcodes under GD::Barcode, modules for
 validating the check digit in barcodes under Business::Barcode, and a
 top-level Barcode namespace that may or may not be a good idea.

 Any suggestions for the right namespace?

 --
 Keith C. Ivey ke...@iveys.org
 Washington, DC



Re: require modules for make test

2009-01-26 Thread Jonathan Yu
Hi there:

I'm not sure if this will help you, but I use build_requires (and
Module::Build) - it seems to do the trick for me. For the author
tests, I put them in 'recommended'.

It's worth noting that the team that packages Perl modules for Debian
moves all requirements to basically build_requires - that is, every
time a module is being packaged, all author tests will be run. This
ensures that appropriate reports are sent to authors, and helps track
down bugs.

Hope this helps,

Jon
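
A minimal Build.PL along those lines might look like this (the distribution
name and module lists are illustrative; note that the Module::Build key is
spelled 'recommends'):

use strict;
use warnings;
use Module::Build;

my $builder = Module::Build->new(
    module_name    => 'Foo::Bar',    # illustrative
    license        => 'perl',
    requires       => { 'perl' => '5.006' },
    build_requires => {
        'Test::More' => '0.62',      # needed to run the test suite
    },
    recommends     => {
        'Test::Pod' => 0,            # author/extra tests only
    },
);
$builder->create_build_script;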

On Sun, Jan 25, 2009 at 12:20 PM, Bill Moseley mose...@hank.org wrote:
 On Sat, Jan 24, 2009 at 10:27:29AM -0800, Bill Moseley wrote:
 Currently, we have a rather simple approach:

 ( eval { require Test::More } and ( $Test::More::VERSION >= 0.62 ) )
 or push @fails, 'Test::More 0.62';

 Hum, after looking at that I replaced it with:

   # Let Perl use its version logic
 eval "use $module_needed $need_version";


 Question still stands about test_requires in Module::Install.


 Here's my full code providing full opportunity to point out stupid
 mistakes and how it can be all done in a just a few lines if I would
 have read the documentation better

 my %test_modules = (
     'Foo::Bar' => 1.23,
 );

 sub MY::test {

     my @missing_mods;

     print "[test modules]\n";

     for my $test_mod ( sort keys %test_modules ) {
         my $need_version = $test_modules{$test_mod};

         # assumes UNIVERSAL::require has been loaded for ->require
         unless ( $test_mod->require ) {
             printf(
                 "- %-40s   *missing* (need %s)\n",
                 $test_mod,
                 $need_version,
             );
             push @missing_mods, $test_mod;
             next;
         }

         # Let Perl use its version logic
         eval "use $test_mod $need_version";    ## no critic
         my $error = $@;                        ## no critic

         if ( $error ) {
             push @missing_mods, $test_mod;
             $error =~ s/ at .*$//s;
             warn "$error\n";
             next;
         }

         printf(
             "- %-40s   loaded (%s >= %s)\n",
             $test_mod,
             $test_mod->VERSION,
             $need_version,
         );
     } ## end for my $test_mod ( sort...

     if ( @missing_mods ) {

         my $fail_string = 'sorry, cannot run tests without '
             . join( ' and ', @missing_mods );

         return <<"EOF";
 test::
 \t\@echo "$fail_string"; exit 1
 EOF
     }

 }




 --
 Bill Moseley
 mose...@hank.org
 Sent from my iMutt




Re: Module Proposal: Video::FourCC

2009-01-21 Thread Jonathan Yu
Just an update to all on what I ended up doing. The
Video::FourCC::Info module is now uploaded to CPAN and is slowly
propagating across the mirrors.

I ended up using SQLite as a data backend. I dropped it into the
lib/Video/FourCC directory, so that it is in the same place as
Info.pm. The module finds the SQLite data via:

# Look for the data file in the same folder as this module
my $data = File::Spec->catfile(
  File::Basename::dirname(__FILE__),
  'codecs.dat'
);

Which might not be ideal and is thus subject to change in the future.
But so far it looks pretty good. I made sure Module::Build would not
ignore this file by adding this parameter:

$builder->add_build_element('dat'); # where $builder is a Module::Build object
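
In context, the relevant Build.PL pieces look roughly like this (the license
and prerequisite versions are illustrative):

use strict;
use warnings;
use Module::Build;

my $builder = Module::Build->new(
    module_name => 'Video::FourCC::Info',
    license     => 'perl',                              # illustrative
    requires    => { 'DBI' => 0, 'DBD::SQLite' => 0 },
);

# Treat .dat files under lib/ as installable content, so that
# lib/Video/FourCC/codecs.dat ends up next to Info.pm after install
$builder->add_build_element('dat');

$builder->create_build_script;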

It works on Windows (under Strawberry Perl) as well as Linux. An
outstanding bug has to do with SQLite itself (or rather, the perl
DBD::SQLite binding), because sqlite_version is used only once (in
the upstream SQLite driver). That's why there are lots of failures
right now for version 1.0. Some of them were fixed for 1.1, but I
still haven't figured out how to best fix it.

It seems DBD::SQLite isn't all that well maintained, and the bug has
been outstanding in the Request Tracker for some time now.

Cheers,

Jonathan

On Thu, Jan 15, 2009 at 12:19 AM, Ben Morrow b...@morrow.me.uk wrote:

 Quoth jonathan.i...@gmail.com (Jonathan Yu):
 All:

 I like the idea of using SQLite because it's small, runs on most
 operating systems and fast. It's also got less memory overhead, which
 is certainly a good thing.

 However, with respect to using File::ShareDir; the module finds the
 directory appropriately, but I'm at a loss as to how to configure
 Module::Build to install it there.

 I suppose I could list File::ShareDir as a build requirement and just
 force the file to be put where File::ShareDir says it should be, but
 it's definitely less elegant than Module::Install, where a subclass
 already provides functionality to put things in the Shared directory
 (I believe it's M::I::Shared)

 Any thoughts on Module::Build and sharedir files?

 If I'm reading the code right, you need to copy your shared files into
 somewhere under blib/lib/auto/Foo/Bar for distribution Foo-Bar or module
 Foo::Bar sometime during ACTION_build, and then it will get installed in
 the right place later. Of course you should be using File::Spec instead
 of specifying that path in Unix format, and by 'blib' I mean
  $self->blib, and so on...

 However, I don't use M::B, so I could be wrong :).

 Ben




Re: Document::Aggregate

2009-01-15 Thread Jonathan Yu
Hi:

To transform multiple documents to a single document, wouldn't the
easiest method be putting them in a .zip or tarball?

Anyway, I looked into doing something like this, and personally am
leaning toward XML if doing this in the future.

I think it would be best if you simply translate documents to some
sort of internal Perl object representation, with Parser modules, then
have a way to convert those Perl objects back to other documents. But
that's probably just because of my recent work with SQL::Translator,
which works by doing just that.
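
A very rough sketch of that parse-to-objects-then-produce layout (all
package names here are hypothetical):

use strict;
use warnings;

package Document::Aggregate::Node;                 # hypothetical
sub new   { my ($class, %args) = @_; return bless { %args }, $class }
sub title { $_[0]->{title} }
sub body  { $_[0]->{body} }

package Document::Aggregate::Parser::Text;         # hypothetical
sub parse {
    my ($class, $text) = @_;
    my ($title) = $text =~ /^(.+)$/m;              # first line as the title
    return Document::Aggregate::Node->new( title => $title, body => $text );
}

package Document::Aggregate::Producer::HTML;       # hypothetical
sub produce {
    my ($class, $node) = @_;
    return sprintf "<h1>%s</h1>\n<pre>%s</pre>\n", $node->title, $node->body;
}

package main;
my $node = Document::Aggregate::Parser::Text->parse("My Document\nSome content...");
print Document::Aggregate::Producer::HTML->produce($node);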

Good luck!

Cheers,

Jonathan

On Fri, Jan 9, 2009 at 7:36 PM, nadim khemir na...@khemir.net wrote:
 Hi, I have a little idea for a module and I'd like to get your input.

 When I run a project, be it a single perl module or a full-scale project with
 tens of modules and lots of documentation (user manual, requirements,
 analysis and design documents, ...) I always have a problem with structuring
 the documents together. The documents themselves are no problem, only how
 they refer to each other and how to navigate in the structure.

 I've used wikis in the past with various degrees of success depending on the
 setup and size of the project.

 Lately, I've been using git much more and I'm trying to have the projects self
 contained. This eliminates wikis, which too often have their own version
 control, keep things in a specific database and need a server setup.

 I have looked at other types of wikis, of which Ikiwiki was close to what I want
 but not really the right thing.

 I believe that what is needed is just a way to structure the documents
 together and have a presentation layer. It might be useful to have the
 possibility to modify those documents through a wiki interface but that's not
 my main goal.

 My idea is to create set of module to:

 - search in the current project documents to be aggregated
 - apply a set of filters, eg: transform a DB to a txt file or multiple files
 in a single file
 - create the aggregation structure
 - render the structure in a specific format, HTML, PDF, ...

 If you have used the excellent
 http://search.cpan.org/~lyokato/Pod-ProjectDocs-0.36/bin/pod2projdocs, I want
 to do the same thing but on any document that might be relevant.

 Some of the documents will be written in wiki description language and I plan
 to support multiple languages (through the filters).

 What do you think about this idea?
 Is there already something out there?
 Would you like to join this project?
 What would you like to see in it?

 Cheers, Nadim.











Re: Module Proposal: Video::FourCC

2009-01-14 Thread Jonathan Yu
All:

I like the idea of using SQLite because it's small, runs on most
operating systems and fast. It's also got less memory overhead, which
is certainly a good thing.

However, with respect to using File::ShareDir; the module finds the
directory appropriately, but I'm at a loss as to how to configure
Module::Build to install it there.

I suppose I could list File::ShareDir as a build requirement and just
force the file to be put where File::ShareDir says it should be, but
it's definitely less elegant than Module::Install, where a subclass
already provides functionality to put things in the Shared directory
(I believe it's M::I::Shared)

Any thoughts on Module::Build and sharedir files?
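
For reference, the runtime lookup side of File::ShareDir is roughly this
(the distribution and file names follow the example under discussion;
getting the file installed there is exactly the open question):

use strict;
use warnings;
use File::ShareDir ();

# Ask File::ShareDir where the distribution's shared files were
# installed, instead of computing a path by hand
my $data = File::ShareDir::dist_file('Video-FourCC-Info', 'codecs.dat');
print "codec database lives at: $data\n";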

Cheers!

Jonathan

On Sat, Jan 10, 2009 at 9:20 PM, Ben Morrow b...@morrow.me.uk wrote:

 Quoth da...@cantrell.org.uk (David Cantrell):
 On Sat, Jan 10, 2009 at 07:41:22PM -0500, Jonathan Yu wrote:
  On Fri, Jan 9, 2009 at 12:57 PM, nadim khemir na...@khemir.net wrote:
  3/ do not put all your data in the module. Although this module is very
   specific and unlikely to be loaded with a lot of other modules at the 
   same
   time, it is good to conserve memory. Split your data and load it
 dynamically,
   compress the data if possible.
  How would you suggest doing this? Do you think using SQLite as a
  backing store would be a good idea, or perhaps the Berkeley DB?

 If you put the data in a __DATA__ segment, it only gets loaded into
 memory when accessed.  My own experience with Number::Phone::UK::Data -
 which has a very big database in __DATA__ is that DBM::Deep is very
 frugal with memory - as the data is indexed, it is quite happy to seek
 back and forth on the DATA filehandle and only load from disk those bits
 that are needed.

 If you're going to do that (and it's a good idea), surely it would be
 better to put the data in a real file and use File::ShareDir or some
 such to find it?

 Ben




Re: Module Proposal: Video::FourCC

2009-01-10 Thread Jonathan Yu
Hi Nadim:

On Fri, Jan 9, 2009 at 12:57 PM, nadim khemir na...@khemir.net wrote:
 On Thursday 08 January 2009 21.15.07 Jonathan Yu wrote:
 Hi all:

 I am looking into writing a module that will look up information on a
 Video file's Four Character Code (FourCC) and display some useful
 stuff, like a description of what the codec actually is. This will be
 useful for the Video::Info package in particular, because it only
 extracts those four bytes from the file and does nothing further.

 I have looked at two references [1], [2] for FourCC codes that are
 commonly used. These descriptions will let people figure out
 characteristics of video files, like the encoding that was used and
 the expected quality of that encoding - for instance, if the FourCC is
 CDVC, then we know that it was encoded using the Canopus DV Codec -
 thus the file was thus created on a digital camcorder, and that's the
 quality we can expect from it.

 The Wikipedia [3] page is pretty useful for explaining what FourCC is,
 and will hopefully establish some relevancy.

 What I am looking for are the community's thoughts on such a module,
 since it would really just be a large internal hash table with FourCC
 codes mapped to descriptions (or, optionally, an SQLite database, but
 I don't think it's really large enough to warrant that - it should fit
 mostly in memory). Aside from searching for the phase FourCC using
 the CPAN search engine, I haven't really done a whole lot of
 searching, and so I don't know if there is/are [a] package(s) that
 handle this type of thing.

 1/ if it is only to provide information and no manipulation, call your module
 Video::FourCC::Info or something that makes it's intent clear. This also
 leaves the top level Video::FourCC open to other modules

This sounds like a good idea, and I have opted to call my module that
name instead.

I'm not sure what other operations need to happen for FourCC's, but
leaving the option open is a good idea.

 2/ put all the references you found in the module, especially those that will
 exist in the future (difficult to guess I know) and those you feel will be
 updated

What do you mean? So far I just have those two. FourCC is pretty
nonstandard unfortunately, so we're really just in a game trying to
make sure we can keep up with whatever people decide to come up with.

 3/ do not put all your data in the module. Although this module is very
 specific and unlikely to be loaded with a lot of other modules at the same
 time, it is good to conserve memory. Split your data and load it dynamically,
 compress the data if possible.

How would you suggest doing this? Do you think using SQLite as a
backing store would be a good idea, or perhaps the Berkeley DB?

 We need more application oriented modules, I'd like very much to see video
 editing on CPAN. Maybe an idea for your future modules ;)

 Good luck with your module, Nadim.




Cheers!


Re: Devel::NoGlobalSig - croak when a global %SIG is installed

2009-01-08 Thread Jonathan Yu
Eric:

Just a thought - if there were a more general module, you could still
have it default to detecting global %SIG changes. It might be
beneficial to have similar reminders for other globals that shouldn't
be messed with, like changes to $ENV{PATH}.

But I'm no expert in any of this. I just think you can have better
reusability while not compromising your ease of use, by setting
reasonable defaults and allowing those defaults to be overridden for
the special cases.
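
One possible sketch of such a general checker (this is not how
Devel::NoGlobalSig actually works; the package name is made up, and it only
compares snapshots at load time and at exit, rather than catching the
change as it happens):

package Devel::NoGlobalChanges;    # hypothetical
use strict;
use warnings;

my (%sig_at_load, $path_at_load);

sub _stringify { defined $_[0] ? "$_[0]" : '' }

sub import {
    # remember what the globals looked like when this module was loaded
    %sig_at_load  = map { $_ => _stringify($SIG{$_}) } keys %SIG;
    $path_at_load = $ENV{PATH};
}

END {
    my %names = map { $_ => 1 } keys %sig_at_load, keys %SIG;
    for my $name ( sort keys %names ) {
        my $then = exists $sig_at_load{$name} ? $sig_at_load{$name} : '';
        warn "global \$SIG{$name} changed between load and exit\n"
            if _stringify( $SIG{$name} ) ne $then;
    }
    warn "\$ENV{PATH} changed between load and exit\n"
        if _stringify( $ENV{PATH} ) ne _stringify($path_at_load);
}

1;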

Cheers,

Jon

On Wed, Jan 7, 2009 at 7:06 PM, Eric Wilhelm scratchcomput...@gmail.com wrote:
 Hi all,

 Thanks for the input and suggestions on this.  I ended up with this sort
 of usage:

  perl -MDevel::NoGlobalSig=die ...

 And that's on its way to the CPAN now.

 # from Jenda Krynicky
 # on Thursday 04 September 2008 10:22:

Why not make it slightly more general?
package CarpIfForgotten;

 This might be interesting as Sub::Exploding (Acme::landmine?), but in
 this case, I think supporting more generality detracts from the
 usability -- because you would have to type something much longer to
 get the diagnostic switch flipped.

 --Eric
 --
 Left to themselves, things tend to go from bad to worse.
 --Murphy's Corollary
 ---
http://scratchcomputing.com
 ---



Module Proposal: Video::FourCC

2009-01-08 Thread Jonathan Yu
Hi all:

I am looking into writing a module that will look up information on a
Video file's Four Character Code (FourCC) and display some useful
stuff, like a description of what the codec actually is. This will be
useful for the Video::Info package in particular, because it only
extracts those four bytes from the file and does nothing further.

I have looked at two references [1], [2] for FourCC codes that are
commonly used. These descriptions will let people figure out
characteristics of video files, like the encoding that was used and
the expected quality of that encoding - for instance, if the FourCC is
CDVC, then we know that it was encoded using the Canopus DV Codec -
thus the file was thus created on a digital camcorder, and that's the
quality we can expect from it.

The Wikipedia [3] page is pretty useful for explaining what FourCC is,
and will hopefully establish some relevancy.

What I am looking for are the community's thoughts on such a module,
since it would really just be a large internal hash table with FourCC
codes mapped to descriptions (or, optionally, an SQLite database, but
I don't think it's really large enough to warrant that - it should fit
mostly in memory). Aside from searching for the phrase "FourCC" using
the CPAN search engine, I haven't really done a whole lot of
searching, and so I don't know if there is/are [a] package(s) that
handle this type of thing.
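
At its core it really would be little more than a lookup table; only the
CDVC entry below comes from the example above, and the rest would be filled
in from the references at the end:

package Video::FourCC;    # sketch only, using the name proposed here
use strict;
use warnings;

my %DESCRIPTION = (
    CDVC => 'Canopus DV Codec',
    # ... further entries from the references below ...
);

sub describe {
    my ($class, $code) = @_;
    return $DESCRIPTION{ uc $code };
}

package main;
print Video::FourCC->describe('CDVC'), "\n";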

Cheers,

Jonathan Yu

--

[1] http://www.fourcc.org/codecs.php
[2] http://msdn.microsoft.com/en-us/library/ms867195.aspx#fourcccodes
[3] http://en.wikipedia.org/wiki/FourCC