Re: Network Testing

2006-02-20 Thread David Steinbrunner
Matisse Enzer wrote:
> On Feb 17, 2006, at 7:57 AM, David Steinbrunner wrote:
>> ... that give the ability to ask for the current kb/s or the like.
> I think you'll have to roll your own, but you might get help from the various NetPacket::* classes, such as NetPacket::TCP

Thanks f

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Matisse Enzer
On Feb 19, 2006, at 7:13 AM, Andreas J. Koenig wrote:
> Make sure you verify that all files in the distro are readable. Reject if the permissions are bogus. Recently we had an increasing number of distros that had absurd permissions.

This reminds me - it doesn't seem like Module::Build allows on
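
The readability check Andreas asks for could be bolted onto a smoke tester with something this simple once the distro is unpacked (a sketch only, not tied to any particular Module::Build or CPANPLUS hook):

    # Walk an unpacked distribution and flag files the testing user cannot
    # read; distros with bogus permissions get rejected before testing.
    use strict;
    use warnings;
    use File::Find;

    my $dir = shift @ARGV or die "usage: $0 <unpacked-dist-dir>\n";

    my @unreadable;
    find(sub { push @unreadable, $File::Find::name if -f && !-r }, $dir);

    if (@unreadable) {
        warn "Unreadable file: $_\n" for @unreadable;
        die "Rejecting distribution: bogus permissions\n";
    }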

Re: Network Testing

2006-02-20 Thread Matisse Enzer
On Feb 17, 2006, at 7:57 AM, David Steinbrunner wrote:
> ... that give the ability to ask for the current kb/s or the like.

I think you'll have to roll your own, but you might get help from the various NetPacket::* classes, such as NetPacket::TCP -
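
A minimal sketch of what rolling your own could look like, assuming Net::Pcap is available to capture the raw packets that the NetPacket::* decoders then pick apart (the interface name, the TCP-only filter and the one-second reporting window are illustrative choices, not anything specified in the thread):

    # Rough per-second throughput estimate from captured TCP traffic.
    # Assumes Net::Pcap plus the NetPacket::* decoders are installed and
    # that the script runs with enough privilege to sniff the interface.
    use strict;
    use warnings;
    use Net::Pcap;
    use NetPacket::Ethernet;
    use NetPacket::IP;
    use NetPacket::TCP;

    my $dev = 'eth0';                # illustrative interface name
    my $err;
    my $pcap = Net::Pcap::open_live($dev, 1500, 0, 1000, \$err)
        or die "Can't open $dev: $err";

    my ($bytes, $mark) = (0, time);

    Net::Pcap::loop($pcap, -1, \&per_packet, '');

    sub per_packet {
        my ($user, $hdr, $raw) = @_;
        my $ip = NetPacket::IP->decode(NetPacket::Ethernet::strip($raw));
        return unless $ip->{proto} == 6;        # TCP only
        my $tcp = NetPacket::TCP->decode($ip->{data});
        $bytes += length $tcp->{data};          # payload bytes seen
        if (time - $mark >= 1) {
            printf "%.1f kbytes/s\n", $bytes / 1024;
            ($bytes, $mark) = (0, time);
        }
    }

Whether you count payload bytes, whole frames, or bits is up to whatever "kb/s or the like" needs to mean for the application.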

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Tyler MacDonald
Adam Kennedy <[EMAIL PROTECTED]> wrote:
>> I'd still like such a thing to be visible in some way. Of course you're going to happily skip tests that require a database if you don't have DBI_DSN set.
> Not necessarily... it all depends on how important it is to you. I see some potential
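
For reference, the DBI_DSN behaviour being discussed is the stock Test::More idiom, roughly:

    # t/database.t - skip the whole file when no database is configured.
    use strict;
    use warnings;
    use Test::More;

    plan skip_all => 'set DBI_DSN to enable the database tests'
        unless $ENV{DBI_DSN};

    plan tests => 1;
    ok(1, 'would exercise the database via $ENV{DBI_DSN} here');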

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Adam Kennedy
Now 100% skips, THAT could potentially be interesting, or maybe TODOs. But then I don't necessarily know why it would be worthy of a different result code. Is there metadata stored apart from these result codes? If so it might be useful to just store the statistics on skips. Assuming this
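
If the raw test output were kept alongside the result code, the skip statistics could be scraped straight out of the TAP stream; a crude sketch (the report file name is made up):

    # Count total, skipped and TODO tests in a saved TAP report.
    use strict;
    use warnings;

    my $file = 'test-output.tap';    # hypothetical saved report
    open my $fh, '<', $file or die "Can't read $file: $!";

    my ($tests, $skips, $todos) = (0, 0, 0);
    while (my $line = <$fh>) {
        next unless $line =~ /^(?:not )?ok\b/;
        $tests++;
        $skips++ if $line =~ /#\s*skip/i;
        $todos++ if $line =~ /#\s*todo/i;
    }

    printf "%d tests, %d skipped (%.0f%%), %d TODO\n",
        $tests, $skips, $tests ? 100 * $skips / $tests : 0, $todos;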

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Tyler MacDonald
Adam Kennedy <[EMAIL PROTECTED]> wrote:
> Firstly is that it might turn an otherwise normal result into something else, with no clear rule. It makes a judgement call that some level of testing is good or bad, which isn't really the place of an installer to call.
>
> The reason Kwalitee ha

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Sébastien Aperghis-Tramoni
Barbie wrote:
>> 12. System is incompatible with the package.
>> Linux::, Win32::, Mac:: modules. Irreconcilable differences.
> Not sure how you would cover this, but point 12 seems to possibly fit. POSIX.pm is created for the platform it's installed on. A recent package I was testing,
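
For the Linux::/Win32::/Mac:: case, the conventional signal is an $^O check that dies early in Makefile.PL, so a tester can classify the run as N/A rather than a failure; a sketch with a made-up module name:

    # Makefile.PL for a hypothetical Win32-only distribution.
    use strict;
    use warnings;
    use ExtUtils::MakeMaker;

    # Bail out before anything is built or tested on the wrong platform.
    die "OS unsupported: Win32::Example needs Windows\n"
        unless $^O eq 'MSWin32' or $^O eq 'cygwin';

    WriteMakefile(
        NAME         => 'Win32::Example',
        VERSION_FROM => 'lib/Win32/Example.pm',
    );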

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Adam Kennedy
Regarding the below, I may have been a little unclear on the layout of the points. The first line is the name of the error. Following lines are meant to provide details to help clarify what it means.

Barbie wrote:
> On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
>> 2. Incompatible p

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Yitzchak Scott-Thoennes
On Mon, Feb 20, 2006 at 11:36:27AM +, Barbie wrote:
>> 12. System is incompatible with the package.
>> Linux::, Win32::, Mac:: modules. Irreconcilable differences.
> Not sure how you would cover this, but point 12 seems to possibly fit. POSIX.pm is created for the platform it's insta

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Tels
Moin,

On Monday 20 February 2006 04:20, Adam Kennedy wrote:
> (Andreas J. Koenig) wrote:
>> On Sun, 19 Feb 2006 22:22:20 +1100, Adam Kennedy <[EMAIL PROTECTED]> said:
>>> 1. Broken or corrupt packaging.
>>> A bad tarball, MANIFEST files missing.
>>
>> Make sur
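
"MANIFEST files missing" is the part that is mechanically checkable; ExtUtils::Manifest already ships the pieces, so a tester could do roughly this from inside the unpacked tree:

    # Run from the distribution root: compare the tree against MANIFEST.
    # manicheck() lists files named in MANIFEST but absent on disk,
    # filecheck() lists files on disk that MANIFEST doesn't mention.
    use strict;
    use warnings;
    use ExtUtils::Manifest qw(manicheck filecheck);

    my @missing = manicheck();
    my @extra   = filecheck();

    die  "Broken packaging: MANIFEST lists missing files: @missing\n" if @missing;
    warn "Files present but not in MANIFEST: @extra\n"                if @extra;
    print "MANIFEST agrees with the unpacked tree\n";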

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Barbie
On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
> 2. Incompatible packaging.
> Packaging unwraps, but missing files for the testing scheme.

You may want to split this into a result that contains no test suite at all (UNKNOWN) and one that has missing files according to the M
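
Telling those two results apart could be as mechanical as the probe below (a sketch; the UNKNOWN wording just mirrors the suggestion above, and the t/*.t-or-test.pl heuristic is an assumption about what counts as a test suite):

    # Run from the distribution root: classify the test setup as either
    # "no test suite shipped" or "test files declared but missing".
    use strict;
    use warnings;
    use ExtUtils::Manifest qw(maniread);

    my $manifest = maniread();    # hashref of MANIFEST entries
    my @declared = grep { m{^t/.*\.t$} || $_ eq 'test.pl' } keys %$manifest;

    if (!@declared) {
        print "UNKNOWN: distribution ships no test suite at all\n";
    }
    elsif (my @gone = grep { !-e $_ } @declared) {
        print "Missing test files listed in MANIFEST: @gone\n";
    }
    else {
        print "Test suite present: @declared\n";
    }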