Not really concerned with wasted time (at least in this case). It just seems more logical to me that if early tests fail, it's a clue to the user that something fundamental to the installation is wrong, whereas if later tests fail, it's more of a clue that perhaps something architecture-dependent isn't working in the module, or who knows what else. Sort of a severity measure.
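To make that ordering concrete, here's a minimal sketch of the kind of "fundamental" early test I mean: a hypothetical t/00-load.t that runs before everything else (test files run in lexical order under prove or make test) and, with a reasonably recent Test::More, aborts the whole suite if the module won't even load. The module name Schedule::DRM is made up for illustration:

    # t/00-load.t -- runs first; if this fails, the installation
    # itself is broken, so don't bother with the higher-level tests
    use strict;
    use warnings;
    use Test::More tests => 1;

    # use_ok loads the module and reports the result as a test;
    # BAIL_OUT stops the entire test run, not just this file
    use_ok('Schedule::DRM')
        or BAIL_OUT('Schedule::DRM will not even load -- installation is broken');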
I take it back, I suppose time is a concern; here's why... My module provides an API for distributing jobs via a DRM (Distributed Resource Manager) like SGE or Condor. On a busy cluster it may take a while for a job to leave the waiting queue, transfer to a running state, and complete. So I suppose my tests in theory could take weeks to complete (I guess I'd better code in an option to not actually distribute jobs; rough sketch at the bottom of this message). But even just communicating with the master node, as the early tests would do, could take a while depending on how the cluster is configured and how busy the network is. I imagine it's similar for a database module: what if the database is heavily loaded? I'll post my module docs to help provide context to the discussion.

----- Original Message -----
From: "Andy Lester" <[EMAIL PROTECTED]>
To: "Fergal Daly" <[EMAIL PROTECTED]>
Cc: "Tim Harsch" <[EMAIL PROTECTED]>; "Perl Mod Authors" <[EMAIL PROTECTED]>
Sent: Friday, April 02, 2004 12:51 PM
Subject: Re: running tests

> > coded correctly. So it's desirable to see the results of the lower level
> > tests first because running the higher level tests could be a waste of time.
>
> But how often does that happen? Why bother coding to optimize the
> failures?
>
> Besides, if you have a smokebot to run the tests for you, then you don't
> care how long things take.
>
> xoa
>
> --
> Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance
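P.S. Here's the rough sketch I mentioned of the "don't actually distribute jobs" option: gate the slow, cluster-dependent tests behind an environment variable with Test::More's skip_all, so a normal install only runs the fast local tests. The variable name DRM_LIVE_TESTS and the Schedule::DRM methods are hypothetical, just for illustration:

    # t/10-submit.t -- live-cluster test, skipped unless the user
    # opts in (the job could sit in a busy queue for a long time)
    use strict;
    use warnings;
    use Test::More;

    if ($ENV{DRM_LIVE_TESTS}) {
        plan tests => 1;
    }
    else {
        plan skip_all => 'Set DRM_LIVE_TESTS=1 to submit a real job to the cluster';
    }

    # Schedule::DRM, submit_job(), and wait_for_completion() are
    # made-up names standing in for the real API
    use Schedule::DRM;
    my $job = Schedule::DRM->submit_job('/bin/true');
    ok( $job->wait_for_completion, 'job ran to completion on the cluster' );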