Hi Greg,

I appreciate your response! Thanks for clarifying the capabilities and 
limitations of the current functional testing framework. (By the way, did you 
mean "current" as in "current on galaxy-dist", or also "current on 
galaxy-central"?) If I can find the time, these are areas I'd be interested in 
helping to enhance.

Sorry my diction was unclear; by "output files" I meant the transient files 
that a running job is free to create and manipulate in its 
job_working_directory. (I write "transient" rather than "temporary" to avoid 
confusion with, e.g., files created with tempfile.mkstemp; although one might 
conceivably want to inspect those files' contents on failure as well, actually 
saving them might be a very different problem.)
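
To make the distinction concrete, here is a small Python sketch; the path and 
file names are made up for illustration and are not actual Galaxy code:

    import os
    import tempfile

    # Hypothetical location of a job's working directory (assumed to exist).
    job_working_directory = "/galaxy/job_working_directory/000/42"

    # A "transient" file in the sense above: created by the running tool
    # directly inside its job working directory.
    transient_path = os.path.join(job_working_directory, "intermediate.dat")
    with open(transient_path, "w") as handle:
        handle.write("intermediate results\n")

    # By contrast, a file from tempfile.mkstemp lives wherever the system
    # temp directory points, and is not what I mean by "output files".
    fd, tmp_path = tempfile.mkstemp()
    os.close(fd)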

Normally, upon job completion, whether successful or failed, the job working 
directory and everything in it are erased. For certain tools, it could be 
helpful in debugging if the developer (or Galaxy system administrator) were 
able to inspect those files after a failure to see, for instance, whether 
they are properly formed.

I can disable removal of job working directories globally, but then of course 
the job working directories also persist for successful jobs, which can add up 
to a lot of unnecessary storage (the reason they're deleted by default in the 
first place).
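
What I have in mind instead is something along these lines (a pure sketch; 
the hook name and the job_failed() helper are hypothetical, not existing 
Galaxy code):

    import os
    import shutil

    def job_failed(job):
        # Hypothetical predicate; real code would check the job's final state.
        return getattr(job, "state", None) == "error"

    def cleanup_working_directory(job, working_dir,
                                  max_keep_bytes=50 * 1024 * 1024):
        # Remove a job's working directory, unless the job failed and the
        # directory is small enough to be worth keeping for a post-mortem.
        if job_failed(job):
            size = 0
            for root, _dirs, files in os.walk(working_dir):
                for name in files:
                    size += os.path.getsize(os.path.join(root, name))
            if size <= max_keep_bytes:
                return  # keep the directory for inspection
        shutil.rmtree(working_dir, ignore_errors=True)

Something like that would give failed jobs a debugging window without letting 
retained directories grow without bound.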

I'm not sure how work is divided on your team, but can you tell me (a) 
whether the preceding paragraphs actually clarify anything for you, and (b) 
whether this issue is on your team's radar, and specifically on the radar of 
the primary developer(s)/maintainer(s) of the testing framework?

Thanks again for responding to my email.

Best,
Eric

________________________________
From: Greg Von Kuster [g...@bx.psu.edu]
Sent: Wednesday, November 30, 2011 9:56 AM
To: Paniagua, Eric
Cc: galaxy-dev@lists.bx.psu.edu Dev
Subject: Re: [Internal - Galaxy-dev #2159] (New) [galaxy-dev] 2 questions about 
Galaxy's functional testing support

Hello Eric,


Submitted by epani...@cshl.edu

Hi all,

I've read the Wiki page on Writing Functional Tests 
(http://wiki.g2.bx.psu.edu/Admin/Tools/Writing%20Tests) and I've been looking 
through test/base and test/functional and I am left with two questions:

  *   Is it possible to write a test to validate metadata directly on an 
(optionally composite) output dataset?

I'm sure this is possible, but it would require enhancements to the current 
functional test framework.


Everything described on the above page is file-oriented. I see that there is 
TwillTestCase.check_metadata_for_string, but as far as I can tell it is a bit 
nonspecific, since it appears to just do a text search on the Edit page.

This is correct.



I don't yet fully understand the context in which tests run, but is there some 
way to access a "live" dataset's metadata directly, either as a dictionary or 
just as attributes?  Or even to get the actual dataset object?


Not with the current functional test framework.  Doing this would require 
enhancements to the framework.


  *   Does the test harness support retaining output files only for failed 
tests?  Ideally with a cap on how much output data to save.  If not, would this 
be difficult to configure?


I'm not sure what you mean by "output files" in your question.  If you mean 
output datasets that result from running a functional test for a tool, then I 
believe there is no difference whether the test passed or failed.


Thanks,
Eric


Greg Von Kuster
Galaxy Development Team
g...@bx.psu.edu




