[ https://issues.apache.org/jira/browse/BIGTOP-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13488137#comment-13488137 ]

Stephen Chu commented on BIGTOP-769:
------------------------------------

I agree that populating all the Hadoop config file properties would be useful. 
Our HDFS tests also look for env variables like HADOOP_HOME, 
HADOOP_MAPRED_HOME, and HADOOP_CONF_DIR.

I like the idea of having a manifest of the basic properties that tests can 
rely on.

Outside of the basic properties, I think it's easiest if the test user is 
responsible for making sure all of a test's required environment properties are 
set. It would get complicated if we tried to make a contract that extends 
beyond the basic properties. For example, a WebHDFS test wants to check whether 
DFS_WEBHDFS_ENABLED is true, but the config files on the node the test is 
running on don't include this property, while the NameNode does have it set to 
true. In this case, I think the test user should be responsible for setting 
DFS_WEBHDFS_ENABLED before running the shell executor.
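Under that convention, the WebHDFS test itself could guard on the variable rather than assume the driver derived it from the local config files; a minimal sketch (the variable name follows the property-to-environment transformation proposed in the issue):

```shell
#!/bin/sh
# Hypothetical sketch: a WebHDFS shell test that expects the test user to
# have exported DFS_WEBHDFS_ENABLED before invoking the shell executor.
# Skipping with exit 0 here is one possible choice; the driver's contract
# for "skipped" tests has not been decided.
if [ "${DFS_WEBHDFS_ENABLED:-false}" != "true" ]; then
    echo "DFS_WEBHDFS_ENABLED is not true; skipping WebHDFS test" >&2
    exit 0
fi

# ... actual WebHDFS checks would go here ...
```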
                
> Create a generic shell executor iTest driver
> --------------------------------------------
>
>                 Key: BIGTOP-769
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-769
>             Project: Bigtop
>          Issue Type: Improvement
>          Components: Tests
>    Affects Versions: 0.4.0
>            Reporter: Roman Shaposhnik
>            Assignee: Roman Shaposhnik
>             Fix For: 0.5.0
>
>
> It would be nice to have a way of generically wrapping up shell-based tests 
> in iTest framework.
> I imagine a pretty simple implementation (at least initially) where on the 
> iTest side we'd have a parameterized testsuite that would look inside a 
> specific location under resources and instantiate one test per shell script 
> that it finds there (subject to include/exclude filtering constraints). Then 
> the tests would be exec'ed inside a pre-set UNIX environment one by one (no 
> parallel execution for now). If the shell script returns 0, the test passes; 
> if it returns non-zero, it fails (and its stderr/stdout get captured).
> Finally, I don't have any better answer to what the contract for the 
> environment should be, but I'd like folks to chime in with suggestions. We 
> can probably start with populating it with ALL of the properties extracted 
> from Hadoop config files (core-site.xml, hdfs-site.xml, etc.) with obvious 
> transformations (fs.default.name becomes FS_DEFAULT_NAME, etc.). Or we can 
> have a manifest of what's allowed and what tests can rely on.
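The property-name transformation described in the quoted issue could be sketched like this (purely illustrative; the `to_env_name` helper is hypothetical, not part of iTest):

```shell
#!/bin/sh
# Hypothetical sketch of the naming transformation suggested in BIGTOP-769:
# uppercase the Hadoop property name and replace dots with underscores,
# so fs.default.name becomes FS_DEFAULT_NAME.
to_env_name() {
    echo "$1" | tr 'a-z.' 'A-Z_'
}

to_env_name "fs.default.name"    # prints FS_DEFAULT_NAME
```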

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
