[ https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12806045#action_12806045 ]

Stephen Watt commented on HADOOP-6332:
--------------------------------------

I've written some of the necessary implementation classes to get a rough draft 
of this framework running. At present, it appears what we have is the ability 
to define and run tests on a specific cluster, with some basic stop/start 
and fault-injection features for cluster management. However, after passing 
all the correct values to the ShellProcessManager constructor (the class 
that identifies the cluster you want to run your unit test on) and attempting 
to call start() on my concrete implementation of AbstractMasterSlaveCluster, 
I get the exception described below. Is anyone else seeing this? I get it on 
both OS X and Linux.

Note: The directory exists and start-all works just fine.

Exception in thread "main" java.io.IOException: Cannot run program 
"start-all.sh" (in directory "/home/hadoop/hadoop-0.20.1/bin"): error=2, No 
such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
        at org.apache.hadoop.test.system.process.ShellProcessManager.execute(ShellProcessManager.java:71)
        at org.apache.hadoop.test.system.process.ShellProcessManager.start(ShellProcessManager.java:62)
        at org.apache.hadoop.test.system.AbstractMasterSlaveCluster.start(AbstractMasterSlaveCluster.java:64)
        at org.apache.hadoop.test.CheckClusterTest.main(CheckClusterTest.java:24)
Caused by: java.io.IOException: error=2, No such file or directory
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
        at java.lang.ProcessImpl.start(ProcessImpl.java:91)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
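
The error=2 here is usually a ProcessBuilder quirk rather than a genuinely missing file: directory() sets the child process's working directory, but a bare command name like "start-all.sh" is still looked up on PATH, not in that directory. On Unix-like JVMs, giving an explicit relative (or absolute) path makes the lookup resolve against the configured directory. A minimal, self-contained sketch of the behavior (the temp directory and class name are illustrative, not part of the patch):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class ProcessBuilderDemo {
    /** Runs `command` with the given working directory; returns the exit
     *  code, or -1 if the executable could not be found (error=2). */
    static int run(String command, File dir) throws InterruptedException {
        try {
            Process p = new ProcessBuilder(command).directory(dir).start();
            return p.waitFor();
        } catch (IOException notFound) {
            return -1;
        }
    }

    public static void main(String[] args) throws Exception {
        // Throwaway directory holding an executable script.
        File dir = new File(System.getProperty("java.io.tmpdir"), "pb-demo");
        dir.mkdirs();
        File script = new File(dir, "start-all.sh");
        try (FileWriter w = new FileWriter(script)) {
            w.write("#!/bin/sh\nexit 0\n");
        }
        script.setExecutable(true);

        // A bare name is searched on PATH, not in directory(): error=2.
        System.out.println(run("start-all.sh", dir));
        // An explicit relative path resolves against directory().
        System.out.println(run("./start-all.sh", dir));
    }
}
```

If this is the cause, having ShellProcessManager prepend the configured bin directory (or "./") to the command before calling Shell should avoid the exception even though start-all.sh works fine when invoked directly from the shell.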

> Large-scale Automated Test Framework
> ------------------------------------
>
>                 Key: HADOOP-6332
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6332
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: test
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.21.0
>
>         Attachments: 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332-MR.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> ----
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced controllability and inspectability of the various components in 
> the system, e.g. daemons such as the namenode and jobtracker should expose 
> their data-structures for query/manipulation etc. Tests would be much more 
> relevant if we could, for example, query for specific states of the 
> jobtracker, scheduler etc. Clearly these apis should _not_ be part of 
> production clusters - hence the proposal is to use aspectj to weave these 
> new apis into debug-deployments.
> ----
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
