[
https://issues.apache.org/jira/browse/BIGTOP-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807134#comment-13807134
]
jay vyas commented on BIGTOP-769:
---------------------------------
One really simple way to do this, without adding much extra machinery to Bigtop,
is a custom test-execution pom file. We do this for the Pig tests and copy it
in at runtime.
It simply uses the execution hooks provided by the gmaven plugin.
<plugin>
  <groupId>org.codehaus.groovy.maven</groupId>
  <artifactId>gmaven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <id>check-testslist</id>
      <phase>verify</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source><![CDATA[
          import org.apache.bigtop.itest.*
          import org.apache.bigtop.itest.shell.*

          Shell sh = new Shell()
          sh.exec("pwd > /tmp/hereiampig")
          // Run the shell-based Pig tests; their exit code decides pass/fail.
          sh.exec("cd ./pigtests && source ./test.sh")
          boolean pass = (sh.ret == 0)
          int ret = sh.ret
          sh.exec("echo " + ret + " > /tmp/pigtestret")
          if (!pass) {
            throw new RuntimeException("Pig tests failed with exit code ${ret}")
          }
        ]]></source>
      </configuration>
    </execution>
  </executions>
</plugin>
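The hook above delegates entirely to the exit status of the sourced script, so any test.sh that follows the usual shell convention works. A minimal sketch of that contract (the run_tests body here is a placeholder, not the real Pig test script):

```shell
#!/bin/bash
# Placeholder standing in for "cd ./pigtests && source ./test.sh";
# the only contract is: exit 0 on success, non-zero on failure.
run_tests() {
  true
}

run_tests
ret=$?
echo "$ret" > ./pigtestret    # same bookkeeping as /tmp/pigtestret above
if [ "$ret" -ne 0 ]; then
  echo "tests failed with exit code $ret" >&2
  exit "$ret"
fi
echo "tests passed"
```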
I was actually thinking of submitting this as a patch, because the Bigtop smokes
for Pig only run on Hadoop 2.x, and you need a shell test or other custom test
for Pig if you want to test versions before 0.11, where the integration testing
was embedded into the Pig source code.
> Create a generic shell executor iTest driver
> --------------------------------------------
>
> Key: BIGTOP-769
> URL: https://issues.apache.org/jira/browse/BIGTOP-769
> Project: Bigtop
> Issue Type: Improvement
> Components: Tests
> Affects Versions: 0.4.0
> Reporter: Roman Shaposhnik
> Assignee: Roman Shaposhnik
> Priority: Blocker
> Fix For: backlog
>
>
> It would be nice to have a way of generically wrapping up shell-based tests
> in iTest framework.
> I imagine a pretty simple implementation (at least initially) where on the
> iTest side we'd have a parameterized testsuite that would look inside a
> specific location under resources and instantiate one test per shell script
> that it finds there (subject to include/exclude filtering constraints). Then
> the tests will be exec'ed inside a pre-set UNIX environment one-by-one (no
> parallel execution for now). If the shell returns 0, the test passes; if it
> returns non-zero, the test fails (and stderr/stdout get captured).
> Finally, I don't have any better answer to what the contract for the
> environment should be, but I'd like folks to chime in with suggestions. We
> can probably start with populating it with ALL of the properties extracted
> from Hadoop config files (core-site.xml, hdfs-site.xml, etc.) with obvious
> transformations (fs.default.name becomes FS_DEFAULT_NAME, etc.). Or we can
> have a manifest of what's allowed and what tests can rely on.
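The driver described above reduces to a simple loop. A sketch in plain shell, assuming a per-directory test layout (the function name, directory, and capture-file locations are illustrative, not the actual iTest implementation):

```shell
#!/bin/bash
# Hypothetical generic shell-test driver: one test per script found
# under a resources directory, run sequentially; exit 0 = pass.
run_shell_tests() {
  local dir="$1" failures=0 script name
  for script in "$dir"/*.sh; do
    [ -e "$script" ] || continue        # directory empty: nothing to run
    name=$(basename "$script")
    # Capture stdout/stderr per test so failures can be inspected later.
    if bash "$script" >"$dir/$name.out" 2>"$dir/$name.err"; then
      echo "PASS $name"
    else
      echo "FAIL $name"
      failures=$((failures + 1))
    fi
  done
  return "$failures"
}

# Example: run whatever scripts exist under ./shell-tests (possibly none).
mkdir -p ./shell-tests
run_shell_tests ./shell-tests
```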
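The property-to-environment-variable transformation suggested above is mechanical; assuming the obvious rule (uppercase, dots to underscores), it is a one-liner:

```shell
#!/bin/bash
# Map a Hadoop property name to the proposed environment variable name,
# e.g. fs.default.name -> FS_DEFAULT_NAME.
prop_to_env() {
  echo "$1" | tr 'a-z.' 'A-Z_'
}

prop_to_env fs.default.name    # prints FS_DEFAULT_NAME
```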
--
This message was sent by Atlassian JIRA
(v6.1#6144)