Hi Pranith,

The Bigtop smoke tests are a good way to go. You can run them against Pig, 
Hive, and so on.

In general, running a simple MapReduce job like wordcount is a good first 
pass.
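
As a concrete sketch of that first pass (the jar path and directories below are illustrative assumptions; they vary by Hadoop install, and core-site.xml must already point at the glusterfs connector):

```shell
# Stage some input, run the stock wordcount example, and read the result.
# Paths are examples only -- adjust to your cluster layout.
hadoop fs -mkdir -p /tmp/wc-in
hadoop fs -put /etc/hosts /tmp/wc-in/
hadoop jar \
  "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /tmp/wc-in /tmp/wc-out
hadoop fs -cat /tmp/wc-out/part-r-00000
```

If the job completes and the output is sane, basic read/write, listing, and rename paths through the plugin are exercised.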

Many other communities, like OrangeFS, run Hadoop tests on alternative file 
systems; you can collaborate with them.

There is an HCFS wiki page you can contribute to on Hadoop.apache.org, where 
we detail Hadoop file system interoperability.



> On Sep 2, 2016, at 3:33 PM, Pranith Kumar Karampuri <pkara...@redhat.com> 
> wrote:
> 
> Hi Jay,
>       Are there any tests that are run before releasing glusterfs upstream 
> to make sure the plugin is stable? Could you let us know the process, so that 
> we can add it to 
> https://public.pad.fsfe.org/p/gluster-component-release-checklist
> 
> -- 
> Pranith
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
