Integration with external systems like HDFS is a complex topic and should
generally be solved at the level of the software that has no control over a
user's environment (yes, I am talking about Ignite). In Bigtop we do a lot
of this work, including guaranteeing that the version of HDFS that Ignite
has been built against will be present in the cluster, etc.

Generally speaking, if someone refuses to use orchestration and deployment
software similar to Bigtop, finding the correct libs is their own
responsibility. I would advise against loading extra modules or
redistributing libs from another project just to work around someone's
inability to correctly configure their own cluster.
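
For reference, plugging HDFS in as the secondary file system for IGFS is
done in the Ignite Spring XML config, roughly like this (a minimal sketch;
the class names come from the ignite-hadoop module, while the IGFS name and
the NameNode URI are placeholders you would replace with your own):

```xml
<!-- Inside IgniteConfiguration's fileSystemConfiguration list. -->
<bean class="org.apache.ignite.configuration.FileSystemConfiguration">
    <!-- Name under which this IGFS instance is exposed (placeholder). -->
    <property name="name" value="igfs"/>

    <!-- Delegate misses and write-through to a real HDFS cluster. -->
    <property name="secondaryFileSystem">
        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
            <!-- URI of your HDFS NameNode (placeholder host/port). -->
            <constructor-arg value="hdfs://your-namenode-host:9000"/>
        </bean>
    </property>
</bean>
```

For this to work, the Hadoop client JARs from the user's own distribution
(matching the HDFS version actually deployed) must be on the node's
classpath — which is exactly the part that deployment tooling like Bigtop
takes care of.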

Cos

On Fri, Dec 11, 2015 at 04:45PM, Valentin Kulichenko wrote:
> Igniters,
> 
> I'm looking at the question on SO [1] and I'm a bit confused.
> 
> We ship the ignite-hadoop module only in the Hadoop Accelerator and without
> Hadoop JARs, assuming that the user will include them from the Hadoop
> distribution they use. This seems OK to me when the accelerator is plugged
> into Hadoop to run MapReduce jobs, but I can't figure out the steps required
> to configure HDFS as a secondary FS for IGFS. Which Hadoop JARs should be on
> the classpath? Is the user supposed to add them manually?
> 
> Can someone with more expertise in our Hadoop integration clarify this? I
> believe there is not enough documentation on this topic.
> 
> BTW, any ideas why the user gets an exception for the JobConf class, which
> is in the 'mapred' package? Why is a map-reduce class being used at all?
> 
> [1]
> http://stackoverflow.com/questions/34221355/apache-ignite-what-are-the-dependencies-of-ignitehadoopigfssecondaryfilesystem
> 
> -Val
