Evans,
The rule of thumb there is usually to create an extra package that depends
on hadoop (in this case) and delivers its jars under /usr/lib/hadoop/lib.

The dependency on hadoop makes sure the directory /usr/lib/hadoop/lib
already exists before this new package is installed. So, your end users
will only have to install this one new package in order to get hadoop and
the ignite dependencies in the hadoop classpath. It also allows people who
don't want ignite in their classpath to simply install hadoop without
getting it.
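
To make that concrete, here is a minimal sketch of what the spec for such
an extra package might look like. The package name, jar paths, and file
list below are hypothetical, for illustration only -- they're not taken
from the actual Bigtop packaging:

```spec
# Hypothetical subpackage that injects Ignite jars into the Hadoop classpath.
%package -n ignite-hadoop-client
Summary:  Ignite jars for the Hadoop classpath
# Pulling in hadoop guarantees /usr/lib/hadoop/lib exists first.
Requires: hadoop

%files -n ignite-hadoop-client
# Jars (or symlinks to them) delivered straight into the Hadoop lib dir.
/usr/lib/hadoop/lib/ignite-*.jar
```

Since the files are owned by the package, they also get cleaned up on
uninstall for free, which ad-hoc symlinking doesn't give you.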

BTW, I admit to not knowing everything about ignite, but if the MR
classpath is all you need it in, it may make sense to put the jars under
/usr/lib/hadoop-mapreduce (and depend on the hadoop-mapreduce package
instead of the hadoop package). There's no need to have ignite jars in the
HDFS classpath, which will be the case if you put them under
/usr/lib/hadoop, as far as I recall.

Mark

On Wed, May 20, 2015 at 9:42 AM, Evans Ye <[email protected]> wrote:

> Hi all,
>
> I'm just thinking about the following problem and would like to get some
> ideas from you experts.
>
> I notice that the ignite-hadoop package needs to rely on puppet recipes to
> symlink the needed jars into /usr/lib/hadoop/lib, otherwise no mapreduce
> job can be run w/ the ignite hadoop accelerator.
> However, there might be cases where users are not using bigtop puppet, or
> where users only need to quickly obtain an environment with specific
> components installed and running. In those cases, installing a
> bigtop-provided package might be the easiest way. Taking RPM packaging as
> an example, what do you think about creating the symlinks in the
> post-install phase of the RPMs, so that the package works right after it
> is installed?
>
> Thanks,
> Evans
>
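
For reference, the %post symlink step Evans asks about would boil down to
something like the sketch below. The directory layout and jar names are
assumptions for illustration (not the real Bigtop paths), and the sketch
points at throwaway temp directories so it can be run safely outside of an
actual RPM scriptlet:

```shell
# Stand-ins for /usr/lib/ignite-hadoop/libs and /usr/lib/hadoop/lib,
# so this sketch doesn't touch a real system.
IGNITE_LIB=$(mktemp -d)
HADOOP_LIB=$(mktemp -d)

# Pretend the ignite-hadoop package delivered these jars (names are made up).
touch "$IGNITE_LIB/ignite-core-1.0.jar" "$IGNITE_LIB/ignite-hadoop-1.0.jar"

# The %post body itself: symlink every Ignite jar into the Hadoop lib dir.
for jar in "$IGNITE_LIB"/ignite-*.jar; do
  ln -sf "$jar" "$HADOOP_LIB/$(basename "$jar")"
done

ls -l "$HADOOP_LIB"
```

A matching cleanup loop would belong in %postun, otherwise uninstalling
leaves dangling symlinks behind -- which is one argument for the
package-owned-files approach over scriptlets.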
