Not sure how great a solution this is, but I thought I'd go ahead and post
it in case anyone else can benefit from it.

I ended up copying my native libraries to HDFS under
/native-libraries/<arch>, where <arch> is either "Linux-i386-32" or
"Linux-amd64-64".  Then I used this code in my Mapper's configure() method
to copy the architecture-appropriate native libraries to the current working
directory:

// "Linux-i386-32" or "Linux-amd64-64", depending on the node the task is running on.
String platformName = org.apache.hadoop.util.PlatformName.getPlatformName();
FileSystem hdfs = FileSystem.get(conf);
// Pull the matching .so's out of HDFS into the task's working directory.
hdfs.copyToLocalFile(new Path("/native-libraries/" + platformName + "/libFoo.so"),
                     new Path("libFoo.so"));
hdfs.copyToLocalFile(new Path("/native-libraries/" + platformName + "/libFoo_Native.so"),
                     new Path("libFoo_Native.so"));

This works because the map task's working directory is already in
LD_LIBRARY_PATH.
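
For completeness, here's roughly how it fits into the Mapper (the class name
and error handling below are illustrative, not exactly what I have).  I load
the JNI wrapper explicitly with System.load() and an absolute path; if the
working directory is also on java.library.path, letting the wrapper class
call System.loadLibrary() should work just as well.

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.util.PlatformName;

public class FooMapper extends MapReduceBase /* implements Mapper<...> */ {

  @Override
  public void configure(JobConf conf) {
    try {
      // "Linux-i386-32" or "Linux-amd64-64", depending on the node this task landed on.
      String platformName = PlatformName.getPlatformName();
      FileSystem hdfs = FileSystem.get(conf);
      // Pull the matching .so's out of HDFS into the task's working directory.
      for (String lib : new String[] { "libFoo.so", "libFoo_Native.so" }) {
        hdfs.copyToLocalFile(new Path("/native-libraries/" + platformName + "/" + lib),
                             new Path(lib));
      }
      // Load the JNI wrapper by absolute path; its dlopen() of libFoo.so then
      // resolves through LD_LIBRARY_PATH, which already contains the working dir.
      System.load(new File("libFoo_Native.so").getAbsolutePath());
    } catch (IOException e) {
      throw new RuntimeException("Couldn't copy native libraries out of HDFS", e);
    }
  }
}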


On Fri, Jul 10, 2009 at 3:40 PM, Stuart White <[email protected]> wrote:

> By this, I assume you mean $HADOOP_HOME/lib/native/<arch>.
>
> Yes and no.  The code I'm wanting to call is a JNI wrapper around a legacy
> C shared library.  So, I have the legacy shared library (libFoo.so) and a
> Java class Foo.java which contains native methods (these native methods are
> implemented in libFoo_Native.so).  Inside libFoo_Native.so, it makes
> dlopen() calls to the true legacy shared library, libFoo.so.
>
> If I place the .so's in lib/native/<arch>, libFoo_Native.so gets found
> successfully because this directory has been added to Java's search path for
> native libs (and because libFoo_Native.so is being loaded using
> System.loadLibrary()).  But, when the methods inside libFoo_Native.so call
> dlopen() on libFoo.so, this fails, because lib/native/<arch> is not in
> LD_LIBRARY_PATH.  (At least, I think that's why it's failing...)
>
> Obviously, this is overly complex, and I'm considering how to simplify
> it...
>
> Thanks.
>
>
> On Fri, Jul 10, 2009 at 3:29 PM, Hong Tang <[email protected]> wrote:
>
>> Would it work if you package your native library under the directory of
>> lib/native/<arch>/...?
>>
>>
>> On Jul 10, 2009, at 12:46 PM, Todd Lipcon wrote:
>>
>>> Hi Stuart,
>>>
>>> Hadoop itself doesn't have any nice way of dealing with this that I know of.
>>> I think your best bet is to do something like:
>>>
>>> // Pick the library variant that matches the JVM's data model (32- vs 64-bit).
>>> String dataModel = System.getProperty("sun.arch.data.model");
>>> if ("32".equals(dataModel)) {
>>>   System.loadLibrary("mylib_32bit");
>>> } else if ("64".equals(dataModel)) {
>>>   System.loadLibrary("mylib_64bit");
>>> } else {
>>>   throw new RuntimeException("Unknown data model: " + dataModel);
>>> }
>>>
>>> Then include your libraries as libmylib_32bit.so and libmylib_64bit.so in
>>> the distributed cache.
>>>
>>> Hope that helps
>>> -Todd
>>>
>>> On Fri, Jul 10, 2009 at 12:19 PM, Stuart White <[email protected]> wrote:
>>>
>>>> My Hadoop cluster is a combination of i386-32bit and amd64-64bit
>>>> machines.
>>>> I have some native code that I need to execute from my mapper.  I have
>>>> different native libraries for the different architectures.
>>>>
>>>> How can I accomplish this?  I've looked at using -files or DistributedCache
>>>> to push the native libraries to the nodes, but I can't figure out how to
>>>> make sure I link against the correct native library (for the architecture
>>>> the map task is running on).
>>>>
>>>> Anyone else run into this?  Any suggestions?
>>>>
>>>>
>>
>
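
(If you go the DistributedCache route Todd suggests above instead, the
driver-side setup would look something like the sketch below; the /libs
paths are placeholders, and I haven't actually run this, so treat it as a
starting point rather than a recipe.)

import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

JobConf conf = new JobConf();
// Ship both architecture variants with the job; each task then picks one at
// runtime using the sun.arch.data.model check from Todd's snippet.
DistributedCache.addCacheFile(new URI("/libs/libmylib_32bit.so#libmylib_32bit.so"), conf);
DistributedCache.addCacheFile(new URI("/libs/libmylib_64bit.so#libmylib_64bit.so"), conf);
// Symlink the cached files into each task's working directory so the tasks
// can find them there.
DistributedCache.createSymlink(conf);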
