[ https://issues.apache.org/jira/browse/ARROW-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16056603#comment-16056603 ]

Wes McKinney edited comment on ARROW-1130 at 6/20/17 10:51 PM:
---------------------------------------------------------------

You should look at which libstdc++ is being loaded at runtime. Try putting the 
gcc 4.8 runtime libraries first in your LD_LIBRARY_PATH and see what happens. 
I've been burned by this quite a few times. See item 3 in 
https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html; libraries are 
supposed to be *forward compatible* with a NEWER {{libstdc++}}, but if you run 
against an OLDER {{libstdc++}}, then you could have problems.
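A minimal sketch of that check (the test binary path and the gcc 4.8 library 
directory are assumptions; adjust them for your build layout):

{code}
# Which libstdc++ does the test binary actually resolve at runtime?
ldd debug/io-hdfs-test | grep libstdc++

# Which GLIBCXX symbol versions does that library provide?
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX

# Put the gcc 4.8 runtime first on the search path, then confirm
# that the resolution changed before re-running the test
export LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/4.8:$LD_LIBRARY_PATH
ldd debug/io-hdfs-test | grep libstdc++
{code}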



> io-hdfs-test failure
> --------------------
>
>                 Key: ARROW-1130
>                 URL: https://issues.apache.org/jira/browse/ARROW-1130
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>         Environment: Ubuntu 16.04, GCC 4.8, Parquet-cpp
>            Reporter: Young Park
>            Priority: Blocker
>
> Hi,
> I have noticed that arrow-cpp's io-hdfs-test fails when compiled with GCC 
> 4.8, but passes when compiled with GCC 5.4 (though only because it skips all 
> the tests, as it doesn't connect to the HDFS client).
> I went into the test output log, which suggested setting the variable 
> ARROW_HDFS_TEST_USER. After setting that to 'root' and ARROW_HDFS_TEST_PORT 
> to '9000' (the port I use to connect to my local HDFS), the test passes.
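> For example (the test binary location is from my build tree and may differ):
> {code}
> export ARROW_HDFS_TEST_USER=root
> export ARROW_HDFS_TEST_PORT=9000   # the port of my local HDFS namenode
> ./debug/io-hdfs-test
> {code}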
> Do I need to configure the environment and the variables in a specific way to 
> get it to work?
> I'm mainly asking because I am trying to use the arrow and parquet C++ 
> libraries in an external project, and I keep running into segfaults in the 
> libhdfs jni_helper even though I successfully connect to HDFS on my local 
> Hadoop cluster and can even read a single Parquet file. I'm hoping this will 
> help me figure out the issue in my external project as well.
> Thank you in advance for your help.


