[jira] [Comment Edited] (HDFS-14394) Add -std=c99 / -std=gnu99 to libhdfs compile flags

2019-04-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16808138#comment-16808138
 ] 

Eric Yang edited comment on HDFS-14394 at 4/2/19 8:43 PM:
--

[~stakiar] The pedantic-errors flag forces syntax that is not defined by standard C 
to error out.  This means the Hadoop unit tests have already been using the GNU 
dialect without realizing it.  GCC is GPLv3 licensed, and it looks like Apache is 
not ok with GPLv3 according to this statement: 
https://apache.org/licenses/GPL-compatibility.html
This makes the problem concerning given the latest information.  +0 from my side.  
Others may provide more insight on how to address this matter.
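
For illustration (a sketch, not part of the original comment), a GNU-only construct 
such as the statement expression below is exactly the kind of dialect usage that 
-pedantic-errors turns into a hard error: gcc -std=gnu99 accepts it, while 
gcc -std=c99 -pedantic-errors rejects it with "ISO C forbids braced-groups within 
expressions".
{code}
/* Compiles with: gcc -std=gnu99 dialect.c
 * Fails with:    gcc -std=c99 -pedantic-errors dialect.c
 * because statement expressions are a GNU extension, not ISO C99. */
#include <stdio.h>

int main(void) {
    int x = ({ int t = 21; t * 2; });  /* GNU statement expression */
    printf("%d\n", x);
    return 0;
}
{code}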


was (Author: eyang):
[~stakiar] The pedantic-errors flag forces syntax that is not defined by standard C 
to error out.  This means the Hadoop unit tests have already been using the GNU 
dialect without realizing it.  GCC is GPLv3 licensed, and it looks like Apache is 
ok with GPLv3 according to this statement: 
https://apache.org/licenses/GPL-compatibility.html
This means the patch is probably ok based on the latest information.  +1 from my 
side.

> Add -std=c99 / -std=gnu99 to libhdfs compile flags
> --
>
> Key: HDFS-14394
> URL: https://issues.apache.org/jira/browse/HDFS-14394
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14394.001.patch
>
>
> libhdfs compilation currently does not enforce a minimum required C version. 
> As of today, the libhdfs build on Hadoop QA works, but when built on a 
> machine with an outdated gcc / cc version where C89 is the default, 
> compilation fails due to errors such as:
> {code}
> /build/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c:106:5:
>  error: ‘for’ loop initial declarations are only allowed in C99 mode
> for (int i = 0; i < numCachedClasses; i++) {
> ^
> /build/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c:106:5:
>  note: use option -std=c99 or -std=gnu99 to compile your code
> {code}
> We should add the -std=c99 / -std=gnu99 flags to libhdfs compilation so that 
> we can enforce C99 as the minimum required version.
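
As a sketch of the minimum-version point (not part of the original description), the 
quoted error reduces to a one-file reproduction: the same source fails under a C89 
default but builds once -std=c99 or -std=gnu99 is passed.
{code}
/* Fails with:   gcc -std=c89 repro.c
 *   "error: 'for' loop initial declarations are only allowed in C99 mode"
 * Builds with:  gcc -std=c99 repro.c   (or -std=gnu99) */
int main(void) {
    int numCachedClasses = 4;  /* hypothetical value; name borrowed from jclasses.c */
    int total = 0;
    for (int i = 0; i < numCachedClasses; i++) {  /* C99 loop-scoped declaration */
        total += i;
    }
    return total;
}
{code}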



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14394) Add -std=c99 / -std=gnu99 to libhdfs compile flags

2019-04-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807296#comment-16807296
 ] 

Eric Yang edited comment on HDFS-14394 at 4/2/19 12:00 AM:
---

[~stakiar] It seems a little strange that it failed the first time, then it 
worked the second time.  The last section of the maven output looks like this:
{code:java}
     [exec] 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:124:
 Failure
     [exec] Expected: (nullptr) != (clusterInfo), actual: 8-byte object <00-00 
00-00 00-00 00-00> vs NULL
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0x00783b7e, pid=28414, 
tid=0x7efc91ac38c0
     [exec] #
     [exec] # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build 
1.8.0_151-b12)
     [exec] # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode 
linux-amd64 compressed oops)
     [exec] # Problematic frame:
     [exec] nmdCreate: Builder#build error:
     [exec] RuntimeException: Although a UNIX domain socket path is configured 
as /tmp/native_mini_dfs.sock.28414.846930886, we cannot start a 
localDataXceiverServer because libhadoop cannot be 
loaded.java.lang.RuntimeException: Although a UNIX domain socket path is 
configured as /tmp/native_mini_dfs.sock.28414.846930886, we cannot start a 
localDataXceiverServer because libhadoop cannot be loaded.
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1209)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1178)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1433)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:509)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2827)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2733)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1697)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:913)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:520)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:479)
     [exec] # C  [hdfs_ext_hdfspp_test_shim_static+0x383b7e]
     [exec] #
     [exec] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid28414.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://bugreport.java.com/bugreport/crash.jsp
     [exec] # The crash happened outside the Java Virtual Machine in native 
code.
     [exec] # See problematic frame for where to report the bug.
     [exec] #
     [exec]
     [exec]
     [exec] 85% tests passed, 6 tests failed out of 40
     [exec]
     [exec] Total Test time (real) =  96.60 sec
     [exec]
     [exec] The following tests FAILED:
     [exec]   3 - test_test_libhdfs_zerocopy_hdfs_static (Failed)
     [exec]  36 - test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
(OTHER_FAULT)
     [exec]  37 - libhdfs_mini_stress_valgrind_hdfspp_test_static (Failed)
     [exec]  38 - memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
(Failed)
     [exec]  39 - test_libhdfs_mini_stress_hdfspp_test_shim_static (Failed)
     [exec]  40 - test_hdfs_ext_hdfspp_test_shim_static (OTHER_FAULT)
     [exec] Errors while running CTest{code}

It was not able to find the libhadoop native library, even though I ran 
mvn clean install -Pnative in hadoop-common-project followed by the same compile 
command in the hadoop-hdfs-native-client project.

There are some differences between [c++11 and 
c99|https://stackoverflow.com/questions/10461331/what-are-the-incompatible-differences-between-c99-and-c11].
  I don't know if using c99 will create instability in libhdfspp.  GCC 4.5+ can 
enforce the c99 standard by passing -std=c99 -pedantic-errors 
-fextended-identifiers instead of gnu99 to prevent GNU features from being included.
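
To make the flag distinction concrete (a sketch, not from the original comment): a 
GNU keyword such as typeof is accepted under gnu99 but rejected under strict c99, 
which is how -std=c99 catches accidental GNU-isms in the code base.
{code}
/* Builds with: gcc -std=gnu99 typeof.c
 * Fails with:  gcc -std=c99 typeof.c
 * because typeof is a GNU keyword that plain c99 disables
 * (the reserved __typeof__ spelling would still compile). */
int main(void) {
    int a = 5;
    typeof(a) b = a + 1;  /* OK under -std=gnu99; error under -std=c99 */
    return b;
}
{code}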


was (Author: eyang):
[~stakiar] It seems a little strange that it failed the first time, then it 
worked the second time.  The last section of the maven output looks like this:
{code:java}
     [exec] 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:124:
 Failure
     [exec] Expected: (nullptr) != (clusterInfo), actual: 8-byte object <00-00 

[jira] [Comment Edited] (HDFS-14394) Add -std=c99 / -std=gnu99 to libhdfs compile flags

2019-04-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807296#comment-16807296
 ] 

Eric Yang edited comment on HDFS-14394 at 4/1/19 11:58 PM:
---

[~stakiar] It seems a little strange that it failed the first time, then it 
worked the second time.  The last section of the maven output looks like this:
{code:java}
     [exec] 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:124:
 Failure
     [exec] Expected: (nullptr) != (clusterInfo), actual: 8-byte object <00-00 
00-00 00-00 00-00> vs NULL
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0x00783b7e, pid=28414, 
tid=0x7efc91ac38c0
     [exec] #
     [exec] # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build 
1.8.0_151-b12)
     [exec] # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode 
linux-amd64 compressed oops)
     [exec] # Problematic frame:
     [exec] nmdCreate: Builder#build error:
     [exec] RuntimeException: Although a UNIX domain socket path is configured 
as /tmp/native_mini_dfs.sock.28414.846930886, we cannot start a 
localDataXceiverServer because libhadoop cannot be 
loaded.java.lang.RuntimeException: Although a UNIX domain socket path is 
configured as /tmp/native_mini_dfs.sock.28414.846930886, we cannot start a 
localDataXceiverServer because libhadoop cannot be loaded.
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1209)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1178)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1433)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:509)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2827)
     [exec] at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2733)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1697)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:913)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:520)
     [exec] at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:479)
     [exec] # C  [hdfs_ext_hdfspp_test_shim_static+0x383b7e]
     [exec] #
     [exec] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/libhdfspp/tests/hs_err_pid28414.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://bugreport.java.com/bugreport/crash.jsp
     [exec] # The crash happened outside the Java Virtual Machine in native 
code.
     [exec] # See problematic frame for where to report the bug.
     [exec] #
     [exec]
     [exec]
     [exec] 85% tests passed, 6 tests failed out of 40
     [exec]
     [exec] Total Test time (real) =  96.60 sec
     [exec]
     [exec] The following tests FAILED:
     [exec]   3 - test_test_libhdfs_zerocopy_hdfs_static (Failed)
     [exec]  36 - test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
(OTHER_FAULT)
     [exec]  37 - libhdfs_mini_stress_valgrind_hdfspp_test_static (Failed)
     [exec]  38 - memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
(Failed)
     [exec]  39 - test_libhdfs_mini_stress_hdfspp_test_shim_static (Failed)
     [exec]  40 - test_hdfs_ext_hdfspp_test_shim_static (OTHER_FAULT)
     [exec] Errors while running CTest{code}

It was not able to find the libhadoop native library, even though I ran 
mvn clean install -Pnative in hadoop-common-project followed by the same compile 
command in the hadoop-hdfs-native-client project.

There are some differences between [c++11 and 
c99|https://stackoverflow.com/questions/10461331/what-are-the-incompatible-differences-between-c99-and-c11].
  I don't know if using c99 will create instability in libhdfspp.  GCC 4.5+ can 
enforce the c99 standard by passing -std=c99 -pedantic-errors 
-fextended-identifiers instead of gnu99 to prevent GNU features from being included.
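
As for -fextended-identifiers, here is a sketch under the assumption of GCC 4.x 
behavior (modern GCC enables this by default in c99 mode): C99 permits universal 
character names in identifiers, but GCC of that era only accepted them behind the 
flag.
{code}
/* Assumed GCC 4.x behavior:
 * Builds with: gcc -std=c99 -pedantic-errors -fextended-identifiers ucn.c
 * Fails on older GCC without -fextended-identifiers. */
int main(void) {
    int caf\u00e9 = 41;   /* identifier containing a UCN (C99 6.4.2.1) */
    return caf\u00e9 + 1;
}
{code}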


was (Author: eyang):
[~stakiar] It seems a little strange that it failed the first time, then it 
worked the second time.  The last section of the maven output looks like this:
{code:java}
     [exec] 
/home/eyang/test/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:124:
 Failure
     [exec] Expected: (nullptr) != (clusterInfo), actual: 8-byte object <00-00