[
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579892#comment-16579892
]
Allen Wittenauer commented on HDFS-13822:
-----------------------------------------
As reported by the qbt nightly runs, be aware that the ctests for libhdfspp
have been broken since ~June 29th, most likely by one of the following commits:
{code}
[Jun 28, 2018 5:37:22 AM] (aajisaka) HADOOP-15495. Upgrade commons-lang version to 3.7 in
[Jun 28, 2018 5:58:40 AM] (aajisaka) HADOOP-14313. Replace/improve Hadoop's byte[] comparator. Contributed by
[Jun 28, 2018 6:39:33 AM] (aengineer) HDDS-195. Create generic CommandWatcher utility. Contributed by Elek,
[Jun 28, 2018 4:21:56 PM] (Bharat) HDFS-13705:The native ISA-L library loading failure should be made
[Jun 28, 2018 4:39:49 PM] (eyang) YARN-8409. Fixed NPE in ActiveStandbyElectorBasedElectorService.
[Jun 28, 2018 5:23:31 PM] (sunilg) YARN-8379. Improve balancing resources in already satisfied queues by
[Jun 28, 2018 10:41:39 PM] (nanda) HDDS-185: TestCloseContainerByPipeline#testCloseContainerViaRatis fail
[Jun 28, 2018 11:07:16 PM] (nanda) HDDS-178: DN should update transactionId on block delete. Contributed by
{code}
So before digging in, be sure any failures you hit are actually related to this patch and not to that pre-existing breakage.
> speedup libhdfs++ build (enable parallel build)
> -----------------------------------------------
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Pradeep Ambati
> Priority: Minor
> Attachments: HDFS-13382.000.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried
> to force a parallel build by specifying -Dnative_make_args=-j4, the build
> failed due to dependency ordering problems.
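
The failure mode described above is a classic one: a dependency that a serial build satisfies by accident is missing from the CMake target graph, so make -j4 is free to race the steps. Below is a minimal CMake sketch of the pattern and its fix; the target and file names (gen_sources, consumer, template.cc) are hypothetical illustrations, not the actual libhdfs++ CMakeLists.
{code}
# Minimal sketch, assuming a hypothetical generated-source step (a stand-in
# for e.g. protobuf stubs). Without an explicit edge in the target graph,
# "make -j4" may compile the consumer before its generated source exists,
# while a serial build happens to run the steps in the right order.

cmake_minimum_required(VERSION 3.1)
project(parallel_build_sketch CXX)

# Producer: emits a generated source file.
add_custom_command(
  OUTPUT ${CMAKE_BINARY_DIR}/generated.cc
  COMMAND ${CMAKE_COMMAND} -E copy
          ${CMAKE_SOURCE_DIR}/template.cc ${CMAKE_BINARY_DIR}/generated.cc
  DEPENDS ${CMAKE_SOURCE_DIR}/template.cc)
add_custom_target(gen_sources DEPENDS ${CMAKE_BINARY_DIR}/generated.cc)

# Consumer: a library built from the generated file.
add_library(consumer STATIC ${CMAKE_BINARY_DIR}/generated.cc)

# The fix: declare the ordering explicitly so a parallel make serializes
# these two steps. Omit this line and a serial build can still pass by
# accident, while -Dnative_make_args=-j4 fails intermittently.
add_dependencies(consumer gen_sources)
{code}
In a larger tree the producer and consumer typically live in different subdirectories, which is exactly where CMake stops inferring the ordering from source-file paths and an explicit add_dependencies becomes necessary before -jN is safe.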