Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-10-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/570/

[Oct 25, 2017 2:40:33 PM] (aajisaka) HADOOP-14979. Upgrade 
maven-dependency-plugin to 3.0.2. Contributed by
[Oct 25, 2017 3:08:22 PM] (aw) HADOOP-14977. Xenial dockerfile needs ant in 
main build for findbugs
[Oct 25, 2017 5:54:40 PM] (manojpec) HDFS-12544. SnapshotDiff - support diff 
generation on any snapshot root
[Oct 25, 2017 9:11:30 PM] (xiao) HADOOP-14957. ReconfigurationTaskStatus is 
exposing guava Optional in
[Oct 25, 2017 9:24:22 PM] (arp) HDFS-12579. JournalNodeSyncer should use 
fromUrl field of
[Oct 25, 2017 10:07:50 PM] (subru) YARN-4827. Document configuration of 
ReservationSystem for
[Oct 25, 2017 10:51:27 PM] (subru) HADOOP-14840. Tool to estimate resource 
requirements of an application
[Oct 26, 2017 5:25:10 PM] (rkanter) YARN-7358. TestZKConfigurationStore and 
TestLeveldbConfigurationStore
[Oct 26, 2017 7:10:14 PM] (subu) YARN-5516. Add REST API for supporting 
recurring reservations. (Sean Po
[Oct 26, 2017 10:50:14 PM] (rkanter) YARN-7320. Duplicate LiteralByteStrings in
[Oct 27, 2017 12:47:32 AM] (rkanter) YARN-7262. Add a hierarchy into the 
ZKRMStateStore for delegation token
[Oct 27, 2017 2:13:58 AM] (junping_du) Update CHANGES.md and RELEASENOTES for 
2.8.2 release.
[Oct 27, 2017 2:15:35 AM] (junping_du) Set jdiff stable version to 2.8.2.
[Oct 27, 2017 2:30:48 AM] (junping_du) Add several jdiff xml files for 2.8.2 
release.
[Oct 27, 2017 3:15:19 AM] (wangda) YARN-7307. Allow client/AM update supported 
resource types via YARN
[Oct 27, 2017 9:45:03 AM] (stevel) MAPREDUCE-6977 Move logging APIs over to 
slf4j in
[Oct 27, 2017 2:43:54 PM] (arp) HDFS-9914. Fix configurable WebhDFS 
connect/read timeout. Contributed by
[Oct 27, 2017 3:23:57 PM] (sunilg) YARN-7375. Possible NPE in  RMWebapp when HA 
is enabled and the active
[Oct 27, 2017 5:16:38 PM] (rohithsharmaks) YARN-7289. Application lifetime does 
not work with FairScheduler.




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-tools/hadoop-resourceestimator 
   Dead store to jobHistory in org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.getHistoryResourceSkyline(String, String) At ResourceEstimatorService.java:[line 196] 
   Incorrect lazy initialization and update of static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.skylineStore in new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:[lines 78-82] 
   Write to static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.config from instance method new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:[line 80] 
   Write to static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.gson from instance method new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:[line 106] 
   Write to static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.logParser from instance method new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:[line 86] 
   Write to static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.rleType from instance method new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:[line 108] 
   Write to static field org.apache.hadoop.resourceestimator.service.ResourceEstimatorService.skylineStore from instance method new org.apache.hadoop.resourceestimator.service.ResourceEstimatorService() At ResourceEstimatorService.java:
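
For readers unfamiliar with this FindBugs pattern, here is a minimal sketch (hypothetical class and field names, not the actual ResourceEstimatorService code) of the flagged "write to static field from instance method" shape and one common remedy:

{code}
// Hypothetical illustration of the ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD pattern:
// every constructor call silently overwrites state shared by all instances.
public class StaticWriteExample {
  private static String config;               // shared across all instances

  public StaticWriteExample(String c) {
    config = c;                                // FindBugs: write to static field from instance method
  }

  // One common remedy: initialize the shared state exactly once in a static
  // initializer, or make the field an ordinary instance field.
  public static class Fixed {
    private static final String CONFIG = loadConfig();
    private static String loadConfig() { return "defaults"; }
  }
}
{code}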

[jira] [Created] (HDFS-12738) Namenode logs many "SSL renegotiate denied" warnings after enabling HTTPS for HDFS

2017-10-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-12738:
-

 Summary: Namenode logs many "SSL renegotiate denied" warnings 
after enabling HTTPS for HDFS
 Key: HDFS-12738
 URL: https://issues.apache.org/jira/browse/HDFS-12738
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Bharat Viswanadham


After enabling HTTPS (SSL) for HDFS, the namenode log prints many "SSL 
renegotiate denied" warnings. It is not clear whether this is caused by a reason 
similar to YARN-6797. This ticket is opened to investigate and fix it. 
 
{code}
2017-10-24 07:55:41,083 WARN  mortbay.log (Slf4jLog.java:warn(76)) - SSL 
renegotiate denied: java.nio.channels.SocketChannel[connected 
local=/192.168.64.101:50470 remote=/192.168.64.101:48365]
2017-10-24 07:55:50,075 WARN  mortbay.log (Slf4jLog.java:warn(76)) - SSL 
renegotiate denied: java.nio.channels.SocketChannel[connected 
local=/192.168.64.101:50470 remote=/192.168.64.101:48373]
{code}
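
As a rough way to reproduce the warning against a test NameNode, the probe below is hypothetical (placeholder host/port taken from the log above, and it assumes the JVM truststore already trusts the server certificate): it opens a TLS connection and then attempts a second handshake, which a server that denies renegotiation should log as shown above.

{code}
// Hypothetical diagnostic probe, not part of any fix.
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class RenegotiationProbe {
  public static void main(String[] args) throws Exception {
    String host = "192.168.64.101";   // placeholder NameNode host from the log above
    int port = 50470;                 // NameNode HTTPS port from the log above
    SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
    try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
      socket.startHandshake();        // initial TLS handshake
      socket.startHandshake();        // attempt a client-initiated renegotiation
      System.out.println("Renegotiation allowed: " + socket.getSession().getCipherSuite());
    }
  }
}
{code}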






[jira] [Created] (HDFS-12737) Thousands of sockets lingering in TIME_WAIT state due to frequent file open operations

2017-10-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12737:
--

 Summary: Thousands of sockets lingering in TIME_WAIT state due to 
frequent file open operations
 Key: HDFS-12737
 URL: https://issues.apache.org/jira/browse/HDFS-12737
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ipc
 Environment: CDH5.10.2, HBase Multi-WAL=2, 250 replication peers
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


On an HBase cluster we found HBase RegionServers with thousands of sockets in 
the TIME_WAIT state. It depleted system resources and caused other services to fail.

After months of troubleshooting, we found that the cluster has hundreds of 
replication peers and multi-WAL = 2. That creates hundreds of replication 
threads in each HBase RegionServer, and each thread opens a WAL file *every second*.

We found that the IPC client closes the socket right away and does not reuse the 
connection. Since each closed socket stays in the TIME_WAIT state for 60 seconds 
on Linux by default, this generates thousands of TIME_WAIT sockets.

{code:title=ClientDatanodeProtocolTranslatorPB:createClientDatanodeProtocolProxy}
// Since we're creating a new UserGroupInformation here, we know that no
// future RPC proxies will be able to re-use the same connection. And
// usages of this proxy tend to be one-off calls.
//
// This is a temporary fix: callers should really achieve this by using
// RPC.stopProxy() on the resulting object, but this is currently not
// working in trunk. See the discussion on HDFS-1965.
Configuration confWithNoIpcIdle = new Configuration(conf);
confWithNoIpcIdle.setInt(CommonConfigurationKeysPublic
.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 0);
{code}
Unfortunately, given HBase's usage pattern, this hack creates the problem.

Ignoring the fact that having hundreds of HBase replication peers is bad 
practice (I'll probably file an HBASE jira to fix that), the fact that the Hadoop 
IPC client does not reuse sockets still seems wrong. The relevant code is 
historical and deep in the stack, so I'd like to invite comments. I have a 
patch, but it's pretty hacky.
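
For context, the knob involved is ipc.client.connection.maxidletime (forced to 0 in the snippet above). The sketch below is an illustration only, not the proposed patch: it keeps a non-zero idle time on a caller-side Configuration copy so the IPC layer can cache and reuse the connection for repeated calls.

{code}
// Illustration only (not the proposed patch).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

public class IpcIdleTimeExample {
  public static Configuration withConnectionReuse(Configuration conf) {
    Configuration c = new Configuration(conf);
    // The snippet above forces this to 0 (close immediately); the shipped default is 10000 ms.
    c.setInt(CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 10000);
    return c;
  }
}
{code}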






[jira] [Created] (HDFS-12736) Enable SSL Documentation

2017-10-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12736:
-

 Summary: Enable SSL Documentation
 Key: HDFS-12736
 URL: https://issues.apache.org/jira/browse/HDFS-12736
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


The Hadoop docs cover SSL configuration properties, but nowhere do they document how to set up SSL.
This jira is to create a document with the steps to enable SSL.






Re: HDFS Pre-commit taking more than 24 hours.

2017-10-27 Thread Sean Busbey
The ASF Jenkins farm has been under duress lately, at least in part due to
HDFS-12711.

I'd say wait for the work on containing that issue to finish, and then take a
look at whether it's the backlog or the test itself, so you can determine if we
need to talk on builds@a.o or here.


On Fri, Oct 27, 2017 at 10:26 AM, Rushabh Shah 
wrote:

> Hi All,
> HDFS Pre-commit is taking more than 24 hours to run unit tests.
> Currently (Fri Oct 27 15:17:18 UTC 2017), it is running a patch submitted
> on Thursday, Oct 26 07:43 UTC 2017 (HDFS-12582
> )
> The last 5-6 pre-commits ran on only one slave: asf903.gq1.ygridcore.net
> Is anyone else facing this issue?
>
>
>
> Thanks,
> Rushabh Shah
>



-- 
busbey


[jira] [Created] (HDFS-12735) Make ContainerStateMachine#applyTransaction async

2017-10-27 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDFS-12735:
--

 Summary: Make ContainerStateMachine#applyTransaction async
 Key: HDFS-12735
 URL: https://issues.apache.org/jira/browse/HDFS-12735
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently, ContainerStateMachine#applyTransaction makes a synchronous call to 
dispatch client requests. The idea is to have a thread pool which dispatches 
client requests and returns a CompletableFuture.
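
As a rough sketch of that shape (DispatcherClient, Request, and Response are hypothetical stand-ins, not the actual ContainerStateMachine or dispatcher API), the transaction could be handed to a dedicated executor and the result exposed as a CompletableFuture:

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncApplySketch {
  private final ExecutorService pool = Executors.newFixedThreadPool(10);
  private final DispatcherClient dispatcher;

  public AsyncApplySketch(DispatcherClient dispatcher) {
    this.dispatcher = dispatcher;
  }

  // Instead of blocking the state-machine thread, dispatch on the pool and
  // return a future the caller can compose on.
  public CompletableFuture<Response> applyTransaction(Request request) {
    return CompletableFuture.supplyAsync(() -> dispatcher.dispatch(request), pool);
  }

  // Hypothetical stand-ins for the real dispatcher and request/response types.
  public interface DispatcherClient { Response dispatch(Request request); }
  public interface Request {}
  public interface Response {}
}
{code}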






HDFS Pre-commit taking more than 24 hours.

2017-10-27 Thread Rushabh Shah
Hi All,
HDFS Pre-commit is taking more than 24 hours to run unit tests.
Currently (Fri Oct 27 15:17:18 UTC 2017), it is running a patch submitted
on Thursday, Oct 26 07:43 UTC 2017 (HDFS-12582
)
The last 5-6 pre-commits ran on only one slave: asf903.gq1.ygridcore.net
Is anyone else facing this issue?



Thanks,
Rushabh Shah


[jira] [Created] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-10-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12734:
---

 Summary: Ozone: generate version specific documentation during the 
build
 Key: HDFS-12734
 URL: https://issues.apache.org/jira/browse/HDFS-12734
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


HDFS-12664 suggested a new way to include documentation in the KSM web UI.

This patch modifies the build lifecycle to automatically generate the 
documentation *if* hugo is on the PATH. If hugo is not there, the documentation 
won't be generated and won't be displayed (see HDFS-12661).

To test: apply this patch on top of HDFS-12664, do a full build, and check the 
KSM web UI.






[jira] [Created] (HDFS-12733) Option to disable namenode local edits

2017-10-27 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-12733:
---

 Summary: Option to disable namenode local edits
 Key: HDFS-12733
 URL: https://issues.apache.org/jira/browse/HDFS-12733
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


As of now, edits are written to both the local and shared locations, which is 
redundant: in an HA setup the local edits are never used.

Disabling local edits would give a small performance improvement.


