Re: Jenkins precommit build for HDFS failing

2018-09-27 Thread Duo Zhang
Solved by https://issues.apache.org/jira/browse/INFRA-17068.



Re: Jenkins precommit build for HDFS failing

2018-09-27 Thread Duo Zhang
I tried upgrading the Yetus version for the HBase precommit job, and the error
message then became the same as Hadoop's. The problem is that when fetching
the patch we get a 4xx, so we end up applying a nonexistent file...

jira_http_fetch: https://issues.apache.org/jira/browse/HBASE-21233
returned 4xx status code. Maybe incorrect username/password?
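For illustration, a minimal sketch of the behavior we want instead (hypothetical code, not the actual Yetus jira_http_fetch implementation): fail fast on a 4xx/5xx instead of writing an empty patch and then "applying" it:

    import sys
    import urllib.error
    import urllib.request

    def fetch_patch(url, dest):
        """Download a patch, refusing to write anything on a failed fetch."""
        try:
            with urllib.request.urlopen(url) as resp:
                data = resp.read()
        except urllib.error.HTTPError as e:
            # urllib raises on 4xx/5xx; surface the error instead of
            # silently leaving a nonexistent/empty patch behind
            sys.exit("fetch failed: HTTP %d for %s" % (e.code, url))
        if not data:
            sys.exit("fetch failed: empty response for %s" % url)
        with open(dest, "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        fetch_patch(sys.argv[1], sys.argv[2])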




Re: Jenkins precommit build for HDFS failing

2018-09-27 Thread Ted Yu
Over in hbase precommit, I saw this:
https://builds.apache.org/job/PreCommit-HBASE-Build/14514/console

Resolving deltas:  86% (114758/133146), completed with 87 local objects.
fatal: pack has 18388 unresolved deltas
fatal: index-pack failed

    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2002)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1721)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:72)


It seems the QA machine(s) may be having trouble accessing git.


I wonder if the 'index-pack failed' error could lead to the patch not
being recognized.


FYI
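If this is transient flakiness on the git side, retrying from a clean workspace usually gets past it. A minimal sketch (hypothetical; not what the Jenkins git plugin actually does):

    import shutil
    import subprocess
    import time

    def clone_with_retry(repo, dest, attempts=3):
        """Retry a clone; 'index-pack failed' usually means a truncated or
        corrupt pack arrived from the server, which a fresh attempt may fix."""
        for i in range(1, attempts + 1):
            shutil.rmtree(dest, ignore_errors=True)  # start from a clean slate
            if subprocess.run(["git", "clone", repo, dest]).returncode == 0:
                return
            time.sleep(30 * i)  # back off before the next attempt
        raise RuntimeError("clone failed after %d attempts: %s" % (attempts, repo))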




Jenkins precommit build for HDFS failing

2018-09-27 Thread Ajay Kumar
Hi,

Jenkins precommit build for HDFS is failing with an error that the patch doesn't
apply to trunk, even when the patch does apply to trunk.
I see other build failures with the same error for other patches as well. Wanted
to reach out and ask whether this is a known issue.
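For reference, this is roughly how I double-check locally that a patch applies; a sketch wrapping git apply --check, with a hypothetical patch file name:

    import subprocess
    import sys

    # Sanity check: does the patch apply cleanly to up-to-date trunk?
    # Run from a clean checkout; the default patch path is hypothetical.
    patch = sys.argv[1] if len(sys.argv) > 1 else "HDFS-13941.001.patch"
    subprocess.run(["git", "fetch", "origin", "trunk"], check=True)
    subprocess.run(["git", "checkout", "origin/trunk"], check=True)
    # Hadoop patches may be -p0 or -p1 depending on how they were generated
    ok = any(
        subprocess.run(["git", "apply", "--check", p, patch]).returncode == 0
        for p in ("-p1", "-p0")
    )
    print("applies cleanly" if ok else "does not apply")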


| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 5s | HDFS-13941 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
| JIRA Issue | HDFS-13941 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25154/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |



-1 overall


| Vote | Subsystem | Runtime | Comment |
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 4s | HDFS-13877 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
| JIRA Issue | HDFS-13877 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25155/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |





Thanks,
Ajay Kumar


[jira] [Created] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-13945:
---

 Summary: TestDataNodeVolumeFailure is Flaky
 Key: HDFS-13945
 URL: https://issues.apache.org/jira/browse/HDFS-13945
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


The test has been failing on trunk for a long time.

Reference -

[https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

Stack Trace -

 

Timed out waiting for condition. Thread diagnostics:
Timestamp: 2018-09-26 03:32:07,162

"IPC Server handler 2 on 33471" daemon prio=5 tid=2931 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"IPC Server handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 tid=2633 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server Responder" daemon prio=5 tid=2766 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334)
    at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317)

"org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon prio=5 tid=2492 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"qtp548667392-2533" daemon prio=5 tid=2533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63818b03" daemon prio=5 tid=2521 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4045)
    at java.lang.Thread.run(Thread.java:748)

"BP-1973654218-172.17.0.2-1537975830395 heartbeating to localhost/127.0.0.1:43522" daemon prio=5 tid=2640 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158)
    at

RE: NN run progressively slower

2018-09-27 Thread Brahma Reddy Battula
It looks like you are using Apache Hadoop 2.6.


a) Are there any promotion failures? This can be checked with the following
flags:

-XX:+PrintGCDetails -XX:+PrintPromotionFailure -XX:PrintFLSStatistics=1

b) Is SurvivorRatio configured? Can you share all the configured GC parameters,
and the GC logs?
The option -XX:+PrintTenuringDistribution shows the threshold and ages of the
objects in the new generation. It is useful for observing the lifetime
distribution of an application.

Periodic heap dumps could also have helped.

Did you also cross-check the following? (For item (i)'s GC side, see the sketch
after this list.)

i) NN and JN have dedicated disks for writing edit log transactions.
ii) "dfs.namenode.accesstime.precision" shouldn't be set to a low value; the NN
will update the last accessed time for each file.
iii) Group lookup isn't taking too long, since the NN checks it for every request.
iv) The NN log for excessive spew (there are some JIRAs fixed after 2.6.0), e.g.
for decommissioning nodes, ...
v) Are any recursive calls made very frequently?
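For (a), a quick scan of the GC log is usually enough. A sketch, assuming the classic (pre-unified) HotSpot log format, where these events show up as "promotion failed" and "Full GC":

    import re
    import sys

    # Count promotion failures and full GCs in a classic HotSpot GC log.
    promo = full = 0
    with open(sys.argv[1]) as log:
        for line in log:
            if "promotion failed" in line:
                promo += 1
            if re.search(r"\bFull GC\b", line):
                full += 1
    print("promotion failures: %d, full GCs: %d" % (promo, full))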



-Original Message-
From: Lin,Yiqun(vip.com) [mailto:yiqun01@vipshop.com] 
Sent: Wednesday, September 26, 2018 10:26 AM
To: Wei-Chiu Chuang 
Cc: Hdfs-dev 
Subject: Re: NN run progressively slower

Hi Wei-Chiu,

At the beginning, we noted HDFS-9260, which changed the structure of the stored
blocks. With many additions/removals under this structure, there will be some
performance degradation.
But we think this should reach a stable state, so HDFS-9260 isn't the root cause
from our point of view.

>Did you observe clients failing to close file due to insufficient number of 
>block replicas? Did NN fail over?
No failing-to-close-file warnings on the client side, and no NN failover happened.

>Did you have gc logging enabled? Any chance to take a heap dump and analyze 
>what's in there?
We have enabled gc logging but haven't analyzed a NN heap dump yet. From the gc
log, we find the average young GC time is around 0.02s, but the total young GC
time per day is increasing day by day.
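(For reference, this is roughly how we aggregate it; a sketch assuming a classic gc.log written with -XX:+PrintGCDetails and -XX:+PrintGCDateStamps, where each ParNew line starts with an ISO date stamp and ends with the pause time in ", N secs]".)

    import re
    import sys
    from collections import defaultdict

    # Sum young-GC (ParNew) pause time per calendar day from a classic GC log.
    pat = re.compile(r"^(\d{4}-\d{2}-\d{2})T.*ParNew.*, ([\d.]+) secs\]")
    per_day = defaultdict(float)
    with open(sys.argv[1]) as log:
        for line in log:
            m = pat.search(line)
            if m:
                per_day[m.group(1)] += float(m.group(2))
    for day in sorted(per_day):
        print("%s  total ParNew pause: %.1f s" % (day, per_day[day]))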

>There were quite some NN scalability and GC improvements between CDH5.5 ~ 
>CDH5.8 time frame. We have customers at/beyond your scale in your version but 
>I don't think I've heard similar symptoms.
Thanks for sharing this.

Thanks
Yiqun

From: Wei-Chiu Chuang [mailto:weic...@cloudera.com]
Sent: Wednesday, September 26, 2018 11:14
To: Lin Yiqun [Operations Center]
Cc: Hdfs-dev
Subject: Re: NN run progressively slower

Yiqun,
Is this related to HDFS-9260?
Note that HDFS-9260 was backported since CDH5.7 and above.

I'm interested to learn more. Did you observe clients failing to close file due 
to insufficient number of block replicas? Did NN fail over?
Did you have gc logging enabled? Any chance to take a heap dump and analyze 
what's in there?

There were quite some NN scalability and GC improvements between CDH5.5 ~ 
CDH5.8 time frame. We have customers at/beyond your scale in your version but I 
don't think I've heard similar symptoms.

Regards

On Tue, Sep 25, 2018 at 2:04 AM Lin,Yiqun(vip.com) <yiqun01@vipshop.com> wrote:
Hi hdfs developers:

We hit a bad problem after rolling-upgrading our Hadoop version from
2.5.0-cdh5.3.2 to 2.6.0-cdh5.13.1. The problem is that the NN periodically runs
slow (roughly weekly). Concretely: if we start the NN on Monday, it runs fast,
but by the weekend our cluster becomes very slow.

In the beginning, we thought this might be caused by FSN lock contention, and we
made some improvements there, e.g. making the remove-block interval configurable
and printing the FSN lock elapsed time. After this the problem still exists, :(.
So we suspect this may not be an HDFS RPC problem.

Finally we found a related phenomenon: every time the NN runs slow, its old gen
reaches a high value, around 100GB. Actually, the NN's total metadata size is
only around 40GB in our cluster. So as a temporary workaround, we reduced the
heap size and trigger full GCs frequently. It now looks better than before, but
we haven't found the root cause. We're not sure whether this is a JVM tuning
problem or an HDFS bug.

Has anyone met a similar problem on this version? Why would the NN old gen
space increase so much?

Some information about our env:
JDK 1.8
500+ nodes, 150 million blocks, around 40GB of metadata in use.

We'd appreciate it if anyone can share comments.

Thanks
Yiqun.

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/

[Sep 26, 2018 3:46:23 AM] (nanda) HDDS-554. In XceiverClientSpi, implement 
sendCommand(..) using
[Sep 26, 2018 7:00:26 AM] (rohithsharmaks) YARN-8824. App Nodelabel missed 
after RM restart for finished apps.
[Sep 26, 2018 4:50:09 PM] (inigoiri) HDFS-13927. Addendum: Improve
[Sep 26, 2018 6:14:16 PM] (brahma) HDFS-13840. RBW Blocks which are having less 
GS should be added to
[Sep 26, 2018 6:51:35 PM] (eyang) YARN-8665.  Added Yarn service cancel upgrade 
option.
[Sep 26, 2018 9:43:00 PM] (jlowe) YARN-8804. resourceLimits may be wrongly 
calculated when leaf-queue is




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 123]

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.containermanager.TestNMProxy 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/909/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   

[jira] [Created] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-561:


 Summary: Move Node2ContainerMap and Node2PipelineMap to NodeManager
 Key: HDDS-561
 URL: https://issues.apache.org/jira/browse/HDDS-561
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain





