[jira] [Created] (HDDS-1807) TestWatchForCommit#testWatchForCommitForRetryfailure fails as a result of no leader election for an extended period of time

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1807:
-

 Summary: TestWatchForCommit#testWatchForCommitForRetryfailure 
fails as a result of no leader election for an extended period of time 
 Key: HDDS-1807
 URL: https://issues.apache.org/jira/browse/HDDS-1807
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
Stacktrace
java.util.concurrent.ExecutionException: 
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.hadoop.ozone.client.rpc.TestWatchForCommit.testWatchForCommitForRetryfailure(TestWatchForCommit.java:345)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
The client here retried 10 times with a delay of 1 second between attempts, 
but leader election could not complete within that window.
{code:java}
2019-07-12 19:30:46,451 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46: receive 
RaftClientReply:client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
5931fd83-b899-480e-b15a-ecb8e7f7dd46 is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[5931fd83-b899-480e-b15a-ecb8e7f7dd46:c-1]
2019-07-12 19:30:47,469 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde: receive 
RaftClientReply:client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
d83929f1-c4db-499d-b67f-ad7f10dd7dde is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[d83929f1-c4db-499d-b67f-ad7f10dd7dde:c-1]
2019-07-12 19:30:48,504 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) -
{code}

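For reference, the RetryLimited policy named in the exception corresponds to a 
Ratis client retry policy along the following lines (a minimal sketch of the 
retry configuration only; the actual wiring inside XceiverClientRatis may 
differ):
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.ratis.retry.RetryPolicies;
import org.apache.ratis.retry.RetryPolicy;
import org.apache.ratis.util.TimeDuration;

// Prints as RetryLimited(maxAttempts=10, sleepTime=1000ms), as seen in the
// exception above.
RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
    10, TimeDuration.valueOf(1, TimeUnit.SECONDS));
// If no leader is elected during the whole 10 x 1s window, the pending
// request fails with RaftRetryFailureException, which is what the test hits.
{code}
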
[jira] [Created] (HDDS-1806) TestDataValidateWithSafeByteOperations tests are failing

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1806:
-

 Summary: TestDataValidateWithSafeByteOperations tests are failing
 Key: HDDS-1806
 URL: https://issues.apache.org/jira/browse/HDDS-1806
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


 
{code:java}
Unexpected Storage Container Exception: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist

Stacktrace
java.io.IOException: Unexpected Storage Container Exception: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:549)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:540)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:615)
 at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) 
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
 at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748) Caused by: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:537)
 ... 7 more
{code}
The error propagated to the client is misleading. Container creation failed on 
the datanode as a result of a disk-full condition, but that root cause was 
never propagated to the client.
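
A minimal sketch of the kind of bookkeeping that would preserve the root cause 
(hypothetical class and fields, not the actual BlockOutputStream code):
{code:java}
import java.io.IOException;

// Hypothetical sketch: remember the first failure seen by the stream so the
// client is shown the root cause (disk full) instead of the follow-on
// "ContainerID does not exist" raised by later writes to the same container.
class FailureTracking {
  private volatile IOException firstFailure;

  void setIoException(Exception e) {
    if (firstFailure == null) {
      firstFailure = new IOException("Unexpected Storage Container Exception", e);
    }
  }

  void rethrowFirstFailure() throws IOException {
    if (firstFailure != null) {
      throw firstFailure;
    }
  }
}
{code}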

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14655) SBN: Namenode crashes if one of the JNs is down

2019-07-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14655:


 Summary: SBN: Namenode crashes if one of the JNs is down
 Key: HDFS-14655
 URL: https://issues.apache.org/jira/browse/HDFS-14655
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 9 
time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
sleepTime=1000 MILLISECONDS) | Client.java:975
2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at 
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
at 
com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
java.lang.OutOfMemoryError: unable to create new native thread | 
ExitUtil.java:210
{noformat}
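
The OutOfMemoryError comes from the logger channel's executor trying to start 
yet another native thread for every call queued to the unreachable JN. A 
bounded pool (a hypothetical sketch, not the actual IPCLoggerChannel fix) 
would avoid unbounded thread creation:
{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: cap the per-JournalNode executor so RPCs to a downed
// JN wait in a queue instead of each one forcing a new native thread, which
// is what eventually dies with OutOfMemoryError above.
ThreadPoolExecutor parallelExecutor = new ThreadPoolExecutor(
    1, 4,                              // at most 4 worker threads ever
    60, TimeUnit.SECONDS,              // idle workers are reclaimed
    new LinkedBlockingQueue<>(1024));  // pending calls queue up here
{code}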




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1805:


 Summary: Implement S3 Initiate MPU request to use Cache and 
DoubleBuffer
 Key: HDDS-1805
 URL: https://issues.apache.org/jira/browse/HDDS-1805
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement the S3 Initiate Multipart Upload request to use the OM cache and 
double buffer.

In this Jira we will add the changes to implement this request. HA and non-HA 
will initially have different code paths, but once all requests are 
implemented they will share a single code path.
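
For orientation, a minimal sketch of the cache + double-buffer write pattern 
assumed by this work (illustrative names only, not the actual OM classes):
{code:java}
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Writes become visible in an in-memory table cache immediately, while a
// double buffer batches them for an asynchronous flush to the backing DB.
class DoubleBufferSketch {
  private final Map<String, String> tableCache = new ConcurrentHashMap<>();
  private final Queue<Map.Entry<String, String>> buffer =
      new ConcurrentLinkedQueue<>();

  void write(String key, String value) {
    tableCache.put(key, value);                // readers see the value now
    buffer.add(new SimpleEntry<>(key, value)); // flushed to the DB in a batch
  }

  void flushToDb() {
    // Background thread: drain the buffer and commit the batch to the DB.
    Map.Entry<String, String> e;
    while ((e = buffer.poll()) != null) {
      // db.put(e.getKey(), e.getValue()) in the real implementation
    }
  }
}
{code}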



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Branch-2 and Tomcat

2019-07-15 Thread Wei-Chiu Chuang
Ping.
I am sending this out again because we are evaluating the possibility of
upgrading to Tomcat 7, 8 or 9.

Thoughts? We've got customers asking us to do it so I am quite serious
about this proposal.

The impact includes KMS and Httpfs. We are also looking into upgrading
Tomcat versions in other projects (Oozie, for example).

On Fri, Jun 28, 2019 at 11:47 PM Wei-Chiu Chuang 
wrote:

> During the Wednesday's meetup, we discussed the plan for the "bridge
> release" branch-2.10.
>
> A smooth upgrade path (that is, rolling upgrade) from 2.10 to Hadoop 3 was
> called out as the most critical release criteria.
>
> I am wondering if Tomcat (as well as other dependency upgrade) upgrade
> should be considered as well?
>
> We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL in
> 2016. But we did not realize that, three years after Tomcat 6's EOL, a
> majority of Hadoop users would still be on Hadoop 2, and it looks like
> Hadoop 2 will stay alive for another few years.
>
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
> How about migrating to Tomcat9?
>
> FWIW, Tomcat 6's EOL alone is a big enough reason why you should move up to
> Hadoop 3. :)
>


[jira] [Created] (HDDS-1804) TestCloseContainerHandlingByClient#testBlockWrites fails intermittently

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1804:
-

 Summary: TestCloseContainerHandlingByClient#testBlockWrites fails 
intermittently
 Key: HDDS-1804
 URL: https://issues.apache.org/jira/browse/HDDS-1804
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


The test fails intermittently as reported here:

[https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/org.apache.hadoop.ozone.client.rpc/TestCloseContainerHandlingByClient/testBlockWrites/]
{code:java}
java.lang.IllegalArgumentException
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:150)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:143)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:154)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:118)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:222)
at 
org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
at java.io.InputStream.read(InputStream.java:101)
at 
org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:709)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.validateData(TestCloseContainerHandlingByClient.java:401)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWrites(TestCloseContainerHandlingByClient.java:471)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



HDFS/Cloud connector Bi-weekly online sync

2019-07-15 Thread Wei-Chiu Chuang
Hi,
We will be having the next HDFS/Cloud connector online meetup later this
week (US Pacific time Wednesday 10 pm, or GMT+8 1 pm Thursday).

We had another online sync two weeks ago, with developers from Cloudera,
Didi, and JD.com, and it was a good meetup. We kept the meeting minutes in
this doc for your reference:
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing

Please respond to this email thread if you have a particular agenda in mind
that you'd like to have discussed.

I'm looking for a host, as I will be traveling then and chances are I won't
make it this time.

Best,
Wei-Chiu


[jira] [Created] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1803:
---

 Summary: shellcheck.sh does not work on Mac
 Key: HDDS-1803
 URL: https://issues.apache.org/jira/browse/HDDS-1803
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


# {{shellcheck.sh}} does not work on Mac
{code}
find: -executable: unknown primary or operator
{code}
# {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
{{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-15 Thread Wangda Tan
Congrats!

Best,
Wangda

On Tue, Jul 16, 2019 at 10:37 AM 杨弢(杨弢)  wrote:

> Thanks everyone.
> I'm so honored to be an Apache Hadoop Committer, I will keep working on
> this great project and contribute more. Thanks.
>
> Best Regards,
> Tao Yang
>
>
> --
> From: Naganarasimha Garla 
> Sent: Monday, July 15, 2019 17:55
> To: Weiwei Yang 
> Cc: yarn-dev ; Hadoop Common <
> common-...@hadoop.apache.org>; mapreduce-dev <
> mapreduce-...@hadoop.apache.org>; Hdfs-dev 
> Subject: Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang
>
> Congrats and welcome Tao Yang!
>
> Regards
> + Naga
>
> On Mon, 15 Jul 2019, 17:54 Weiwei Yang,  wrote:
>
> > Hi Dear Apache Hadoop Community
> >
> > It's my pleasure to announce that Tao Yang has been elected as an Apache
> > Hadoop committer, this is to recognize his contributions to Apache Hadoop
> > YARN project.
> >
> > Congratulations and welcome on board!
> >
> > Weiwei
> > (On behalf of the Apache Hadoop PMC)
> >
>
>


[jira] [Created] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-07-15 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14654:
---

 Summary: RBF: TestRouterRpc tests are flaky
 Key: HDFS-14654
 URL: https://issues.apache.org/jira/browse/HDFS-14654
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma


They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop Committer - Ayush Saxena

2019-07-15 Thread Takanobu Asanuma
Congrats Ayush!

Regards,
- Takanobu


From: Dinesh Chitlangia 
Sent: Monday, July 15, 2019 23:18
To: lqjacklee
Cc: HarshaKiran Reddy Boreddy; Vinayakumar B; ayushsax...@apache.org; Hdfs-dev; 
Hadoop Common
Subject: Re: [ANNOUNCE] New Apache Hadoop Committer - Ayush Saxena

Congratulations Ayush!

Cheers,
Dinesh




On Mon, Jul 15, 2019 at 8:48 AM lqjacklee  wrote:

> congratulations.
>
> On Mon, Jul 15, 2019 at 6:09 PM HarshaKiran Reddy Boreddy <
> bharsh...@gmail.com> wrote:
>
> > Congratulations Ayush!!!
> >
> >
> > -- Harsha
> >
> > On Mon, Jul 15, 2019, 2:15 PM Vinayakumar B 
> > wrote:
> >
> > > In bcc: general@, please bcc: (and not cc:) general@ if you want to
> > > include
> > >
> > > It's my pleasure to announce that Ayush Saxena has been elected as
> > > committer
> > > on the Apache Hadoop project recognising his continued contributions to
> > the
> > > project.
> > >
> > > Please join me in congratulating him.
> > >
> > > Hearty Congratulations & Welcome aboard Ayush!
> > >
> > >
> > > Vinayakumar B
> > > (On behalf of the Hadoop PMC)
> > >
> >
>

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1802:


 Summary: Add Eviction policy for table cache
 Key: HDDS-1802
 URL: https://issues.apache.org/jira/browse/HDDS-1802
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In this Jira we will add two eviction policies for the table cache, as 
sketched below:

NEVER      // Cache is never cleaned up; the table maintains a full cache.
AFTERFLUSH // Cache entries are cleaned up once they are flushed to the DB.
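
A minimal sketch of what such a policy could look like (hypothetical names; 
the actual table-cache API may differ):
{code:java}
// Hypothetical sketch of the two cleanup policies described above.
public enum CacheCleanupPolicy {
  /** Never clean up; the table keeps a full in-memory cache. */
  NEVER,
  /** Clean up cached entries once they have been flushed to the DB. */
  AFTER_FLUSH
}
{code}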

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14653) RBF: Correct the default value for dfs.federation.router.namenode.heartbeat.enable

2019-07-15 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14653:
---

 Summary: RBF: Correct the default value for 
dfs.federation.router.namenode.heartbeat.enable
 Key: HDFS-14653
 URL: https://issues.apache.org/jira/browse/HDFS-14653
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ayush Saxena
Assignee: Ayush Saxena


dfs.federation.router.namenode.heartbeat.enable is supposed to take the value 
of dfs.federation.router.heartbeat.enable when it isn't explicitly specified, 
as implemented by:


{noformat}
boolean isRouterHeartbeatEnabled = conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT);
boolean isNamenodeHeartbeatEnable = conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_NAMENODE_HEARTBEAT_ENABLE,
    isRouterHeartbeatEnabled);
{noformat}

But since the RBF defaults (hdfs-rbf-default.xml) are now loaded by default, 
this logic no longer holds, and the value always resolves to true.
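
A minimal standalone illustration of why the fallback breaks (an illustrative 
snippet, not the actual Router code): once a default resource defines the 
namenode key, the programmatic default passed to getBoolean is never consulted.
{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// hdfs-rbf-default.xml defines dfs.federation.router.namenode.heartbeat.enable,
// so the key below always resolves from the resource:
conf.addResource("hdfs-rbf-default.xml");

boolean routerHeartbeat = conf.getBoolean(
    "dfs.federation.router.heartbeat.enable", true);
// The second argument here is used only when the key has no value at all;
// with the default resource loaded it is effectively dead code.
boolean namenodeHeartbeat = conf.getBoolean(
    "dfs.federation.router.namenode.heartbeat.enable", routerHeartbeat);
{code}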



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1801) Make Topology Aware Replication/Read non-default for ozone 0.4.1

2019-07-15 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1801:


 Summary: Make Topology Aware Replication/Read non-default for 
ozone 0.4.1   
 Key: HDDS-1801
 URL: https://issues.apache.org/jira/browse/HDDS-1801
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This helps stabilize the ozone-0.4.1 release while HDDS-1705, HDDS-1751, 
HDDS-1713 and HDDS-1770 are fixed for 0.5. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/

[Jul 14, 2019 5:23:51 AM] (github) HDDS-1766. ContainerStateMachine is unable 
to increment




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestWatchForCommit 
   hadoop.ozone.client.rpc.TestSecureOzoneRpcClient 
   hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-patch-pylint.txt
  [212K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-tony-runtime.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [336K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1198/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]
   

[jira] [Resolved] (HDFS-13835) RBF: Unable to add files after changing the order

2019-07-15 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-13835.

Resolution: Duplicate

> RBF: Unable to add files after changing the order
> -
>
> Key: HDFS-13835
> URL: https://issues.apache.org/jira/browse/HDFS-13835
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: venkata ramkumar
>Assignee: venkata ramkumar
>Priority: Critical
>
> When a mount point is pointing to multiple subclusters, the default order 
> is HASH.
> But after changing the order from HASH to RANDOM, I am unable to add files 
> to that mount point.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-15 Thread Chen Zhang (JIRA)
Chen Zhang created HDFS-14652:
-

 Summary: HealthMonitor connection retry times should be 
configurable
 Key: HDFS-14652
 URL: https://issues.apache.org/jira/browse/HDFS-14652
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Zhang


On our production HDFS cluster, burst requests from some clients filled the 
TCP kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is 
set to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got 
a connection error like this:
{code:java}
WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
monitor health of NameNode at nn_host_name/ip_address:port: Call From 
zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
java.net.ConnectException: Connection timed out; For more details see: 
http://wiki.apache.org/hadoop/ConnectionRefused
{code}
This error caused a failover and affected the availability of that cluster. We 
fixed the issue by enlarging the kernel parameter net.ipv4.tcp_syn_retries to 6.

But while working on this issue, we found that the connection retry count 
(ipc.client.connect.max.retries) of the HealthMonitor is hard-coded as 1. I 
think it should be configurable; then, if we don't want the HealthMonitor to 
be so sensitive, we can change its behavior through this configuration, as 
sketched below.
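
A minimal sketch of the proposed change (the new configuration key name is 
hypothetical):
{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Hypothetical sketch: read the retry count from a new, configurable key
// instead of hard-coding 1 when building the HealthMonitor's RPC config.
Configuration monitorConf = new Configuration(conf);
int retries = conf.getInt(
    "ha.health-monitor.connect.max.retries", 1); // hypothetical new key
monitorConf.setInt("ipc.client.connect.max.retries", retries);
{code}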



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9499) Fix typos in DFSAdmin.java

2019-07-15 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HDFS-9499.

Resolution: Invalid

Looks like it's already been resolved by another JIRA.

> Fix typos in DFSAdmin.java
> --
>
> Key: HDFS-9499
> URL: https://issues.apache.org/jira/browse/HDFS-9499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Daniel Green
>Priority: Major
>
> There are multiple instances of 'snapshot' spelled as 'snaphot' in 
> DFSAdmin.java and TestSnapshotCommands.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop Committer - Ayush Saxena

2019-07-15 Thread Dinesh Chitlangia
Congratulations Ayush!

Cheers,
Dinesh




On Mon, Jul 15, 2019 at 8:48 AM lqjacklee  wrote:

> congratulations.
>
> On Mon, Jul 15, 2019 at 6:09 PM HarshaKiran Reddy Boreddy <
> bharsh...@gmail.com> wrote:
>
> > Congratulations Ayush!!!
> >
> >
> > -- Harsha
> >
> > On Mon, Jul 15, 2019, 2:15 PM Vinayakumar B 
> > wrote:
> >
> > > In bcc: general@, please bcc: (and not cc:) general@ if you want to
> > > include
> > >
> > > It's my pleasure to announce that Ayush Saxena has been elected as
> > > committer
> > > on the Apache Hadoop project recognising his continued contributions to
> > the
> > > project.
> > >
> > > Please join me in congratulating him.
> > >
> > > Hearty Congratulations & Welcome aboard Ayush!
> > >
> > >
> > > Vinayakumar B
> > > (On behalf of the Hadoop PMC)
> > >
> >
>


Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-15 Thread Dinesh Chitlangia
Congratulations Tao!

Cheers,
Dinesh




On Mon, Jul 15, 2019 at 5:55 AM Naganarasimha Garla <
naganarasimha...@apache.org> wrote:

> Congrats and welcome Tao Yang!
>
> Regards
> + Naga
>
> On Mon, 15 Jul 2019, 17:54 Weiwei Yang,  wrote:
>
> > Hi Dear Apache Hadoop Community
> >
> > It's my pleasure to announce that Tao Yang has been elected as an Apache
> > Hadoop committer, this is to recognize his contributions to Apache Hadoop
> > YARN project.
> >
> > Congratulations and welcome on board!
> >
> > Weiwei
> > (On behalf of the Apache Hadoop PMC)
> >
>


[jira] [Created] (HDDS-1800) Result of author check is inverted

2019-07-15 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1800:
--

 Summary: Result of author check is inverted
 Key: HDDS-1800
 URL: https://issues.apache.org/jira/browse/HDDS-1800
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


## What changes were proposed in this pull request?

Fix:

 1. author check fails when no violations are found
 2. author check violations are duplicated in the output

Eg. https://ci.anzix.net/job/ozone-nightly/173/consoleText says that:


{code:java}
The following tests are FAILED:

[author]: author check is failed 
(https://ci.anzix.net/job/ozone-nightly/173//artifact/build/author.out/*view*/){code}


but no actual `@author` tags were found:

{code}
$ curl -s 
'https://ci.anzix.net/job/ozone-nightly/173//artifact/build/author.out/*view*/' 
| wc
   0   0   0
{code}

## How was this patch tested?

{code}
$ bash -o pipefail -c 'hadoop-ozone/dev-support/checks/author.sh | tee 
build/author.out'; echo $?
0

$ wc build/author.out
   0   0   0 build/author.out

$ echo '// @author Tolkien' >> 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManager.java

$ bash -o pipefail -c 'hadoop-ozone/dev-support/checks/author.sh | tee 
build/author.out'; echo $?
./hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManager.java://
 @author Tolkien
1

$ wc build/author.out
   1   3 108 build/author.out
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14651) DeadNodeDetector periodically detects Dead Node

2019-07-15 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14651:
--

 Summary: DeadNodeDetector periodically detects Dead Node
 Key: HDFS-14651
 URL: https://issues.apache.org/jira/browse/HDFS-14651
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lisheng Sun






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14650) Re-probing Suspicious Node

2019-07-15 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14650:
--

 Summary: Re-probing Suspicious Node
 Key: HDFS-14650
 URL: https://issues.apache.org/jira/browse/HDFS-14650
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lisheng Sun






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14649) DataNode will be placed in DeadNodeDetector#SuspiciousNode when an InputStream accessing it hits an error

2019-07-15 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14649:
--

 Summary: DataNode will be placed in 
DeadNodeDetector#SuspiciousNode when an InputStream accessing it hits an 
error 
 Key: HDFS-14649
 URL: https://issues.apache.org/jira/browse/HDFS-14649
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lisheng Sun
Assignee: Lisheng Sun






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14648) Create DeadNodeDetector state machine model

2019-07-15 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14648:
--

 Summary: Create DeadNodeDetector state machine model
 Key: HDFS-14648
 URL: https://issues.apache.org/jira/browse/HDFS-14648
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lisheng Sun
Assignee: Lisheng Sun






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.registry.secure.TestSecureLogins 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.TestLargeSort 
   hadoop.mapreduce.lib.join.TestJoinDatamerge 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/383/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [240K]
   

Re: [ANNOUNCE] New Apache Hadoop Committer - Ayush Saxena

2019-07-15 Thread lqjacklee
congratulations.

On Mon, Jul 15, 2019 at 6:09 PM HarshaKiran Reddy Boreddy <
bharsh...@gmail.com> wrote:

> Congratulations Ayush!!!
>
>
> -- Harsha
>
> On Mon, Jul 15, 2019, 2:15 PM Vinayakumar B 
> wrote:
>
> > In bcc: general@, please bcc: (and not cc:) general@ if you want to
> > include
> >
> > It's my pleasure to announce that Ayush Saxena has been elected as
> > committer
> > on the Apache Hadoop project recognising his continued contributions to
> the
> > project.
> >
> > Please join me in congratulating him.
> >
> > Hearty Congratulations & Welcome aboard Ayush!
> >
> >
> > Vinayakumar B
> > (On behalf of the Hadoop PMC)
> >
>


[jira] [Resolved] (HDDS-1036) container replica state in datanode should be QUASI-CLOSED if the datanode is isolated from other two datanodes in 3 datanode cluster

2019-07-15 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-1036.
---
Resolution: Not A Problem

Fixed as part of ReplicationManager refactoring.

> container replica state in datanode should be QUASI-CLOSED if the datanode is 
> isolated from other two datanodes in 3 datanode cluster
> -
>
> Key: HDDS-1036
> URL: https://issues.apache.org/jira/browse/HDDS-1036
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>
> steps taken :
> ---
>  # created a 3 datanode docker cluster.
>  # wrote some data to create a pipeline.
>  # Then, one of the datanodes is isolated from the other two datanodes. All 
> datanodes can still communicate with SCM.
>  # Tried to write new data; the write failed.
>  # Wait for 900 seconds.
> Observation:
> 
> container state is CLOSED in all three replicas.
>  
> Expectation:
> ---
> container state in isolated datanode should be QUASI-CLOSED.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1799) Add goofys to the ozone-runner docker image

2019-07-15 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1799:
--

 Summary: Add goofys to the ozone-runner docker image
 Key: HDDS-1799
 URL: https://issues.apache.org/jira/browse/HDDS-1799
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


Goofys is an S3 FUSE driver which is required for the Ozone CSI setup.

As of now it's installed in hadoop-ozone/dist/src/main/docker/Dockerfile from a 
non-standard location (because it couldn't be part of hadoop-runner earlier, as 
it's Ozone-specific).

It should be installed into ozone-runner from a canonical Goofys release URL.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-15 Thread Naganarasimha Garla
Congrats and welcome Tao Yang!

Regards
+ Naga

On Mon, 15 Jul 2019, 17:54 Weiwei Yang,  wrote:

> Hi Dear Apache Hadoop Community
>
> It's my pleasure to announce that Tao Yang has been elected as an Apache
> Hadoop committer, this is to recognize his contributions to Apache Hadoop
> YARN project.
>
> Congratulations and welcome on board!
>
> Weiwei
> (On behalf of the Apache Hadoop PMC)
>


[ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-15 Thread Weiwei Yang
Hi Dear Apache Hadoop Community

It's my pleasure to announce that Tao Yang has been elected as an Apache
Hadoop committer, this is to recognize his contributions to Apache Hadoop
YARN project.

Congratulations and welcome on board!

Weiwei
(On behalf of the Apache Hadoop PMC)


[jira] [Created] (HDDS-1798) Propagate failure in writeStateMachineData to Ratis

2019-07-15 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1798:
---

 Summary: Propagate failure in writeStateMachineData to Ratis
 Key: HDDS-1798
 URL: https://issues.apache.org/jira/browse/HDDS-1798
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Supratim Deka
Assignee: Supratim Deka


Currently, writeStateMachineData() returns a future to Ratis. This future does 
not track any errors or failures encountered as part of the operation 
(WriteChunk / handleWriteChunk()). The error is propagated back to the client 
as an error code embedded inside writeChunkResponseProto, but it goes 
undetected and unhandled in the Ratis server: the future handed back to Ratis 
is always completed with success.

The goal is to detect any errors in writeStateMachineData in Ratis and treat 
them as a failure of the Ratis log, handling for which is already implemented 
in HDDS-1603. A sketch of the intended behavior follows.
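
A minimal sketch of that behavior (hypothetical stand-in types, not the actual 
ContainerStateMachine code):
{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.function.Supplier;

// Hypothetical stand-in for writeChunkResponseProto.
interface ChunkResponse {
  boolean isSuccess();
  String getError();
}

class StateMachineSketch {
  // Complete the future exceptionally on a failed chunk write, so Ratis
  // applies the log-failure handling added in HDDS-1603 instead of treating
  // the entry as successfully written.
  CompletableFuture<ChunkResponse> writeStateMachineData(
      Supplier<ChunkResponse> writeChunk) {
    return CompletableFuture.supplyAsync(writeChunk)
        .thenApply(response -> {
          if (!response.isSuccess()) {
            throw new CompletionException(
                new IOException("writeChunk failed: " + response.getError()));
          }
          return response;
        });
  }
}
{code}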

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[ANNOUNCE] New Apache Hadoop Committer - Ayush Saxena

2019-07-15 Thread Vinayakumar B
In bcc: general@, please bcc: (and not cc:) general@ if you want to include

It's my pleasure to announce that Ayush Saxena has been elected as
committer
on the Apache Hadoop project recognising his continued contributions to the
project.

Please join me in congratulating him.

Hearty Congratulations & Welcome aboard Ayush!


Vinayakumar B
(On behalf of the Hadoop PMC)


[jira] [Created] (HDDS-1797) Add per volume operation metrics in datanode dispatcher

2019-07-15 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1797:
---

 Summary: Add per volume operation metrics in datanode dispatcher
 Key: HDDS-1797
 URL: https://issues.apache.org/jira/browse/HDDS-1797
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


Add per volume metrics in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14647) NPE during secure namenode startup

2019-07-15 Thread Fengnan Li (JIRA)
Fengnan Li created HDFS-14647:
-

 Summary: NPE during secure namenode startup
 Key: HDFS-14647
 URL: https://issues.apache.org/jira/browse/HDFS-14647
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.8.2
Reporter: Fengnan Li
Assignee: Fengnan Li


In secure HDFS, while the NameNode is loading the fsimage, hitting the NameNode 
through the REST API throws the exception below. (This is in version 2.8.2.)
{quote}org.apache.hadoop.hdfs.web.resources.ExceptionHandler: 
INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
 at 
org.apache.hadoop.hdfs.server.common.JspHelper.getTokenUGI(JspHelper.java:283)
 at org.apache.hadoop.hdfs.server.common.JspHelper.getUGI(JspHelper.java:226)
 at 
org.apache.hadoop.hdfs.web.resources.UserProvider.getValue(UserProvider.java:54)
 at 
org.apache.hadoop.hdfs.web.resources.UserProvider.getValue(UserProvider.java:42)
 at 
com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
 at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
 at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
 at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:87)
 at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1353)
 at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
 at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{quote}
This is because during this phase the namesystem hasn't been initialized yet. 
In a non-HA context, it can throw a RetriableException to let
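
A guard along these lines would avoid the NPE (a hypothetical sketch, not the 
actual JspHelper fix):
{code:java}
import org.apache.hadoop.ipc.RetriableException;

// Hypothetical sketch: fail with a retriable error instead of an NPE while
// the namesystem is still being initialized from the fsimage.
if (namenode.getNamesystem() == null) {
  throw new RetriableException("NameNode still not started");
}
{code}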