[jira] [Work logged] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2250?focusedWorklogId=323881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323881
 ]

ASF GitHub Bot logged work on HDDS-2250:


Author: ASF GitHub Bot
Created on: 05/Oct/19 06:13
Start Date: 05/Oct/19 06:13
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1597: HDDS-2250. 
Generated configs missing from ozone-filesystem-lib jars
URL: https://github.com/apache/hadoop/pull/1597#issuecomment-538620916
 
 
   Thanks @elek for help in finding the root cause.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323881)
Time Spent: 1h 10m  (was: 1h)

> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 of the {{ozone-default-generated.xml}} files.  With the new 
> config in place, the client configs are missing, so the Ratis client gets evicted 
> immediately because {{scm.container.client.idle.threshold}} resolves to 0.  This 
> results in an NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}
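> 
> A minimal sketch of the resulting behaviour, assuming a Guava-style 
> expire-after-access client cache (the cache and its key/value types here are 
> illustrative; {{XceiverClientManager}} is assumed to hold such a cache):
> {code:java}
> import java.util.concurrent.TimeUnit;
> import com.google.common.cache.Cache;
> import com.google.common.cache.CacheBuilder;
> 
> // hedged sketch: with the generated defaults missing from the jar, the idle
> // threshold resolves to 0, so every cached client expires right after its
> // last access and the next lookup can observe an already-closed client.
> Cache<String, Object> clientCache = CacheBuilder.newBuilder()
>     .expireAfterAccess(0, TimeUnit.MILLISECONDS)  // scm.container.client.idle.threshold = 0
>     .build();
> {code}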



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2239:
---
Affects Version/s: 0.5.0

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2239:
---
Status: Patch Available  (was: In Progress)

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2256) Checkstyle issues in CheckSumByteBuffer.java

2019-10-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2256.

Resolution: Fixed

> Checkstyle issues in CheckSumByteBuffer.java
> 
>
> Key: HDDS-2256
> URL: https://issues.apache.org/jira/browse/HDDS-2256
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Priority: Major
>  Labels: newbie
>
> HDDS-2222 added some checkstyle failures in ChecksumByteBuffer.java. This 
> JIRA is to track and fix those checkstyle issues.
> {code}
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected level should be 6.
> {code}
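> 
> For reference, the "inner assignment" pattern the rule flags looks like this 
> (a made-up illustration, not the actual lines):
> {code:java}
> int crc = 0xFFFFFFFF;
> byte b = 0x5A;
> int[] table = new int[256];  // stand-in for the CRC lookup table
> 
> // flagged: assignment nested inside a larger expression
> crc = table[(crc ^= b) & 0xff] ^ (crc >>> 8);
> 
> // preferred: one assignment per statement
> crc ^= b;
> crc = table[crc & 0xff] ^ (crc >>> 8);
> {code}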



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323874&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323874
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 05/Oct/19 05:18
Start Date: 05/Oct/19 05:18
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1600: HDDS-2239. Fix 
TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-538617721
 
 
   ```
   [INFO] Running org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs
   [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
32.745 s - in org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs
   ```
   
   
https://github.com/elek/ozone-ci-q4/blob/07087a6f1beec835a848cf7ee587509107334626/pr/pr-hdds-2239-t668n/integration/output.log#L2893-L2894
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323874)
Time Spent: 1h  (was: 50m)

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323873&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323873
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 05/Oct/19 05:17
Start Date: 05/Oct/19 05:17
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1600: HDDS-2239. Fix 
TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-538617721
 
 
   ```
   [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
80.155 s - in org.apache.hadoop.fs.ozone.TestOzoneFSInputStream
   [INFO] Running org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs
   ```
   
   
https://github.com/elek/ozone-ci-q4/blob/07087a6f1beec835a848cf7ee587509107334626/pr/pr-hdds-2239-t668n/integration/output.log#L2892
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323873)
Time Spent: 50m  (was: 40m)

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14857) FS operations fail in HA mode: DataNode fails to connect to NameNode

2019-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14857:
---
Status: Patch Available  (was: Open)

> FS operations fail in HA mode: DataNode fails to connect to NameNode
> 
>
> Key: HDFS-14857
> URL: https://issues.apache.org/jira/browse/HDFS-14857
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Jeff Saremi
>Priority: Major
>
> In an HA configuration, if the NameNodes get restarted and if they're 
> assigned new IP addresses, any client FS operation such as a copyFromLocal 
> will fail with a message like the following:
> {{2019-09-12 18:47:30,544 WARN hdfs.DataStreamer: DataStreamer 
> Exceptionorg.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /tmp/init.sh._COPYING_ could only be written to 0 of the 1 minReplication 
> nodes. There are 2 datanode(s) running and 2 node(s) are excluded in this 
> operation.    at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2211)
>  ...}}
>  
> Looking at DataNode's stderr shows the following:
>  * The heartbeat service detects the IP change and recovers (almost)
>  * At this stage, an *hdfs dfsadmin -report* reports all datanodes correctly
>  * Once the write begins, the following exception shows up in the datanode 
> log: *no route to host*
> {{2019-09-12 01:35:11,251 WARN datanode.DataNode: IOException in 
> offerService
> java.io.EOFException: End of File Exception between local host 
> is: "storage-0-0.storage-0-svc.test.svc.cluster.local/10.244.0.211"; 
> destination host is: "nmnode-0-0.nmnode-0-svc.test.svc.cluster.local":9000; : 
> java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789) at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1491) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1388) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  at com.sun.proxy.$Proxy17.sendHeartbeat(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:166)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:516)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:646)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:847)
>  at java.lang.Thread.run(Thread.java:748)Caused by: java.io.EOFException at 
> java.io.DataInputStream.readInt(DataInputStream.java:392) at 
> org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1850) at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1183) 
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)}}
> {{2019-09-12 01:41:12,273 WARN ipc.Client: Address change detected. Old: 
> nmnode-0-1.nmnode-0-svc.test.svc.cluster.local/10.244.0.217:9000 New: 
> nmnode-0-1.nmnode-0-svc.test.svc.cluster.local/10.244.0.220:9000}}{{...}}
>  
> {{2019-09-12 01:41:12,482 INFO datanode.DataNode: Block pool 
> BP-930210564-10.244.0.216-1568249865477 (Datanode Uuid 
> 7673ef28-957a-439f-a721-d47a4a6adb7b) service to 
> nmnode-0-1.nmnode-0-svc.test.svc.cluster.local/10.244.0.217:9000 beginning 
> handshake with NN}}
> {{2019-09-12 01:41:12,534 INFO datanode.DataNode: Block pool Block pool 
> BP-930210564-10.244.0.216-1568249865477 (Datanode Uuid 
> 7673ef28-957a-439f-a721-d47a4a6adb7b) service to 
> nmnode-0-1.nmnode-0-svc.test.svc.cluster.local/10.244.0.217:9000 successfully 
> registered with NN}}
>  
> *NOTE*: See how, when the {{Address change detected}} message shows up, the 
> printout correctly shows the old and the new address ({{10.244.0.220}}). 
> However, when the registration with NN is complete, the old IP address 
> ({{10.244.0.217}}) is still being printed, which shows how cached copies of 
> the IP addresses linger on.
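>  
> A hedged sketch of the re-resolution the IPC client performs when it logs 
> {{Address change detected}} ({{server}} below is an assumed cached field; 
> modeled loosely on {{org.apache.hadoop.ipc.Client}}, details illustrative):
> {code:java}
> import java.net.InetSocketAddress;
> 
> // hedged sketch: rebuilding the address from the hostname forces a fresh
> // DNS lookup; anything still holding the old object keeps the stale IP.
> InetSocketAddress current = server;  // resolved and cached at connection setup
> InetSocketAddress refreshed =
>     new InetSocketAddress(current.getHostName(), current.getPort());
> if (!refreshed.isUnresolved() && !refreshed.equals(current)) {
>   server = refreshed;  // only this reference is updated; other caches linger
> }
> {code}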
>  
> {{And the following is where the a

[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=323864&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323864
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 05/Oct/19 03:53
Start Date: 05/Oct/19 03:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1517: HDDS-2169. Avoid 
buffer copies while submitting client requests in Ratis
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-538613430
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 99 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 936 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1023 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 807 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 29 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 45 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 35 | hadoop-hdds in the patch failed. |
   | -1 | unit | 32 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 2610 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4562c496feab 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/13/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/1

[jira] [Commented] (HDDS-2241) Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the pipelines for a key

2019-10-04 Thread Aravindan Vijayan (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944968#comment-16944968
 ] 

Aravindan Vijayan commented on HDDS-2241:
-

Thank you [~aengineer]. I will post a patch. 

> Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the 
> pipelines for a key
> 
>
> Key: HDDS-2241
> URL: https://issues.apache.org/jira/browse/HDDS-2241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently, while looking up a key, the Ozone Manager gets the pipeline 
> information from SCM through an RPC for every block in the key. For large 
> files > 1GB, we may end up making a lot of RPC calls for this. This can be 
> optimized in a couple of ways:
> * We can implement a batch getContainerWithPipeline API in SCM, with which we 
> can get the pipeline locations for all the blocks of a file. To keep the 
> number of containers passed to SCM in a single call bounded, we can use a 
> fixed container batch size on the OM side. _Here, Number of calls = 1 (or k 
> depending on batch size)_
> * Instead, a simpler change would be to keep a method-local map of 
> ContainerID -> Pipeline built from the SCM responses, so that we don't make 
> repeated calls to SCM for the same containerID within a key (see the sketch 
> below). _Here, Number of calls = Number of unique containerIDs_
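> 
> A minimal sketch of the second option ({{scmClient}}, {{blocks}}, and the 
> {{getContainerWithPipeline}} lookup are assumptions for illustration):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> // hedged sketch: resolve each containerID against SCM at most once per key
> Map<Long, Pipeline> pipelineCache = new HashMap<>();
> for (OmKeyLocationInfo block : blocks) {
>   long id = block.getContainerID();
>   Pipeline pipeline = pipelineCache.get(id);
>   if (pipeline == null) {
>     // one RPC per unique container instead of one per block
>     pipeline = scmClient.getContainerWithPipeline(id).getPipeline();
>     pipelineCache.put(id, pipeline);
>   }
>   block.setPipeline(pipeline);
> }
> {code}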



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944966#comment-16944966
 ] 

Hadoop QA commented on HDFS-14509:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 37s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HDFS-14509 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982260/HDFS-14509-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 944be578c543 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f209722 |
| 

[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=323854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323854
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 05/Oct/19 01:59
Start Date: 05/Oct/19 01:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1517: HDDS-2169. Avoid 
buffer copies while submitting client requests in Ratis
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-538606858
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 964 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1051 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 25 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 806 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2564 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1131b3349042 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/12/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall 

[jira] [Work logged] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2222?focusedWorklogId=323836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323836
 ]

ASF GitHub Bot logged work on HDDS-2222:


Author: ASF GitHub Bot
Created on: 05/Oct/19 01:21
Start Date: 05/Oct/19 01:21
Worklog Time Spent: 10m 
  Work Description: szetszwo commented on issue #1595: HDDS-2222. Add a 
method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1595#issuecomment-538604170
 
 
   Sorry about that.  I thought we may ignore the checkstyle warnings like we 
are doing in Hadoop.
   
   Thanks @vivekratnavel for the quick fix in #1603 
(https://issues.apache.org/jira/browse/HDDS-2257)!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323836)
Time Spent: 2h 50m  (was: 2h 40m)

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-2222
> URL: https://issues.apache.org/jira/browse/HDDS-2222
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.
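> 
> A minimal sketch of such a method, assuming it sits inside PureJavaCrc32 next 
> to the existing {{update(int)}} and {{update(byte[], int, int)}} overloads 
> (Java 9 later added a similar default method to {{java.util.zip.Checksum}}; 
> the final signature here may differ):
> {code:java}
> /** Hedged sketch: update the CRC with the buffer's remaining bytes. */
> public void update(java.nio.ByteBuffer b) {
>   if (b.hasArray()) {
>     // array-backed buffer: delegate to the existing byte[] update in one call
>     update(b.array(), b.arrayOffset() + b.position(), b.remaining());
>     b.position(b.limit());
>   } else {
>     // direct buffer: fall back to byte-at-a-time updates
>     while (b.hasRemaining()) {
>       update(b.get());
>     }
>   }
> }
> {code}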



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=323835&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323835
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 05/Oct/19 01:14
Start Date: 05/Oct/19 01:14
Worklog Time Spent: 10m 
  Work Description: szetszwo commented on issue #1517: HDDS-2169. Avoid 
buffer copies while submitting client requests in Ratis
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-538603606
 
 
   > ...  I tried to run the tests in TestDataValidateWithUnsafeByteOperations 
and i see the following exception being thrown: ...
   
   Thanks @bshashikant.  That is a bug in ChunkUtils:
   ```
// ChunkUtils.writeData(..)
   int bufferSize = data.capacity();
   ```
   It should call data.remaining() instead of data.capacity().
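   
   For a sliced or partially-consumed buffer the two differ; a minimal 
   illustration (values are made up):
   ```java
   import java.nio.ByteBuffer;
   
   ByteBuffer data = ByteBuffer.allocate(1024);
   data.position(100).limit(612);
   data.capacity();   // 1024: size of the whole backing store
   data.remaining();  // 512: bytes between position and limit -- the actual write size
   ```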
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323835)
Time Spent: 3h 50m  (was: 3h 40m)

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resulting protobuf is then converted 
> to a byteString, which internally copies the buffer embedded inside the 
> protobuf yet again before it can be submitted to the Ratis client. Likewise, 
> while building up the appendRequestProto for the appendRequest, the data 
> might be copied once more. The idea here is to let the client pass the raw 
> data (stateMachine data) separately to the Ratis client, without the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture<RaftClientReply> sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}
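> 
> A hedged sketch of the direction proposed, assuming Ratis' shaded protobuf 
> exposes {{UnsafeByteOperations.unsafeWrap}} (it does in protobuf 3; the rest 
> of the names mirror the snippet above and are illustrative):
> {code:java}
> import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
> import org.apache.ratis.thirdparty.com.google.protobuf.UnsafeByteOperations;
> 
> // ByteString.copyFrom(bytes) would duplicate the array; unsafeWrap shares
> // the backing array instead, which is safe only because the serialized
> // request bytes are never mutated afterwards.
> byte[] bytes = finalPayload.toByteArray();
> ByteString byteString = UnsafeByteOperations.unsafeWrap(bytes);
> return isReadOnlyRequest
>     ? getClient().sendReadOnlyAsync(() -> byteString)
>     : getClient().sendAsync(() -> byteString);
> {code}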



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2204) Avoid buffer copying in checksum verification

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2204?focusedWorklogId=323822&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323822
 ]

ASF GitHub Bot logged work on HDDS-2204:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:32
Start Date: 05/Oct/19 00:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1593: HDDS-2204. Avoid 
buffer copying in checksum verification.
URL: https://github.com/apache/hadoop/pull/1593#issuecomment-538599692
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 936 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1028 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2515 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1593 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3fd9e7c4dab4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1593/3/artifact/out/patch-

[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-10-04 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944933#comment-16944933
 ] 

Konstantin Shvachko commented on HDFS-14162:


+1 I think you fixed {{NameNodeProxies}} right.

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, 
> HDFS-14162-branch-2.004.patch, HDFS-14162.000.patch, HDFS-14162.001.patch, 
> HDFS-14162.002.patch, HDFS-14162.003.patch, HDFS-14162.004.patch, 
> ReflectionBenchmark.java, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.
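> 
> A hedged sketch of what this enables once {{NameNodeProxies}} hands out 
> {{NamenodeProtocol}} proxies through the failover proxy provider (the calls 
> below follow HDFS conventions but are assumptions, not the final patch):
> {code:java}
> // hedged sketch: obtain a NamenodeProtocol proxy through the configured
> // proxy provider so that read-only calls such as getBlocks() can be
> // routed to an Observer instead of the Active NameNode.
> NamenodeProtocol namenode = NameNodeProxies
>     .createProxy(conf, nameNodeUri, NamenodeProtocol.class)
>     .getProxy();
> {code}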



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=323819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323819
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:22
Start Date: 05/Oct/19 00:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-538598469
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 29 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 25 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 967 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2419 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1589 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 758eb245ea84 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-mvninst

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323818
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:17
Start Date: 05/Oct/19 00:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597832
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 50 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 943 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 31 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 31 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2387 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eb9f4e7930dc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f209722 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibra

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=323805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323805
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 05/Oct/19 00:13
Start Date: 05/Oct/19 00:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538597298
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 930 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1024 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 787 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 25 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2495 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 80553b9dfed3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a3cf54c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibr

[jira] [Commented] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944922#comment-16944922
 ] 

Tsz-wo Sze commented on HDDS-2257:
--

[~vivekratnavel], thanks for fixing the checkstyle warnings from HDDS-2222.

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
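
For context, here is a minimal before/after sketch of the two rule violations 
listed above (hypothetical method names and toy bodies, not the actual 
ChecksumByteBuffer code):

{code:java}
public class ChecksumStyleSketch {

  // Before: "Inner assignments should be avoided" -- the assignment to x
  // is buried inside the return expression.
  int updateBad(int crc, int b) {
    int x;
    return (x = crc ^ b) & 0xff;
  }

  // After: the assignment gets its own statement.
  int updateGood(int crc, int b) {
    int x = crc ^ b;
    return x & 0xff;
  }

  // 'case' children indented one level deeper than the switch (the level
  // checkstyle expects), not two.
  String name(int kind) {
    switch (kind) {
    case 0:
      return "CRC32";
    default:
      return "unknown";
    }
  }
}
{code}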



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944919#comment-16944919
 ] 

Hudson commented on HDDS-2257:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17485 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17485/])
HDDS-2257. Fix checkstyle issues in ChecksumByteBuffer (#1603) (bharat: rev 
f209722a19c5e18cd2371ace62aa20a753a8acc8)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java


> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-04 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14509:
---
Target Version/s: 3.1.3, 2.10.0, 3.3.0, 3.2.2  (was: 2.10.0, 3.3.0, 3.1.3, 
3.2.2)
  Status: Patch Available  (was: Open)

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> gets a block token from the NN and delivers the token to the DN, which verifies 
> it. But the verification code is currently:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
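
To make the failure mode concrete, here is a minimal, runnable sketch of why 
the two HMAC-computed passwords diverge once the NN serializes a field the 2.x 
DN does not know about. The byte layouts and class name are invented for 
illustration; the real logic lives in {{BlockTokenSecretManager}}:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenPasswordMismatchSketch {
  static byte[] hmac(byte[] key, byte[] data) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(data);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "shared-block-key".getBytes(StandardCharsets.UTF_8);

    // The 3.x NN signs the identifier bytes including a new field...
    byte[] nnBytes = "expiry|user|blockId|NEW_FIELD".getBytes(StandardCharsets.UTF_8);
    byte[] nnPassword = hmac(key, nnBytes);          // token.getPassword()

    // ...but the 2.x DN deserializes only the fields it knows about and
    // re-serializes them, dropping NEW_FIELD (identifier.getBytes() in
    // retrievePassword()).
    byte[] dnBytes = "expiry|user|blockId".getBytes(StandardCharsets.UTF_8);
    byte[] dnPassword = hmac(key, dnBytes);

    // checkAccess() compares the two and throws InvalidToken on mismatch.
    System.out.println("passwords match: "
        + Arrays.equals(nnPassword, dnPassword));    // prints: false
  }
}
{code}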



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-04 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944914#comment-16944914
 ] 

Konstantin Shvachko commented on HDFS-14509:


Updated [~John Smith]'s patch. Fixed some warnings, and added the second test.

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> gets a block token from the NN and delivers the token to the DN, which verifies 
> it. But the verification code is currently:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-04 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14509:
---
Attachment: HDFS-14509-002.patch

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> gets a block token from the NN and delivers the token to the DN, which verifies 
> it. But the verification code is currently:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2257:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?focusedWorklogId=323798&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323798
 ]

ASF GitHub Bot logged work on HDDS-2257:


Author: ASF GitHub Bot
Created on: 04/Oct/19 23:36
Start Date: 04/Oct/19 23:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1603: 
HDDS-2257. Fix checkstyle issues in ChecksumByteBuffer
URL: https://github.com/apache/hadoop/pull/1603
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323798)
Time Spent: 50m  (was: 40m)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?focusedWorklogId=323795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323795
 ]

ASF GitHub Bot logged work on HDDS-2257:


Author: ASF GitHub Bot
Created on: 04/Oct/19 23:20
Start Date: 04/Oct/19 23:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1603: HDDS-2257. Fix 
checkstyle issues in ChecksumByteBuffer
URL: https://github.com/apache/hadoop/pull/1603#issuecomment-538589620
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1074 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1179 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 40 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 40 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 34 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 10 fixed = 0 total (was 10) |
   | +1 | checkstyle | 33 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 993 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 24 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 36 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 2893 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1603 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6f442b317358 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a3cf54c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1603/1/a

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323789
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 22:59
Start Date: 04/Oct/19 22:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538585997
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 47 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 942 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1027 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 28 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 793 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2495 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c4c3c00a9706 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a3cf54c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/9/artifact/ou

[jira] [Updated] (HDFS-13806) EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13806:
---
Fix Version/s: 3.1.4

> EC: No error message for unsetting EC policy of the directory inherits the 
> erasure coding policy from an ancestor directory
> ---
>
> Key: HDFS-13806
> URL: https://issues.apache.org/jira/browse/HDFS-13806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Fix For: 3.2.0, 3.1.4
>
> Attachments: HDFS-13806-01.patch, HDFS-13806-02.patch, 
> HDFS-13806-03.patch, HDFS-13806-04.patch, HDFS-13806-05.patch, 
> HDFS-13806-06.patch, No_error_unset_ec_policy.png
>
>
> No error message is thrown when unsetting the EC policy of a directory that 
> inherits the erasure coding policy from an ancestor directory.
> Steps :-
> --
>  - Create a directory
>  - Set an EC policy for the directory
>  - Create a file inside that directory
>  - Create a sub-directory inside the parent directory
>  - Check that both the file and the sub-directory inherit the EC policy from 
> the parent directory
>  - Try to unset the EC policy for the file and check that it throws an error: [ 
> Cannot unset an erasure coding policy on a file]
>  - Try to unset the EC policy for the sub-directory and observe that it prints 
> a success message [Unset erasure coding policy from ] 
>  instead of throwing an error message, which is wrong behavior
> Actual output :-
> No proper error message is thrown when unsetting the EC policy of a directory 
> that inherits the erasure coding policy from an ancestor directory.
>  A success message is displayed instead of an error message.
>  Expected output :-
>  
>  A proper error message should be thrown when trying to unset the EC policy of 
> a directory that inherits the erasure coding policy from an ancestor directory, 
>  like the error message thrown when unsetting the EC policy of a file that 
> inherits the erasure coding policy from an ancestor directory.
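
A hypothetical sketch of the expected behavior (names invented; not the HDFS 
implementation): unsetting should fail on a directory whose policy is only 
inherited, mirroring the existing error for files.

{code:java}
import java.io.IOException;

public class UnsetEcPolicySketch {
  static class Dir {
    final String path;
    final boolean explicitPolicy; // was the policy set directly on this dir?
    Dir(String path, boolean explicitPolicy) {
      this.path = path;
      this.explicitPolicy = explicitPolicy;
    }
  }

  static void unsetPolicy(Dir dir) throws IOException {
    if (!dir.explicitPolicy) {
      throw new IOException("Cannot unset an erasure coding policy on "
          + dir.path + ": the policy is inherited from an ancestor directory");
    }
    System.out.println("Unset erasure coding policy from " + dir.path);
  }

  public static void main(String[] args) throws IOException {
    unsetPolicy(new Dir("/parent", true));        // succeeds
    unsetPolicy(new Dir("/parent/child", false)); // throws, as expected
  }
}
{code}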



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?focusedWorklogId=323776&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323776
 ]

ASF GitHub Bot logged work on HDDS-2257:


Author: ASF GitHub Bot
Created on: 04/Oct/19 22:31
Start Date: 04/Oct/19 22:31
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1603: HDDS-2257. Fix 
checkstyle issues in ChecksumByteBuffer
URL: https://github.com/apache/hadoop/pull/1603#issuecomment-538580780
 
 
   cc @bharatviswa504 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323776)
Time Spent: 0.5h  (was: 20m)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2257:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?focusedWorklogId=323775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323775
 ]

ASF GitHub Bot logged work on HDDS-2257:


Author: ASF GitHub Bot
Created on: 04/Oct/19 22:31
Start Date: 04/Oct/19 22:31
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1603: HDDS-2257. Fix 
checkstyle issues in ChecksumByteBuffer
URL: https://github.com/apache/hadoop/pull/1603#issuecomment-538580729
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323775)
Time Spent: 20m  (was: 10m)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?focusedWorklogId=323774&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323774
 ]

ASF GitHub Bot logged work on HDDS-2257:


Author: ASF GitHub Bot
Created on: 04/Oct/19 22:31
Start Date: 04/Oct/19 22:31
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1603: 
HDDS-2257. Fix checkstyle issues in ChecksumByteBuffer
URL: https://github.com/apache/hadoop/pull/1603
 
 
   Fix these checkstyle issues 
   ```
   
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
   84: Inner assignments should be avoided.
   85: Inner assignments should be avoided.
   101: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   102: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   103: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   104: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   105: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   106: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   107: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   108: 'case' child has incorrect indentation level 8, expected level should 
be 6.
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323774)
Remaining Estimate: 0h
Time Spent: 10m

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2257:
-
Status: Patch Available  (was: Open)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2257:


Assignee: Vivek Ratnavel Subramanian

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  102: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  103: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  104: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  105: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  106: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  107: 'case' child has incorrect indentation level 8, expected 
> level should be 6.
>  108: 'case' child has incorrect indentation level 8, expected 
> level should be 6.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2258.
--
Resolution: Duplicate

> Fix checkstyle issues introduced by HDDS-2222
> -
>
> Key: HDDS-2258
> URL: https://issues.apache.org/jira/browse/HDDS-2258
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2258 started by Vivek Ratnavel Subramanian.

> Fix checkstyle issues introduced by HDDS-2222
> -
>
> Key: HDDS-2258
> URL: https://issues.apache.org/jira/browse/HDDS-2258
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2258:


 Summary: Fix checkstyle issues introduced by HDDS-2222
 Key: HDDS-2258
 URL: https://issues.apache.org/jira/browse/HDDS-2258
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-10-04 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944871#comment-16944871
 ] 

Arpit Agarwal commented on HDFS-14305:
--

I think the right fix would be for NameNodes to push their range assignments 
into the edit log, so other NameNodes are aware of them and do not pick a 
conflicting range. Konstantin, this should also solve the hard-coded limit of 
64 that you objected to.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Konstantin Shvachko
>Priority: Major
>  Labels: multi-sbnn, release-blocker
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14305-007.patch, HDFS-14305-008.patch, 
> HDFS-14305.001.patch, HDFS-14305.002.patch, HDFS-14305.003.patch, 
> HDFS-14305.004.patch, HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> of serial numbers. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.
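
A toy version of the arithmetic above (using 100 in place of 
{{Integer.MAX_VALUE}}, as in the example, and an invented class name) prints 
the overlapping ranges directly:

{code:java}
public class SerialRangeOverlapSketch {
  public static void main(String[] args) {
    final int max = 100;               // stand-in for Integer.MAX_VALUE
    final int numNNs = 2;
    final int intRange = max / numNNs; // 50

    // Java's % keeps the dividend's sign, so for a random initial serialNo
    // the value (serialNo % intRange) lies anywhere in (-50, 50).
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex;
      int lo = (-49 % intRange) + nnRangeStart;
      int hi = (49 % intRange) + nnRangeStart;
      System.out.printf("nn%d -> [%d, %d]%n", nnIndex + 1, lo, hi);
    }
    // Output: nn1 -> [-49, 49], nn2 -> [1, 99] -- the ranges overlap in [1, 49].
  }
}
{code}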



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-10-04 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944863#comment-16944863
 ] 

Arpit Agarwal edited comment on HDFS-14305 at 10/4/19 10:05 PM:


How do we guarantee that the ranges will not have an overlap across NameNodes? 
This is arguably worse than what we had before the original patch was reverted.

I am -1 on this new change and would like to see this reverted.


was (Author: arpitagarwal):
How do we guarantee that the ranges will not have an overlap across NameNodes? 
This is arguably worse than what we had before.

I am -1 on this change and would like to see this reverted.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Konstantin Shvachko
>Priority: Major
>  Labels: multi-sbnn, release-blocker
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14305-007.patch, HDFS-14305-008.patch, 
> HDFS-14305.001.patch, HDFS-14305.002.patch, HDFS-14305.003.patch, 
> HDFS-14305.004.patch, HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> of serial numbers. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-10-04 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944863#comment-16944863
 ] 

Arpit Agarwal commented on HDFS-14305:
--

How do we guarantee that the ranges will not have an overlap across NameNodes? 
This is arguably worse than what we had before.

I am -1 on this change and would like to see this reverted.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Konstantin Shvachko
>Priority: Major
>  Labels: multi-sbnn, release-blocker
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14305-007.patch, HDFS-14305-008.patch, 
> HDFS-14305.001.patch, HDFS-14305.002.patch, HDFS-14305.003.patch, 
> HDFS-14305.004.patch, HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> of serial numbers. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2164) om.db.checkpoints is filling up fast

2019-10-04 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2164:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> om.db.checkpoints is filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should clean it up as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323751
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:55
Start Date: 04/Oct/19 21:55
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1528: HDDS-2181. 
Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538572646
 
 
   New changes LGTM. There are some acceptance tests still failing; can you 
check whether they are related? Could you also rebase the PR to see whether 
they are caused by this change? (As per @elek's comment, all acceptance tests 
are now passing on trunk.)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323751)
Time Spent: 5h 50m  (was: 5h 40m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Currently, Ozone manager sends "WRITE" as ACLType for key create, key delete 
> and bucket create operation. Fix the acl type in all requests to the 
> authorizer.
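
A minimal sketch of the fix described above, with all names invented for 
illustration: the point is simply that each request type maps to the ACL type 
matching its semantics instead of a blanket WRITE.

{code:java}
import java.util.EnumMap;
import java.util.Map;

public class AclTypeMappingSketch {
  enum Op { CREATE_BUCKET, CREATE_KEY, DELETE_KEY, WRITE_KEY }
  enum AclType { CREATE, DELETE, WRITE }

  // Per-operation ACL type checked against the authorizer.
  static final Map<Op, AclType> REQUIRED_ACL = new EnumMap<>(Op.class);
  static {
    REQUIRED_ACL.put(Op.CREATE_BUCKET, AclType.CREATE); // was WRITE
    REQUIRED_ACL.put(Op.CREATE_KEY, AclType.CREATE);    // was WRITE
    REQUIRED_ACL.put(Op.DELETE_KEY, AclType.DELETE);    // was WRITE
    REQUIRED_ACL.put(Op.WRITE_KEY, AclType.WRITE);      // unchanged
  }

  public static void main(String[] args) {
    REQUIRED_ACL.forEach((op, acl) ->
        System.out.println(op + " -> checkAccess(" + acl + ")"));
  }
}
{code}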



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944852#comment-16944852
 ] 

Hudson commented on HDDS-2250:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17484 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17484/])
HDDS-2250. Generated configs missing from ozone-filesystem-lib jars (elek: rev 
a3cf54ccdc3e59ca4a9a48d42f24ab96ec4c0583)
* (edit) hadoop-ozone/ozonefs-lib-current/pom.xml


> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 {{ozone-default-generated.xml}} files.  With the new 
> config in place, client configs are missing, so Ratis client gets evicted 
> immediately due to {{scm.container.client.idle.threshold}} = 0.  This results 
> in NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}
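> For context, a minimal sketch of why that threshold matters (assuming the 
> client cache behaves like a Guava cache with the idle threshold used as 
> {{expireAfterAccess}}; the names below are illustrative, not the exact 
> {{XceiverClientManager}} internals):
> {code:java}
> import com.google.common.cache.Cache;
> import com.google.common.cache.CacheBuilder;
> import java.util.concurrent.TimeUnit;
>
> public class IdleThresholdSketch {
>   public static void main(String[] args) {
>     // What the client sees when the generated config defaults are missing:
>     long idleThresholdMs = 0;
>     Cache<String, String> clients = CacheBuilder.newBuilder()
>         .expireAfterAccess(idleThresholdMs, TimeUnit.MILLISECONDS)
>         .build();
>     clients.put("pipeline-1", "ratis-client");
>     // With a 0 ms idle threshold every entry expires on the next access,
>     // so the lookup returns null -- the "client is null" NPE above.
>     System.out.println(clients.getIfPresent("pipeline-1"));  // prints null
>   }
> }
> {code}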



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?focusedWorklogId=323744&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323744
 ]

ASF GitHub Bot logged work on HDDS-2252:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:33
Start Date: 04/Oct/19 21:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1602: HDDS-2252. 
Enable gdpr robot test in daily build
URL: https://github.com/apache/hadoop/pull/1602#issuecomment-538567222
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 87 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 43 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 948 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 39 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2223 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1602 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 533275c9d8f3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8de4374 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/testReport/ |
   | Max. process+thread count | 362 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1602/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323744)
Time Spent: 50m  (was: 40m)

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As reported by [~ele

[jira] [Work logged] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2250?focusedWorklogId=323742&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323742
 ]

ASF GitHub Bot logged work on HDDS-2250:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:31
Start Date: 04/Oct/19 21:31
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1597: HDDS-2250. Generated 
configs missing from ozone-filesystem-lib jars
URL: https://github.com/apache/hadoop/pull/1597#issuecomment-538566511
 
 
   +1. Thank you very much for the fix, and for the continuous work to resolve 
all the acceptance test failures.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323742)
Time Spent: 1h  (was: 50m)

> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 {{ozone-default-generated.xml}} files.  With the new 
> config in place, client configs are missing, so Ratis client gets evicted 
> immediately due to {{scm.container.client.idle.threshold}} = 0.  This results 
> in NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?focusedWorklogId=323743&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323743
 ]

ASF GitHub Bot logged work on HDDS-2252:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:31
Start Date: 04/Oct/19 21:31
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1602: HDDS-2252. 
Enable gdpr robot test in daily build
URL: https://github.com/apache/hadoop/pull/1602#issuecomment-538566605
 
 
   The checkstyle issue is unrelated to this patch. Filed HDDS-2257 to address it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323743)
Time Spent: 40m  (was: 0.5h)

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-2250:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 {{ozone-default-generated.xml}} files.  With the new 
> config in place, client configs are missing, so Ratis client gets evicted 
> immediately due to {{scm.container.client.idle.threshold}} = 0.  This results 
> in NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=323740&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323740
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:30
Start Date: 04/Oct/19 21:30
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1542: HDDS-2140. 
Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#issuecomment-538566236
 
 
   
   > But let's do it in a follow-up jira. Too many issues in the queue. I will 
commit it right now...
   
   
   
   Addressed this in HDDS-2252
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323740)
Time Spent: 2.5h  (was: 2h 20m)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2250) Generated configs missing from ozone-filesystem-lib jars

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2250?focusedWorklogId=323741&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323741
 ]

ASF GitHub Bot logged work on HDDS-2250:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:30
Start Date: 04/Oct/19 21:30
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1597: HDDS-2250. 
Generated configs missing from ozone-filesystem-lib jars
URL: https://github.com/apache/hadoop/pull/1597
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323741)
Time Spent: 50m  (was: 40m)

> Generated configs missing from ozone-filesystem-lib jars
> 
>
> Key: HDDS-2250
> URL: https://issues.apache.org/jira/browse/HDDS-2250
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hadoop 3.1 and 3.2 acceptance tests started failing with HDDS-1720, which 
> added a new, annotated configuration class.
> The [change itself|https://github.com/apache/hadoop/pull/1538/files] looks 
> fine.  The problem is that the packaging process for {{ozone-filesystem-lib}} 
> jars keeps only 1 or 2 {{ozone-default-generated.xml}} files.  With the new 
> config in place, client configs are missing, so Ratis client gets evicted 
> immediately due to {{scm.container.client.idle.threshold}} = 0.  This results 
> in NPE:
> {code:title=https://elek.github.io/ozone-ci-q4/pr/pr-hdds-1720-trunk-rd9ht/acceptance/summary.html#s1-s5-t1-k2-k2}
> Running command 'hdfs dfs -put /opt/hadoop/NOTICE.txt 
> o3fs://bucket1.vol1/ozone-14607
> ...
> -put: Fatal internal error
> java.lang.NullPointerException: client is null
>   at java.util.Objects.requireNonNull(Objects.java:228)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:208)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:234)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:332)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2257:
---

 Summary: Fix checkstyle issues in ChecksumByteBuffer
 Key: HDDS-2257
 URL: https://issues.apache.org/jira/browse/HDDS-2257
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia


hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 84: Inner assignments should be avoided.
 85: Inner assignments should be avoided.
 101: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 102: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 103: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 104: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 105: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 106: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 107: 'case' child has incorrect indentation level 8, expected level 
should be 6.
 108: 'case' child has incorrect indentation level 8, expected level 
should be 6.
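
For readers hitting the first rule for the first time, a minimal before/after 
sketch of an "inner assignment" (illustrative only; this is not the actual 
ChecksumByteBuffer code):

{code:java}
class InnerAssignmentExample {
  // Flagged: the assignment to x is nested inside another expression.
  static int flagged(int crc, int b) {
    int x;
    int y = (x = crc ^ b);  // checkstyle: inner assignments should be avoided
    return x + y;
  }

  // Preferred: one assignment per statement.
  static int fixed(int crc, int b) {
    int x = crc ^ b;
    int y = x;
    return x + y;
  }
}
{code}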



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323735
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 21:11
Start Date: 04/Oct/19 21:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538561240
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 81 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 940 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1025 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 790 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2500 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e5f2745d8bf8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8de4374 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/8/artifact/ou

[jira] [Commented] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-04 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944814#comment-16944814
 ] 

Anu Engineer commented on HDDS-2247:


Perhaps we should always do the GDPR handling, irrespective of what the 
encryption status is. The issue is that we don't control the lifetime of the 
encryption keys at all.

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting the GDPR Encryption Key on the delete 
> file operation.
> However, if KMS is enabled, we skip the GDPR Encryption Key approach when 
> writing a file in a GDPR-enforced Bucket:
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> {code}
> In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
> user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
> before moving it to the deletedTable; otherwise we cannot guarantee the Right 
> to Erasure.
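> A minimal, self-contained sketch of the proposed delete path (the types and 
> the metadata key below are stand-ins for illustration, not the real OmKeyInfo 
> API):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> class DeletedTableSketch {
>   // Stand-in for OmKeyInfo; field and key names are illustrative.
>   static class KeyInfo {
>     String name;
>     Object fileEncryptionInfo;          // stands in for FileEncryptionInfo
>     Map<String, String> metadata = new HashMap<>();
>   }
>
>   static final Map<String, KeyInfo> DELETED_TABLE = new HashMap<>();
>
>   static void moveToDeletedTable(KeyInfo key) {
>     key.metadata.remove("gdprSecret");  // GDPR secret is already dropped today
>     key.fileEncryptionInfo = null;      // proposed: also drop the KMS info
>     DELETED_TABLE.put(key.name, key);   // retained record holds no key material
>   }
> }
> {code}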



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14893) TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT failing on branch-2

2019-10-04 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944812#comment-16944812
 ] 

Jim Brennan commented on HDFS-14893:


This is failing on this line:
{noformat}
assertTrue(logCapture.getOutput().contains("Assuming Standby state"));
{noformat}
But no code generates that string anymore. It looks like this was caused by 
HDFS-14785, which changed the logging in getHAServiceState().
It appears to be fixed on trunk by HDFS-14245.
[~xkrogen] I don't know whether the correct fix is to pull back HDFS-14245 or 
to just fix this test in branch-2.
cc: [~jhung]


> TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT failing on 
> branch-2
> --
>
> Key: HDFS-14893
> URL: https://issues.apache.org/jira/browse/HDFS-14893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.10.0
>Reporter: Jim Brennan
>Priority: Minor
>
> TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT() is failing 
> on branch-2
> {noformat}
> [INFO] Running 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
> [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.994 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
> [ERROR] 
> testObserverReadProxyProviderWithDT(org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA)
>   Time elapsed: 0.648 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT(TestDelegationTokensWithHA.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?focusedWorklogId=323683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323683
 ]

ASF GitHub Bot logged work on HDDS-2252:


Author: ASF GitHub Bot
Created on: 04/Oct/19 20:37
Start Date: 04/Oct/19 20:37
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1602: HDDS-2252. Enable 
gdpr robot test in daily build
URL: https://github.com/apache/hadoop/pull/1602#issuecomment-538551274
 
 
   /label ozone
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323683)
Time Spent: 20m  (was: 10m)

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14893) TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT failing on branch-2

2019-10-04 Thread Jim Brennan (Jira)
Jim Brennan created HDFS-14893:
--

 Summary: 
TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT failing on 
branch-2
 Key: HDFS-14893
 URL: https://issues.apache.org/jira/browse/HDFS-14893
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.10.0
Reporter: Jim Brennan


TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT() is failing on 
branch-2
{noformat}
[INFO] Running 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
[ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.994 s 
<<< FAILURE! - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
[ERROR] 
testObserverReadProxyProviderWithDT(org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA)
  Time elapsed: 0.648 s  <<< FAILURE!
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT(TestDelegationTokensWithHA.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?focusedWorklogId=323684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323684
 ]

ASF GitHub Bot logged work on HDDS-2252:


Author: ASF GitHub Bot
Created on: 04/Oct/19 20:37
Start Date: 04/Oct/19 20:37
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1602: HDDS-2252. Enable 
gdpr robot test in daily build
URL: https://github.com/apache/hadoop/pull/1602#issuecomment-538551306
 
 
   +1, pending Jenkins.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323684)
Time Spent: 0.5h  (was: 20m)

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?focusedWorklogId=323682&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323682
 ]

ASF GitHub Bot logged work on HDDS-2252:


Author: ASF GitHub Bot
Created on: 04/Oct/19 20:36
Start Date: 04/Oct/19 20:36
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1602: 
HDDS-2252. Enable gdpr robot test in daily build
URL: https://github.com/apache/hadoop/pull/1602
 
 
   **What changes were proposed in this pull request?**
   Updated the test.sh script to include the gdpr.robot test so that it gets 
triggered as part of the daily build.
   
   **Link to Apache JIRA**
   https://issues.apache.org/jira/browse/HDDS-2252
   
   **How was this patch tested?**
   Started Docker
   cd ~/ozone-SNAPSHOT/compose/ozone
   ./test.sh
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323682)
Remaining Estimate: 0h
Time Spent: 10m

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323681
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 04/Oct/19 20:36
Start Date: 04/Oct/19 20:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1600: HDDS-2239. Fix 
TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-538551062
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 119 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 57 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 46 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1044 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 824 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 2647 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1600 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a09fa9b23a80 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8de4374 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/2/artifact/out/patch-mvninstall-hado

[jira] [Updated] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2252:
-
Labels: pull-request-available  (was: )

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2252) Enable gdpr robot test in daily build

2019-10-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2252:

Status: Patch Available  (was: Open)

> Enable gdpr robot test in daily build
> -
>
> Key: HDDS-2252
> URL: https://issues.apache.org/jira/browse/HDDS-2252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As reported by [~elek] in 
> https://github.com/apache/hadoop/pull/1542#pullrequestreview-297424033
> "One thing what I found, I think it's not yet enabled in the daily builds.
> I think in the hadoop-ozone/dist/src/main/compose/ozone/test.sh we need a new 
> line:
> execute_robot_test gdpr.robot"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944799#comment-16944799
 ] 

Hudson commented on HDDS-2158:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17483 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17483/])
HDDS-2158. Fixing Json Injection Issue in JsonUtils. (#1486) (github: rev 
8de4374427e77d5d9b79a710ca9225f749556eda)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/AddAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/AddAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/SetAclVolumeHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/AddAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/SetAclKeyHandler.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/ListSubcommand.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/SetAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/RemoveAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/GetAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/GetTokenHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/RemoveAclKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/GetAclVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/RemoveAclBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/JsonUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java


> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the Json 
> string before serializing it, which could result in Json injection.
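> One common way to close this class of issue (a sketch of the general 
> parse-then-serialize pattern, not necessarily the committed fix) is to parse 
> the input into a tree first, so malformed or injected content fails fast 
> instead of being echoed into the pretty-printed output:
> {code:java}
> import com.fasterxml.jackson.databind.JsonNode;
> import com.fasterxml.jackson.databind.ObjectMapper;
> import java.io.IOException;
>
> public final class SafePrettyPrint {
>   private static final ObjectMapper MAPPER = new ObjectMapper();
>
>   static String toPrettyJson(String rawJson) throws IOException {
>     JsonNode tree = MAPPER.readTree(rawJson);  // rejects invalid JSON input
>     return MAPPER.writerWithDefaultPrettyPrinter().writeValueAsString(tree);
>   }
> }
> {code}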



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2158?focusedWorklogId=323676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323676
 ]

ASF GitHub Bot logged work on HDDS-2158:


Author: ASF GitHub Bot
Created on: 04/Oct/19 19:52
Start Date: 04/Oct/19 19:52
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1486: 
HDDS-2158. Fixing Json Injection Issue in JsonUtils.
URL: https://github.com/apache/hadoop/pull/1486
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323676)
Time Spent: 4h  (was: 3h 50m)

> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the Json 
> string before serializing it, which could result in Json injection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-04 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-2158.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the Json 
> string before serializing it, which could result in Json injection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2158?focusedWorklogId=323675&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323675
 ]

ASF GitHub Bot logged work on HDDS-2158:


Author: ASF GitHub Bot
Created on: 04/Oct/19 19:52
Start Date: 04/Oct/19 19:52
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1486: HDDS-2158. 
Fixing Json Injection Issue in JsonUtils.
URL: https://github.com/apache/hadoop/pull/1486#issuecomment-538537962
 
 
   Remaining failures are not related to this patch.
   Committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323675)
Time Spent: 3h 50m  (was: 3h 40m)

> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the Json 
> string before serializing it, which could result in Json injection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944788#comment-16944788
 ] 

Hudson commented on HDDS-2164:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17482 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17482/])
HDDS-2164 : om.db.checkpoints is getting filling up fast. (#1536) (aengineer: 
rev f3eaa84f9d2db47741fae1394e182f3ea60a1331)
* (edit) 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RDBCheckpointManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServlet.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/RocksDBCheckpoint.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMDbCheckpointServlet.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconUtils.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestOzoneManagerServiceProviderImpl.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/TestOmUtils.java


> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323670
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 19:44
Start Date: 04/Oct/19 19:44
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1536: HDDS-2164 
: om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323670)
Time Spent: 2h 20m  (was: 2h 10m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2256) Checkstyle issues in CheckSumByteBuffer.java

2019-10-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2256:
--

 Summary: Checkstyle issues in CheckSumByteBuffer.java
 Key: HDDS-2256
 URL: https://issues.apache.org/jira/browse/HDDS-2256
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


HDDS- added some checkstyle failures in CheckSumByteBuffer.java. This JIRA 
is to track and fix those checkstyle issues.

{code}
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 84: Inner assignments should be avoided.
 85: Inner assignments should be avoided.
 101: child has incorrect indentation level 8, expected level should be 6.
 102: child has incorrect indentation level 8, expected level should be 6.
 103:  child has incorrect indentation level 8, expected level should be 6.
 104:  child has incorrect indentation level 8, expected level should be 6.
 105: child has incorrect indentation level 8, expected level should be 6.
 106:  child has incorrect indentation level 8, expected level should be 6.
 107:  child has incorrect indentation level 8, expected level should be 6.
 108: child has incorrect indentation level 8, expected level should be 6.
{code}
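
For reference, the "inner assignment" pattern that checkstyle flags looks like 
this (an illustrative snippet, not the actual ChecksumByteBuffer code):

{code:java}
public class InnerAssignmentSketch {
  static int nextValue() {
    return 42;  // placeholder value for illustration
  }

  public static void main(String[] args) {
    int a, b;
    a = b = nextValue();   // flagged: "Inner assignments should be avoided"

    b = nextValue();       // preferred: one assignment per statement
    a = b;
    System.out.println(a + " " + b);
  }
}
{code}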



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323660&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323660
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 04/Oct/19 19:24
Start Date: 04/Oct/19 19:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1600: HDDS-2239. Fix 
TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-538529702
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1066 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1167 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 37 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 43 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 33 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 871 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 34 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 2747 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1600 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 40e4cb25f01b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 10bdc59 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1600/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/

[jira] [Updated] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14497:
---
Fix Version/s: 3.2.2
   3.1.4

> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14497-addendum.001.patch, HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so any subsequent RPC 
> read/write request or internal NameNode thread that tries to acquire the 
> global read/write lock is paused until metasave releases it.
> I propose changing the write lock to a read lock so that read requests can be 
> processed normally. Metasave only reads the information it reports, so 
> allowing concurrent reads should not change what it collects.
> We do need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially if both streams 
> hold the same file handle or otherwise share an output stream.
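> A rough sketch of the proposed locking pattern (illustrative only; the real 
> FSNamesystem code differs):
> {code:java}
> import java.io.PrintWriter;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> public class MetaSaveSketch {
>   private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
>   private final Object metaSaveGuard = new Object();
>
>   void metaSave(PrintWriter out) {
>     synchronized (metaSaveGuard) {   // at most one metasave at a time
>       fsLock.readLock().lock();      // other readers proceed; writers wait
>       try {
>         out.println("...namespace summary...");
>       } finally {
>         fsLock.readLock().unlock();
>       }
>     }
>   }
> }
> {code}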



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-10-04 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944767#comment-16944767
 ] 

Wei-Chiu Chuang commented on HDFS-2470:
---

Thanks! Just in time!

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, 
> HDFS-2470.09.patch, HDFS-2470.branch-3.1.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14890) Setting permissions on name directory fails on non posix compliant filesystems

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14890:
---
Fix Version/s: 3.1.4

> Setting permissions on name directory fails on non posix compliant filesystems
> --
>
> Key: HDFS-14890
> URL: https://issues.apache.org/jira/browse/HDFS-14890
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: Windows 10.
>Reporter: hirik
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14890.01.patch
>
>
> Hi,
> HDFS NameNode and JournalNode are not starting on a Windows machine. Found the 
> related exception below in the logs:
> Caused by: java.lang.UnsupportedOperationException
> at java.base/java.nio.file.Files.setPosixFilePermissions(Files.java:2155)
> at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
> at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
> at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:422)
> at 
> com.slog.dfs.hdfs.nn.NameNodeServiceImpl.delayedStart(NameNodeServiceImpl.java:147)
>  
> Code changes related to this issue: 
> [https://github.com/apache/hadoop/commit/07e3cf952eac9e47e7bd5e195b0f9fc28c468313#diff-1a56e69d50f21b059637cfcbf1d23f11]
>  
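> A portable guard of the kind such a fix typically needs (a sketch under 
> assumed names, not the actual patch):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.attribute.PosixFilePermission;
> import java.util.Set;
>
> public final class PermissionSketch {
>   /** Only attempt POSIX permissions when the file store supports them. */
>   public static void setPermissionsIfSupported(Path dir,
>       Set<PosixFilePermission> perms) throws IOException {
>     if (dir.getFileSystem().supportedFileAttributeViews().contains("posix")) {
>       Files.setPosixFilePermissions(dir, perms);
>     }
>     // On non-POSIX stores (e.g. NTFS) we fall through instead of letting
>     // setPosixFilePermissions throw UnsupportedOperationException.
>   }
> }
> {code}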



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1949) Missing or error-prone test cleanup

2019-10-04 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944763#comment-16944763
 ] 

Siyao Meng commented on HDDS-1949:
--

Thanks [~adoroszlai]!

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.
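> A minimal sketch of the fix pattern (JUnit 4 style; the cluster class here is 
> a hypothetical stand-in):
> {code:java}
> import org.junit.After;
> import org.junit.Test;
>
> public class CleanupSketchTest {
>   static class MiniClusterStub {
>     void shutdown() { /* release resources */ }
>   }
>
>   private MiniClusterStub cluster;
>
>   // @After runs whether the test passed or failed, unlike cleanup code
>   // placed at the end of the test method body.
>   @After
>   public void tearDown() {
>     if (cluster != null) {
>       cluster.shutdown();
>     }
>   }
>
>   @Test
>   public void testSomething() {
>     cluster = new MiniClusterStub();
>     // assertions here; no in-line cleanup needed
>   }
> }
> {code}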



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1949) Missing or error-prone test cleanup

2019-10-04 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944756#comment-16944756
 ] 

Attila Doroszlai commented on HDDS-1949:


Hi [~smeng], thanks for taking the time to find the culprit and sorry for the 
trouble. When I created the original PR, TestOzoneFsHAURLs did not exist, and I 
didn't notice during rebase that it was broken. Just before I saw your comment, 
I submitted a [fix|https://github.com/apache/hadoop/pull/1600] for 
TestOzoneFsHAURLs.

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-10-04 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944755#comment-16944755
 ] 

Eric Yang commented on HDFS-2470:
-

[~weichiu] You might need to backport HDFS-14890, if you intend to apply this 
patch to branch-3.1.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, 
> HDFS-2470.09.patch, HDFS-2470.branch-3.1.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-10-04 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14245:
---
Fix Version/s: 3.2.2
   3.1.4

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116; we applied a simple fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-10-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944751#comment-16944751
 ] 

Erik Krogen commented on HDFS-14245:


Backported to 3.2 and 3.1, but this depends on HDFS-14162, so I'll wait to put 
together a branch-2 patch until that backport is committed.

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116; we applied a simple fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-10-04 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944750#comment-16944750
 ] 

Wei-Chiu Chuang commented on HDFS-2470:
---

Pushed to branch-3.1 with trivial conflicts. Attached  
[^HDFS-2470.branch-3.1.patch]  for posterity.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, 
> HDFS-2470.09.patch, HDFS-2470.branch-3.1.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-2470:
--
Attachment: HDFS-2470.branch-3.1.patch

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, 
> HDFS-2470.09.patch, HDFS-2470.branch-3.1.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1949) Missing or error-prone test cleanup

2019-10-04 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944749#comment-16944749
 ] 

Siyao Meng commented on HDDS-1949:
--

[~adoroszlai] This commit triggers an NPE in the integration test 
TestOzoneFsHAURLs when it's cleaning up / shutting down the mini HA cluster: 
https://github.com/apache/hadoop/pull/1365/commits / 
https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1949-46ffz/integration/summary.md
Any idea why? Would you take a quick look?

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323653
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 04/Oct/19 18:38
Start Date: 04/Oct/19 18:38
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1600: HDDS-2239. Fix 
TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600#issuecomment-538514512
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323653)
Time Spent: 20m  (was: 10m)

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2239:
-
Labels: pull-request-available  (was: )

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?focusedWorklogId=323652&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323652
 ]

ASF GitHub Bot logged work on HDDS-2239:


Author: ASF GitHub Bot
Created on: 04/Oct/19 18:37
Start Date: 04/Oct/19 18:37
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1600: HDDS-2239. 
Fix TestOzoneFsHAUrls
URL: https://github.com/apache/hadoop/pull/1600
 
 
   ## What changes were proposed in this pull request?
   
   Make sure context classloader is restored to avoid NPE during shutdown.
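   
   A sketch of the usual save-and-restore pattern (assumed shape; the actual
patch may differ, and `AlternateLoader` below is hypothetical):
   
   ```java
   ClassLoader original = Thread.currentThread().getContextClassLoader();
   try {
     // Hypothetical: install whatever loader the code under test needs.
     Thread.currentThread().setContextClassLoader(
         AlternateLoader.class.getClassLoader());
     // ... work that relies on the alternate context classloader ...
   } finally {
     // Restore unconditionally so later code (e.g. mini-cluster shutdown)
     // does not observe a stale loader and hit an NPE.
     Thread.currentThread().setContextClassLoader(original);
   }
   ```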
   
   https://issues.apache.org/jira/browse/HDDS-2239
   
   ## How was this patch tested?
   
   Ran `TestOzoneFsHAUrls` in IDE.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323652)
Remaining Estimate: 0h
Time Spent: 10m

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323649
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 18:29
Start Date: 04/Oct/19 18:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538511398
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | -1 | mvninstall | 34 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 931 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1017 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   | -0 | patch | 1045 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 51 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 786 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2538 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1536 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 13dcca6818b6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aa24add |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/

[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-2470:
--
Fix Version/s: 3.1.4

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2237) KeyDeletingService throws NPE if it's started too early

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2237:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> KeyDeletingService throws NPE if it's started too early
> ---
>
> Key: HDDS-2237
> URL: https://issues.apache.org/jira/browse/HDDS-2237
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: om
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> 1. OzoneManager starts KeyManager
> 2. KeyManager starts KeyDeletingService
> 3. KeyDeletingService uses OzoneManager.isLeader()
> 4. OzoneManager.isLeader() uses omRatisServer
> 5. omRatisServer can be null (boom)
>  
> The current initialization order in OzoneManager is:
>  
> new KeymanagerServer() *includes start()*
> omRatisServer initialization
> start() (includes KeyManager.start())
>  
> The solution seems easy: start the KeyManager only from 
> OzoneManager.start(), not from OzoneManager.instantiateServices().
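> A minimal self-contained sketch of the ordering bug and the proposed fix 
> (class and method names are illustrative, not the actual OzoneManager code):
> {code:java}
> public class OrderingSketch {
>   static class RatisServer {
>     boolean isLeader() { return true; }
>   }
>
>   private RatisServer ratisServer;    // null until start()
>   private Runnable keyDeletingService;
>
>   boolean isLeader() {
>     return ratisServer.isLeader();    // NPE if invoked before start()
>   }
>
>   void instantiateServices() {
>     // Construct only -- do NOT start the background service here.
>     keyDeletingService = () -> {
>       if (isLeader()) { /* delete pending keys */ }
>     };
>   }
>
>   void start() {
>     ratisServer = new RatisServer();  // 1. bring up Ratis first
>     keyDeletingService.run();         // 2. only now is isLeader() safe
>   }
> }
> {code}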



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2239:
--

Assignee: Attila Doroszlai

> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2239 started by Attila Doroszlai.
--
> Fix TestOzoneFsHAUrls
> -
>
> Key: HDDS-2239
> URL: https://issues.apache.org/jira/browse/HDDS-2239
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Attila Doroszlai
>Priority: Major
>
> [https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14162) Balancer should work with ObserverNode

2019-10-04 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14162:
---
Attachment: HDFS-14162-branch-2.004.patch

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, 
> HDFS-14162-branch-2.004.patch, HDFS-14162.000.patch, HDFS-14162.001.patch, 
> HDFS-14162.002.patch, HDFS-14162.003.patch, HDFS-14162.004.patch, 
> ReflectionBenchmark.java, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-10-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944735#comment-16944735
 ] 

Erik Krogen commented on HDFS-14162:


Backported to branch-3.2 and branch-3.1. Attaching branch-2 patch since there 
were conflicts in {{NameNodeProxies}} -- the main thing was that in 
branch-3.1+, the RPC timeout is specified separately by each proxy type, 
whereas in branch-2 the default RPC timeout is used for all of them. 
Additionally, the new interface method on {{HAProxyFactory}} couldn't be marked 
{{default}} since that's a Java 8 feature. No class was making use of the 
default. [~shv], can you help review the branch-2 backport?
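
For context, the Java 8 feature in question, shown with an illustrative 
interface (not the actual {{HAProxyFactory}} declaration):
{code:java}
// Java 8+: an interface method may carry a default body.
interface ProxyFactorySketch<T> {
  T createProxy(String address);

  default void configure(Object context) {
    // no-op default, so existing implementations need not override it
  }
}
// branch-2 builds with Java 7, which has no default methods, so the body
// must either move into every implementing class or be dropped entirely.
{code}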

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, 
> HDFS-14162-branch-2.004.patch, HDFS-14162.000.patch, HDFS-14162.001.patch, 
> HDFS-14162.002.patch, HDFS-14162.003.patch, HDFS-14162.004.patch, 
> ReflectionBenchmark.java, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2255) Improve Acl Handler Messages

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2255:
-
Labels: newbie  (was: )

> Improve Acl Handler Messages
> 
>
> Key: HDDS-2255
> URL: https://issues.apache.org/jira/browse/HDDS-2255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om
>Reporter: Hanisha Koneru
>Priority: Minor
>  Labels: newbie
>
> In the Add/Remove/Set Acl Key/Bucket/Volume handlers, we print a message about 
> whether the operation was successful or not. If we try to add an ACL that 
> already exists, we only convey that the operation failed. It would be better 
> if the message conveyed more clearly why the operation failed, i.e. that the 
> ACL already exists. 
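> A sketch of the clearer message (the ACL set below is a stand-in for the real 
> handler and client calls):
> {code:java}
> import java.util.HashSet;
> import java.util.Set;
>
> public class AclMessageSketch {
>   public static void main(String[] args) {
>     Set<String> acls = new HashSet<>();   // stand-in for the object's ACLs
>     String acl = "user:testuser:rw";
>
>     boolean added = acls.add(acl);        // false if it already exists
>     if (added) {
>       System.out.printf("ACL %s added successfully.%n", acl);
>     } else {
>       // Say *why* the operation failed instead of a generic failure.
>       System.out.printf("ACL %s already exists; nothing to add.%n", acl);
>     }
>   }
> }
> {code}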



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14162) Balancer should work with ObserverNode

2019-10-04 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14162:
---
Fix Version/s: 3.2.2
   3.1.4

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, 
> HDFS-14162.001.patch, HDFS-14162.002.patch, HDFS-14162.003.patch, 
> HDFS-14162.004.patch, ReflectionBenchmark.java, 
> testBalancerWithObserver-3.patch, testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2233) Remove ByteStringHelper and refactor the code to the place where it used

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2233?focusedWorklogId=323637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323637
 ]

ASF GitHub Bot logged work on HDDS-2233:


Author: ASF GitHub Bot
Created on: 04/Oct/19 18:09
Start Date: 04/Oct/19 18:09
Worklog Time Spent: 10m 
  Work Description: fapifta commented on issue #1596: HDDS-2233 - Remove 
ByteStringHelper and refactor the code to the place where it used
URL: https://github.com/apache/hadoop/pull/1596#issuecomment-538504664
 
 
   Acceptance test failures seem to be related to a similar failure we have 
been examining internally in XceiverClientGrpc, where the relevant stack trace is:
   Caused by: java.lang.NullPointerException
   at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:387)
   at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithRetry(XceiverClientGrpc.java:285)
   at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithTraceIDAndRetry(XceiverClientGrpc.java:251)
   at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommand(XceiverClientGrpc.java:234)
   at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.readChunk(ContainerProtocolCalls.java:245)
   at 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.readChunk(ChunkInputStream.java:335)
   at 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.readChunkFromContainer(ChunkInputStream.java:307)
   at 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.prepareRead(ChunkInputStream.java:249)
   at 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.read(ChunkInputStream.java:144)
   at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:239)
   at 
org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
   at 
org.apache.hadoop.fs.ozone.OzoneFSInputStream.read(OzoneFSInputStream.java:52)
   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
   at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:112)
   at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:555)
   
   From the integration test failures, the 
org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient failure might be 
related, but in its output before the error we can see that the client 
requested a replication factor of 3 while the cluster had only 1 datanode. 
   
   So the issues seem to be unrelated; however, I cannot easily run these tests 
locally, as they time out on my machine.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323637)
Time Spent: 40m  (was: 0.5h)

> Remove ByteStringHelper and refactor the code to the place where it used
> 
>
> Key: HDDS-2233
> URL: https://issues.apache.org/jira/browse/HDDS-2233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> See HDDS-2203 where there is a race condition reported by me.
> Later in the discussion we agreed that it is better to refactor the code and 
> remove the class completely for now, and that would also resolve the race 
> condition.
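> A sketch of why a statically initialized helper is racy (illustrative; not 
> the actual ByteStringHelper code):
> {code:java}
> final class StaticHelperSketch {
>   private static volatile boolean unsafeEnabled;
>
>   // Two clients created with different configs race here: the last
>   // writer wins, and both clients then share its setting.
>   static void init(boolean unsafe) {
>     unsafeEnabled = unsafe;
>   }
>
>   static boolean isUnsafeEnabled() {
>     return unsafeEnabled;
>   }
> }
> // Removing the shared static state and passing the flag explicitly at
> // each call site makes the behavior per-call and removes the race.
> {code}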



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14162) Balancer should work with ObserverNode

2019-10-04 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reopened HDFS-14162:


Re-opening for backport to older branches, which should have been done from the 
start.

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, 
> HDFS-14162.001.patch, HDFS-14162.002.patch, HDFS-14162.003.patch, 
> HDFS-14162.004.patch, ReflectionBenchmark.java, 
> testBalancerWithObserver-3.patch, testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323621&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323621
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 17:50
Start Date: 04/Oct/19 17:50
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538497245
 
 
   > @avijayanhwx findbugs violations are related to this change, can you take 
a look at it?
   
   My latest patch should fix the findbugs issues. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323621)
Time Spent: 2h  (was: 1h 50m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=323615&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323615
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 04/Oct/19 17:48
Start Date: 04/Oct/19 17:48
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538496672
 
 
   /retest
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323615)
Time Spent: 1h 50m  (was: 1h 40m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=323612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323612
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 04/Oct/19 17:45
Start Date: 04/Oct/19 17:45
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1517: HDDS-2169. Avoid 
buffer copies while submitting client requests in Ratis
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-538495734
 
 
   Thanks @szetszwo for updating the patch. I tried to run the tests in 
TestDataValidateWithUnsafeByteOperations and I see the following exception 
being thrown:
   `2019-10-04 23:13:25,556 
[ce18dfb1-da4d-401f-9614-bec32477b5f3@group-0099BCD205B6-SegmentedRaftLogWorker]
 INFO  segmented.SegmentedRaftLogWorker 
(SegmentedRaftLogWorker.java:execute(574)) - 
ce18dfb1-da4d-401f-9614-bec32477b5f3@group-0099BCD205B6-SegmentedRaftLogWorker: 
created new log segment 
/Users/sbanerjee/github_hadoop/hadoop-ozone/tools/target/test-dir/MiniOzoneClusterImpl-cd3ca672-68cd-49fd-bdb3-a7fc97d18c23/datanode-1/data/ratis/ace05abb-b740-47f7-95d4-0099bcd205b6/current/log_inprogress_0
   2019-10-04 23:13:25,557 
[ee7f2721-1de4-4264-8bf3-d340e83f8791@group-0099BCD205B6-SegmentedRaftLogWorker]
 INFO  segmented.SegmentedRaftLogWorker 
(SegmentedRaftLogWorker.java:execute(574)) - 
ee7f2721-1de4-4264-8bf3-d340e83f8791@group-0099BCD205B6-SegmentedRaftLogWorker: 
created new log segment 
/Users/sbanerjee/github_hadoop/hadoop-ozone/tools/target/test-dir/MiniOzoneClusterImpl-cd3ca672-68cd-49fd-bdb3-a7fc97d18c23/datanode-2/data/ratis/ace05abb-b740-47f7-95d4-0099bcd205b6/current/log_inprogress_0
   2019-10-04 23:13:25,874 [pool-56-thread-1] ERROR impl.ChunkManagerImpl 
(ChunkUtils.java:writeData(89)) - data array does not match the length 
specified. DataLen: 1048576 Byte Array: 1048749
   2019-10-04 23:13:25,874 [pool-56-thread-1] INFO  keyvalue.KeyValueHandler 
(ContainerUtils.java:logAndReturnError(146)) - Operation: WriteChunk : Trace 
ID: cab5af5eafbad5ed:6a87e816d7e0ce20:e3ff42a900c31035:0 : Message: data array 
does not match the length specified. DataLen: 1048576 Byte Array: 1048749 : 
Result: INVALID_WRITE_SIZE
   2019-10-04 23:13:25,881 
[EventQueue-IncrementalContainerReportForIncrementalContainerReportHandler] 
WARN  container.IncrementalContainerReportHandler 
(AbstractContainerReportHandler.java:updateContainerState(143)) - Container #1 
is in OPEN state, but the datanode eb79af53-823f-485d-8402-ff71443cc79f{ip: 
192.168.0.64, host: 192.168.0.64, networkLocation: /default-rack, certSerialId: 
null} reports an UNHEALTHY replica.
   23:13:25.886 [pool-56-thread-1] ERROR DNAudit - user=null | ip=null | 
op=WRITE_CHUNK {blockData=conID: 1 locID: 102905348118937600 bcsId: 0} | 
ret=FAILURE
   java.lang.Exception: data array does not match the length specified. 
DataLen: 1048576 Byte Array: 1048749
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:330)
 ~[classes/:?]
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:150)
 ~[classes/:?]
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:411)
 ~[classes/:?]
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:419)
 ~[classes/:?]
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$1(ContainerStateMachine.java:454)
 ~[classes/:?]
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
 [?:1.8.0_181]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
   2019-10-04 23:13:25,896 [pool-56-thread-1] ERROR ratis.ContainerStateMachine 
(ContainerStateMachine.java:lambda$handleWriteChunk$2(474)) - 
group-0099BCD205B6: writeChunk writeStateMachineData failed: 
blockIdcontainerID: 1
   localID: 102905348118937600
   blockCommitSequenceId: 0
logIndex 1 chunkName 102905348118937600_chunk_1 Error message: data array 
does not match the length specified. DataLen: 1048576 Byte Array: 1048749 
Container Result: INVALID_WRITE_SIZE
   ```
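   
   For context, here is a minimal sketch (plain protobuf-java; an assumption on 
   my part, not the actual Ozone/Ratis code) of how an unsafe zero-copy wrap 
   can leave the backing array longer than the logical chunk length, which 
   would trip a validator that checks the backing array instead of 
   ByteString.size():
   
   ```java
   // Hypothetical illustration: the ByteString reports the logical length,
   // while the backing array keeps its extra leading bytes.
   import java.nio.ByteBuffer;
   import com.google.protobuf.ByteString;
   import com.google.protobuf.UnsafeByteOperations;
   
   public class UnsafeWrapSketch {
     public static void main(String[] args) {
       byte[] backing = new byte[1048749];                 // 173 extra bytes
       ByteBuffer view = ByteBuffer.wrap(backing, 173, 1048576);
       ByteString data = UnsafeByteOperations.unsafeWrap(view); // zero-copy
       System.out.println(data.size());      // 1048576 (logical data length)
       System.out.println(backing.length);   // 1048749 (backing array length)
     }
   }
   ```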
   
   Can you please check?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=323613&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323613
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 04/Oct/19 17:45
Start Date: 04/Oct/19 17:45
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538495750
 
 
   Thanks @vivekratnavel for the update. The latest change in 
OzoneNativeAuthorizer LGTM. Can you take a look at the CI failures? Some of 
them seem related to this change.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323613)
Time Spent: 5.5h  (was: 5h 20m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.
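
As an illustration (enum and class names assumed from the Ozone ACL API, not 
taken from the patch), a sketch of the kind of mapping the fix calls for:

{code:java}
// Hypothetical sketch: request the ACL type matching each OM operation,
// instead of sending WRITE for every request to the authorizer.
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

public final class AclTypeSketch {
  public enum OmOp { CREATE_KEY, DELETE_KEY, CREATE_BUCKET }

  public static ACLType aclFor(OmOp op) {
    switch (op) {
      case CREATE_KEY:
      case CREATE_BUCKET:
        return ACLType.CREATE;  // creation ops should request CREATE
      case DELETE_KEY:
        return ACLType.DELETE;  // deletion ops should request DELETE
      default:
        return ACLType.WRITE;
    }
  }
}
{code}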



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2251) Add an option to customize unit.sh and integration.sh parameters

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2251?focusedWorklogId=323610&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323610
 ]

ASF GitHub Bot logged work on HDDS-2251:


Author: ASF GitHub Bot
Created on: 04/Oct/19 17:42
Start Date: 04/Oct/19 17:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1598: HDDS-2251. Add 
an option to customize unit.sh and integration.sh parameters
URL: https://github.com/apache/hadoop/pull/1598#issuecomment-538494612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 84 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 895 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 43 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 846 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2193 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1598 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 2f550eed1242 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f16651 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/testReport/ |
   | Max. process+thread count | 305 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1598/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323610)
Time Spent: 20m  (was: 10m)

> Add an option to customize unit.sh and integration.sh parameters
> 
>
> Key: HDDS-2251
> URL: https://issues.apache.org/jira/browse/HDDS-2251
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h

[jira] [Updated] (HDFS-14892) Close the output stream if createWrappedOutputStream() fails

2019-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14892:
---
Component/s: encryption

> Close the output stream if createWrappedOutputStream() fails
> 
>
> Key: HDFS-14892
> URL: https://issues.apache.org/jira/browse/HDFS-14892
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Reporter: Kihwal Lee
>Priority: Major
>
> create() in an encryption zone is a two-step process on the client. First, a 
> regular FSOutputStream is created, and then it is wrapped with an encrypted 
> stream.  When there is a system issue or a KMS ACL-based denial, the second 
> phase fails. If the client terminates right away, the shutdown hook closes 
> the output stream opened in the first phase.  But if the client lives on, the 
> output stream leaks.
> Datanode's WebHdfsHandler, DFSClient, DistributedFileSystem, Hdfs 
> (FileContext) and RpcProgramNfs3 all create streams this way, so each needs 
> the same fix; see the sketch below.
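
A minimal sketch of the fix pattern (simplified types and names, an 
illustration rather than the actual patch):

{code:java}
// Close the phase-one stream when the wrapping phase fails, so the
// stream cannot leak if the client lives on after the failure.
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.io.IOUtils;

public final class WrappedCreateSketch {
  interface Wrapper {
    OutputStream wrap(OutputStream raw) throws IOException;
  }

  static OutputStream createWrapped(OutputStream raw, Wrapper wrapper)
      throws IOException {
    try {
      return wrapper.wrap(raw);  // phase two, e.g. fails on a KMS ACL denial
    } catch (IOException | RuntimeException e) {
      IOUtils.closeStream(raw);  // release the phase-one stream
      throw e;
    }
  }
}
{code}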



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-10-04 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944706#comment-16944706
 ] 

Íñigo Goiri commented on HDFS-6524:
---

+1 on  [^HDFS-6524.006.patch].

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.006.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries according to the setting 
> dfsClientConf.maxBlockAcquireFailures, which defaults to 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> also take the block replication factor into account: a cluster that keeps 
> only two replicas per block, or one using a Reed-Solomon encoding solution 
> with a single replica, gains nothing from the extra attempts. Capping 
> retries by replica count helps reduce long-tail latency.
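
An illustrative sketch of the proposed heuristic (names simplified, not taken 
from any of the attached patches): cap the retry count by the replica count.

{code:java}
// Never retry datanode selection more times than there are replicas to try.
public final class RetryCapSketch {
  static int effectiveRetries(int maxBlockAcquireFailures, int replicaCount) {
    return Math.min(maxBlockAcquireFailures, Math.max(replicaCount, 1));
  }

  public static void main(String[] args) {
    // Default maxBlockAcquireFailures is 3; with 2 replicas, retry twice.
    System.out.println(effectiveRetries(3, 2)); // prints 2
  }
}
{code}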



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


