[jira] [Commented] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870038#comment-16870038
 ] 

Wei-Chiu Chuang commented on HDFS-13371:


I stand corrected. I didn't realize this is for RBF, which I'm not much 
involved in. The patch passed all tests, so if the commenters above would like 
to push this forward, please feel free to do so.

> NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication 
> between 2.7 and 3.2
> -
>
> Key: HDFS-13371
> URL: https://issues.apache.org/jira/browse/HDFS-13371
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Minor
> Attachments: HDFS-13371.000.patch
>
>
> KeyProviderUri is not available in 2.7, so when 2.7 clients talk to 3.2 
> services, the client cannot find the key provider URI and triggers a 
> NullPointerException.
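
For illustration, a minimal client-side guard of the kind this implies (a 
hedged sketch, not the attached patch; fs and Path are the generic FileSystem 
API):

{code}
// Illustrative sketch, not the attached patch: when one side of the
// clientProtocol exchange predates the keyProviderUri field (2.7 vs 3.2
// here), the URI can come back null and must be checked before use.
FsServerDefaults defaults = fs.getServerDefaults(new Path("/"));
String keyProviderUri = defaults.getKeyProviderUri();
if (keyProviderUri == null || keyProviderUri.isEmpty()) {
  // No key provider URI available: skip encryption-zone handling.
}
{code}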



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14047) [libhdfs++] Fix hdfsGetLastExceptionRootCause bug in test_libhdfs_threaded.c

2019-06-21 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870037#comment-16870037
 ] 

Daniel Templeton commented on HDFS-14047:
-

I can't at the moment; no desktop/laptop.  @weichiu, could you do the
honors?




> [libhdfs++] Fix hdfsGetLastExceptionRootCause bug in test_libhdfs_threaded.c
> 
>
> Key: HDFS-14047
> URL: https://issues.apache.org/jira/browse/HDFS-14047
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: libhdfs, native
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14047.000.patch, HDFS-14047.001.patch
>
>
> Currently the native client CI tests break deterministically with these 
> errors:
> Libhdfs
> 1 - test_test_libhdfs_threaded_hdfs_static (Failed)
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> [exec] hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> [exec] (unable to get root cause for java.io.FileNotFoundException)
> [exec] (unable to get stack trace for java.io.FileNotFoundException)
>  
> Libhdfs++
> 34 - test_libhdfs_threaded_hdfspp_test_shim_static (Failed)
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> [exec] hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> [exec] (unable to get root cause for java.io.FileNotFoundException)
> [exec] (unable to get stack trace for java.io.FileNotFoundException)






[jira] [Commented] (HDFS-13694) Making md5 computing being in parallel with image loading

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870036#comment-16870036
 ] 

Wei-Chiu Chuang commented on HDFS-13694:


[~leosun08] would you help clean up the findbugs/checkstyle/whitespace warnings?

Other than that, the patch looks good to me.

> Making md5 computing being in parallel with image loading
> -
>
> Key: HDFS-13694
> URL: https://issues.apache.org/jira/browse/HDFS-13694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-13694-001.patch
>
>
> During namenode image loading, the NameNode first computes the MD5 checksum 
> and then loads the image. These two steps can actually run in parallel.
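
A rough sketch of the proposed overlap (loadImage is a hypothetical stand-in 
for the image loader; MD5FileUtils is the existing HDFS checksum helper):

{code}
// Rough sketch of the idea, not the attached patch: compute the checksum on a
// separate thread while the main thread loads the image, then join and verify.
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<MD5Hash> md5Future =
    executor.submit(() -> MD5FileUtils.computeMd5ForFile(imageFile));
loadImage(imageFile);               // hypothetical stand-in for the image loader
MD5Hash computed = md5Future.get(); // join; compare against the stored digest
executor.shutdown();
{code}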






[jira] [Comment Edited] (HDFS-14247) Repeat adding node description into network topology

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870025#comment-16870025
 ] 

Wei-Chiu Chuang edited comment on HDFS-14247 at 6/22/19 2:42 AM:
-

[~elgoiri] you commented on HDFS-10865 that the same issue has existed since 
Hadoop 2.7.
Shall we cherry-pick the fix into the lower branches too? I suspect this has a 
performance impact.


was (Author: jojochuang):
[~elgoiri] you commented on HDFS-10865 that the same issue has existed since 
Hadoop 2.7.
Shall we cherry-pick the fix into the lower branches too?

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.
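
Paraphrased, the duplicated path looks like this (a sketch, not an exact 
excerpt from DatanodeManager):

{code}
// Paraphrased sketch, not an exact excerpt: registerDatanode() inserts the
// node into the topology, then addDatanode() inserts the same node again.
networktopology.add(nodeDescr);   // first insertion
// ...
addDatanode(nodeDescr);           // internally calls networktopology.add(nodeDescr)
{code}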






[jira] [Commented] (HDFS-14247) Repeat adding node description into network topology

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870025#comment-16870025
 ] 

Wei-Chiu Chuang commented on HDFS-14247:


[~elgoiri] you commented on HDFS-10865 that the same issue has existed since 
Hadoop 2.7.
Shall we cherry-pick the fix into the lower branches too?

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.






[jira] [Updated] (HDFS-10865) Datanodemanager adds nodes twice to NetworkTopology

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10865:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

This one duplicates HDFS-14247. Resolving it as a duplicate.

> Datanodemanager adds nodes twice to NetworkTopology
> ---
>
> Key: HDFS-10865
> URL: https://issues.apache.org/jira/browse/HDFS-10865
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-10865.000.patch
>
>
> {{DatanodeManager}} tries to add datanodes to the {{NetworkTopology}} twice 
> in {{registerDatanode()}}.






[jira] [Updated] (HDFS-12061) Add TraceScope for several DFSClient EC operations

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12061:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

HTrace is dead. Won't fix.

> Add TraceScope for several DFSClient EC operations
> --
>
> Key: HDFS-12061
> URL: https://issues.apache.org/jira/browse/HDFS-12061
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-12061.001.patch
>
>
> A number of DFSClient EC operations, including addErasureCodingPolicies, 
> removeErasureCodingPolicy, enableErasureCodingPolicy, and 
> disableErasureCodingPolicy, do not have a TraceScope similar to this:
> {code}
> try (TraceScope ignored = tracer.newScope("getErasureCodingCodecs")) {
> }
> {code}
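
For illustration, a minimal sketch of what such a wrapper could look like for 
one of these calls (assuming DFSClient's existing tracer and namenode fields; 
not the attached patch):

{code}
// Hypothetical sketch, not the attached patch: wrapping one EC operation in a
// TraceScope, mirroring the getErasureCodingCodecs pattern quoted above.
public void enableErasureCodingPolicy(String ecPolicyName) throws IOException {
  checkOpen();
  try (TraceScope ignored = tracer.newScope("enableErasureCodingPolicy")) {
    namenode.enableErasureCodingPolicy(ecPolicyName);
  }
}
{code}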






[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265160&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265160
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 02:16
Start Date: 22/Jun/19 02:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1006: HDDS-1723. 
Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-504618709
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 495 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 899 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 539 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for patch |
   | +1 | mvninstall | 447 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | +1 | javac | 262 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | +1 | findbugs | 527 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1155 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6335 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 31a32ca36b95 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 371452e |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/5/testReport/ |
   | Max. process+thread count | 4699 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265160)
Time Spent: 1h  (was: 50m)

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This 

[jira] [Commented] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870014#comment-16870014
 ] 

Wei-Chiu Chuang commented on HDFS-14586:


Looks like this is similar to HDFS-13529.
[~hexiaoqiao] FYI

> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch
>
>
> When a trash checkpoint timeout arrives, trash first deletes the old 
> checkpoint folder and then creates a new checkpoint folder.
> Because the delete action may take a long time, such as 2 minutes, the new 
> checkpoint folder is created late.
> At the next checkpoint timeout, trash skips deleting the new checkpoint 
> folder, because the new checkpoint folder is younger than one checkpoint 
> interval.
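
An illustrative timeline of the skew (hypothetical numbers, assuming a 
60-minute checkpoint interval):

{code}
// Illustrative timeline with hypothetical numbers (60-minute interval):
// t = 0:00  emptier fires; deleting old checkpoints takes ~2 minutes
// t = 0:02  new checkpoint folder is created (2 minutes "late")
// t = 1:00  emptier fires again; checkpoint age is 58 min < 60 min, so skipped
// t = 2:00  the checkpoint is finally deleted, one interval later than expected
{code}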






[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265159
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 02:07
Start Date: 22/Jun/19 02:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1006: HDDS-1723. 
Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-504618090
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 472 | trunk passed |
   | +1 | compile | 240 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 818 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 321 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 442 | the patch passed |
   | +1 | compile | 257 | the patch passed |
   | +1 | javac | 257 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 647 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 517 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 157 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1000 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 5892 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2d1d889c932f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 371452e |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/4/testReport/ |
   | Max. process+thread count | 5269 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 265159)
Time Spent: 50m  (was: 40m)

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  

[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265158&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265158
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 02:02
Start Date: 22/Jun/19 02:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1006: HDDS-1723. 
Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-504617781
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 484 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 891 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 642 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 446 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | -1 | findbugs | 314 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 264 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1305 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6637 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  org.apache.hadoop.ozone.om.lock.OzoneManagerLock.lambda$new$0() 
invokes inefficient new Short(short) constructor; use Short.valueOf(short) 
instead  At OzoneManagerLock.java:constructor; use Short.valueOf(short) instead 
 At OzoneManagerLock.java:[line 62] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 80bd016d9872 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 371452e |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/testReport/ |
   | Max. process+thread count | 5017 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265157
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 01:58
Start Date: 22/Jun/19 01:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1006: HDDS-1723. 
Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-504617537
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1430 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 86 | Maven dependency ordering for branch |
   | +1 | mvninstall | 564 | trunk passed |
   | +1 | compile | 249 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 877 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | trunk passed |
   | 0 | spotbugs | 221 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 84 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 14 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in the patch failed. |
   | -1 | compile | 32 | hadoop-ozone in the patch failed. |
   | -1 | javac | 32 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 65 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 600  line(s) with tabs. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 86 | hadoop-ozone generated 1 new + 9 unchanged - 0 fixed = 
10 total (was 9) |
   | -1 | findbugs | 36 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 194 | hadoop-hdds in the patch failed. |
   | -1 | unit | 39 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5469 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b7be4208fdc5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 371452e |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/whitespace-tabs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265148
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 01:41
Start Date: 22/Jun/19 01:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1006: HDDS-1723. 
Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-504616423
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 564 | trunk passed |
   | +1 | compile | 251 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 806 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 493 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 437 | the patch passed |
   | +1 | compile | 266 | the patch passed |
   | +1 | javac | 266 | the patch passed |
   | -0 | checkstyle | 31 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | -1 | findbugs | 318 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 238 | hadoop-hdds in the patch passed. |
   | -1 | unit | 138 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 5168 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  org.apache.hadoop.ozone.om.lock.OzoneManagerLock.lambda$new$0() 
invokes inefficient new Short(short) constructor; use Short.valueOf(short) 
instead  At OzoneManagerLock.java:constructor; use Short.valueOf(short) instead 
 At OzoneManagerLock.java:[line 62] |
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bf1587e66275 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 371452e |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/testReport/ |
   | Max. process+thread count | 492 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 265148)
Time Spent: 20m  (was: 10m)

> Create new OzoneManagerLock class
> 

[jira] [Commented] (HDFS-14074) DataNode runs async disk checks maybe throws NullPointerException, and DataNode failed to register to NameSpace.

2019-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870003#comment-16870003
 ] 

Hudson commented on HDFS-14074:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16808 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16808/])
HDFS-14074. DataNode runs async disk checks maybe throws (weichiu: rev 
645d67bc4f4e29d10ef810386c89e6a7c8c61862)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/ThrottledAsyncChecker.java


> DataNode runs async disk checks  maybe  throws NullPointerException, and 
> DataNode failed to register to NameSpace.
> --
>
> Key: HDFS-14074
> URL: https://issues.apache.org/jira/browse/HDFS-14074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0, 3.0.0
> Environment: hadoop-2.7.3, hadoop-2.8.0
>Reporter: guangyi lu
>Assignee: guangyi lu
>Priority: Major
>  Labels: HDFS, HDFS-4
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14074-latest.patch, HDFS-14074.patch, 
> WechatIMG83.jpeg
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In the ThrottledAsyncChecker class, the completedChecks member is a 
> WeakHashMap, defined as follows:
>       this.completedChecks = new WeakHashMap<>();
> and one of its uses, in the schedule method, is as follows:
>      if (completedChecks.containsKey(target)) {
>        // garbage collection may happen here, and result may be null
>        final LastCheckResult result = completedChecks.get(target);
>        final long msSinceLastCheck = timer.monotonicNow() - result.completedAt;
>      }
> After completedChecks.containsKey(target) returns true, garbage collection 
> may happen, and the subsequent get(target) may return null.
> The solution is:
> this.completedChecks = new ReferenceMap(1, 1);
> or
> this.completedChecks = new HashMap<>();
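
Independent of which map type is chosen, one way to close this race is a 
single lookup followed by a null check (a sketch, not necessarily the 
committed patch):

{code}
// Sketch, not necessarily the committed patch: a single get() plus a null
// check avoids the containsKey()/get() race with the GC on a WeakHashMap.
final LastCheckResult result = completedChecks.get(target);
if (result != null) {
  final long msSinceLastCheck = timer.monotonicNow() - result.completedAt;
  // ... use the cached result ...
}
{code}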






[jira] [Commented] (HDFS-12487) FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do the callers

2019-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870002#comment-16870002
 ] 

Hudson commented on HDFS-12487:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16807 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16807/])
HDFS-12487. FsDatasetSpi.isValidBlock() lacks null pointer check inside 
(weichiu: rev 1524e2e6c52aba966cbbf1d8025ba165688ab9bb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java


> FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do 
> the callers
> --
>
> Key: HDFS-12487
> URL: https://issues.apache.org/jira/browse/HDFS-12487
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, diskbalancer
>Affects Versions: 3.0.0
> Environment: CentOS 6.8 x64
> CPU:4 core
> Memory:16GB
> Hadoop: Release 3.0.0-alpha4
>Reporter: liumi
>Assignee: liumi
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12487.002.patch, HDFS-12487.003.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> BlockIteratorImpl.nextBlock() looks for blocks in the source volume; if 
> there are no blocks any more, it returns null up to 
> DiskBalancer.getBlockToCopy(). However, DiskBalancer.getBlockToCopy() only 
> checks whether it is a valid block.
> When I looked into FsDatasetSpi.isValidBlock(), I found that it does not 
> check for a null pointer. In fact, we first need to check whether the block 
> is null, or an exception will occur.
> This bug is hard to find, because the DiskBalancer rarely copies all the 
> data of one volume to other volumes. Even when it does, by the time the bug 
> occurs, the copy process has already finished.
> However, when we try to copy all the data of two or more volumes to other 
> volumes in more than one step, the thread is shut down, which is caused by 
> the bug above.
> The bug can be fixed in two ways:
> 1) Before the call to FsDatasetSpi.isValidBlock(), check for the null pointer
> 2) Check for the null pointer inside the implementation of 
> FsDatasetSpi.isValidBlock()
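
A minimal sketch of option 1 in the caller, using hypothetical names (iter, 
dataset); per the commit above, the committed change was on the caller side, 
in DiskBalancer.java:

{code}
// Sketch of fix option 1 with hypothetical names (iter, dataset): null-check
// the block before calling isValidBlock(). Inside the block-selection loop:
ExtendedBlock block = iter.nextBlock();
if (block == null || !dataset.isValidBlock(block)) {
  continue;  // source volume exhausted, or block is not valid; skip it
}
{code}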






[jira] [Commented] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1687#comment-1687
 ] 

Hadoop QA commented on HDFS-14586:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-14586 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14586 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972321/HDFS-14586.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27035/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch
>
>
> When a trash checkpoint timeout arrives, trash first deletes the old 
> checkpoint folder and then creates a new checkpoint folder.
> Because the delete action may take a long time, such as 2 minutes, the new 
> checkpoint folder is created late.
> At the next checkpoint timeout, trash skips deleting the new checkpoint 
> folder, because the new checkpoint folder is younger than one checkpoint 
> interval.






[jira] [Updated] (HDFS-14074) DataNode runs async disk checks maybe throws NullPointerException, and DataNode failed to register to NameSpace.

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14074:
---
  Resolution: Fixed
   Fix Version/s: (was: 3.0.0)
  (was: 2.7.3)
  (was: 2.8.0)
  3.1.3
  3.2.1
  3.3.0
Target Version/s: 2.7.3, 2.8.0  (was: 2.8.0, 2.7.3)
  Status: Resolved  (was: Patch Available)

+1
[~luguangyi] I added you to the Hadoop contributor list and assigned the JIRA 
to you.
Pushed the last patch to trunk, branch-3.2, and branch-3.1.

Thanks [~luguangyi] for the patch and [~arp] for the review!

> DataNode runs async disk checks  maybe  throws NullPointerException, and 
> DataNode failed to register to NameSpace.
> --
>
> Key: HDFS-14074
> URL: https://issues.apache.org/jira/browse/HDFS-14074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0, 3.0.0
> Environment: hadoop-2.7.3, hadoop-2.8.0
>Reporter: guangyi lu
>Assignee: guangyi lu
>Priority: Major
>  Labels: HDFS, HDFS-4
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14074-latest.patch, HDFS-14074.patch, 
> WechatIMG83.jpeg
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In the ThrottledAsyncChecker class, the completedChecks member is a 
> WeakHashMap, defined as follows:
>       this.completedChecks = new WeakHashMap<>();
> and one of its uses, in the schedule method, is as follows:
>      if (completedChecks.containsKey(target)) {
>        // garbage collection may happen here, and result may be null
>        final LastCheckResult result = completedChecks.get(target);
>        final long msSinceLastCheck = timer.monotonicNow() - result.completedAt;
>      }
> After completedChecks.containsKey(target) returns true, garbage collection 
> may happen, and the subsequent get(target) may return null.
> The solution is:
> this.completedChecks = new ReferenceMap(1, 1);
> or
> this.completedChecks = new HashMap<>();






[jira] [Assigned] (HDFS-14074) DataNode runs async disk checks maybe throws NullPointerException, and DataNode failed to register to NameSpace.

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14074:
--

Assignee: guangyi lu

> DataNode runs async disk checks  maybe  throws NullPointerException, and 
> DataNode failed to register to NameSpace.
> --
>
> Key: HDFS-14074
> URL: https://issues.apache.org/jira/browse/HDFS-14074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0, 3.0.0
> Environment: hadoop-2.7.3, hadoop-2.8.0
>Reporter: guangyi lu
>Assignee: guangyi lu
>Priority: Major
>  Labels: HDFS, HDFS-4
> Fix For: 2.8.0, 2.7.3, 3.0.0
>
> Attachments: HDFS-14074-latest.patch, HDFS-14074.patch, 
> WechatIMG83.jpeg
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> In the ThrottledAsyncChecker class, the completedChecks member is a 
> WeakHashMap, defined as follows:
>       this.completedChecks = new WeakHashMap<>();
> and one of its uses, in the schedule method, is as follows:
>      if (completedChecks.containsKey(target)) {
>        // garbage collection may happen here, and result may be null
>        final LastCheckResult result = completedChecks.get(target);
>        final long msSinceLastCheck = timer.monotonicNow() - result.completedAt;
>      }
> After completedChecks.containsKey(target) returns true, garbage collection 
> may happen, and the subsequent get(target) may return null.
> The solution is:
> this.completedChecks = new ReferenceMap(1, 1);
> or
> this.completedChecks = new HashMap<>();






[jira] [Updated] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14586:
---
Status: Patch Available  (was: Open)

[~huyongfa] thanks for reporting the issue. I added you to the Hadoop 
contributor list and assigned the JIRA to you.

Submitted the patch for precommit.

> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch
>
>
> When a trash checkpoint timeout arrives, trash first deletes the old 
> checkpoint folder and then creates a new checkpoint folder.
> Because the delete action may take a long time, such as 2 minutes, the new 
> checkpoint folder is created late.
> At the next checkpoint timeout, trash skips deleting the new checkpoint 
> folder, because the new checkpoint folder is younger than one checkpoint 
> interval.






[jira] [Assigned] (HDFS-14586) Trash missing delete the folder which near timeout checkpoint

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14586:
--

Assignee: hu yongfa

> Trash missing delete the folder which near timeout checkpoint
> -
>
> Key: HDFS-14586
> URL: https://issues.apache.org/jira/browse/HDFS-14586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hu yongfa
>Assignee: hu yongfa
>Priority: Major
> Attachments: HDFS-14586.001.patch
>
>
> When a trash checkpoint timeout arrives, trash first deletes the old 
> checkpoint folder and then creates a new checkpoint folder.
> Because the delete action may take a long time, such as 2 minutes, the new 
> checkpoint folder is created late.
> At the next checkpoint timeout, trash skips deleting the new checkpoint 
> folder, because the new checkpoint folder is younger than one checkpoint 
> interval.






[jira] [Updated] (HDFS-12487) FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do the callers

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12487:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-3.2, and branch-3.1.
Thanks [~liumihust] for the contribution and [~anu] for the review!

> FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do 
> the callers
> --
>
> Key: HDFS-12487
> URL: https://issues.apache.org/jira/browse/HDFS-12487
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, diskbalancer
>Affects Versions: 3.0.0
> Environment: CentOS 6.8 x64
> CPU:4 core
> Memory:16GB
> Hadoop: Release 3.0.0-alpha4
>Reporter: liumi
>Assignee: liumi
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-12487.002.patch, HDFS-12487.003.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> BlockIteratorImpl.nextBlock() looks for blocks in the source volume; if 
> there are no blocks any more, it returns null up to 
> DiskBalancer.getBlockToCopy(). However, DiskBalancer.getBlockToCopy() only 
> checks whether it is a valid block.
> When I looked into FsDatasetSpi.isValidBlock(), I found that it does not 
> check for a null pointer. In fact, we first need to check whether the block 
> is null, or an exception will occur.
> This bug is hard to find, because the DiskBalancer rarely copies all the 
> data of one volume to other volumes. Even when it does, by the time the bug 
> occurs, the copy process has already finished.
> However, when we try to copy all the data of two or more volumes to other 
> volumes in more than one step, the thread is shut down, which is caused by 
> the bug above.
> The bug can be fixed in two ways:
> 1) Before the call to FsDatasetSpi.isValidBlock(), check for the null pointer
> 2) Check for the null pointer inside the implementation of 
> FsDatasetSpi.isValidBlock()






[jira] [Commented] (HDFS-14047) [libhdfs++] Fix hdfsGetLastExceptionRootCause bug in test_libhdfs_threaded.c

2019-06-21 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16869986#comment-16869986
 ] 

Chen Liang commented on HDFS-14047:
---

[~anatoli.shein], [~templedf] do you also plan to backport this fix to 
branch-3.2? It seems branch-3.2 builds are still hitting this error (e.g., the 
HDFS-14573 builds).

> [libhdfs++] Fix hdfsGetLastExceptionRootCause bug in test_libhdfs_threaded.c
> 
>
> Key: HDFS-14047
> URL: https://issues.apache.org/jira/browse/HDFS-14047
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: libhdfs, native
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14047.000.patch, HDFS-14047.001.patch
>
>
> Currently the native client CI tests break deterministically with these 
> errors:
> Libhdfs
> 1 - test_test_libhdfs_threaded_hdfs_static (Failed)
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> [exec] hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> [exec] (unable to get root cause for java.io.FileNotFoundException)
> [exec] (unable to get stack trace for java.io.FileNotFoundException)
>  
> Libhdfs++
> 34 - test_libhdfs_threaded_hdfspp_test_shim_static (Failed)
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:180
>  with NULL return return value (errno: 2): expected substring: File does not 
> exist
> [exec] TEST_ERROR: failed on 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c:336
>  with return code -1 (errno: 2): got nonzero from doTestHdfsOperations(ti, 
> fs, )
> [exec] hdfsOpenFile(/tlhData0001/file1): 
> FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;)
>  error:
> [exec] (unable to get root cause for java.io.FileNotFoundException)
> [exec] (unable to get stack trace for java.io.FileNotFoundException)






[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Description: 
This Jira is to use bit manipulation instead of a HashMap in the OzoneManager 
lock logic. It also follows the locking order described in the document 
attached to HDDS-1672.

This Jira was created based on [~anu]'s comment during the review of HDDS-1672:

Not a suggestion for this patch. But more of a question, should we just 
maintain a bitset here, and just flip that bit up and down to see if the lock 
is held. Or we can just maintain 32 bit integer, and we can easily find if a 
lock is held by Xoring with the correct mask. I feel that might be super 
efficient. [@nandakumar131|https://github.com/nandakumar131] . But as I said 
let us not do that in this patch.

 

This Jira will add the new class; integrating it into the code will be done in 
a new Jira.

Cleaning up the old code will also be done in a new Jira.

  was:
This Jira is to user bit manipulation, instead of hashmap in OzoneManager lock 
logic. And also this Jira follows the locking order based on the document 
attached to HDDS-1672 jira.

This Jira is created based on [~anu] comment during review of HDDS-1672.

Not a suggestion for this patch. But more of a question, should we just 
maintain a bitset here, and just flip that bit up and down to see if the lock 
is held. Or we can just maintain 32 bit integer, and we can easily find if a 
lock is held by Xoring with the correct mask. I feel that might be super 
efficient. [@nandakumar131|https://github.com/nandakumar131] . But as I said 
let us not do that in this patch.

 

This Jira will add new class, integration of this new class into code will be 
done in a new jira. 

Clean up of old code also will be done in new jira.


> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a HashMap in the 
> OzoneManager lock logic. It also follows the locking order described in the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of 
> HDDS-1672:
> Not a suggestion for this patch. But more of a question, should we just 
> maintain a bitset here, and just flip that bit up and down to see if the lock 
> is held. Or we can just maintain 32 bit integer, and we can easily find if a 
> lock is held by Xoring with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131] . But as I said 
> let us not do that in this patch.
>  
> This Jira will add new class, integration of this new class into code will be 
> done in a new jira. 
> Clean up of old code also will be done in new jira.
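
For illustration only, here is a minimal Java sketch of the mask-based idea 
described above (the class and method names are hypothetical, not the actual 
OzoneManagerLock API); testing a bit uses AND with its mask, while OR and 
AND-with-complement flip it up and down:

public final class LockMaskSketch {
  // One bit per lock/resource type; a short is enough for up to 16 types.
  private short lockSetBits = 0;

  boolean isLockHeld(int bitPosition) {
    short mask = (short) (1 << bitPosition);
    return (lockSetBits & mask) != 0; // the bit is set iff the lock is held
  }

  void lock(int bitPosition) {
    lockSetBits |= (short) (1 << bitPosition); // flip the bit up
  }

  void unlock(int bitPosition) {
    lockSetBits &= (short) ~(1 << bitPosition); // flip the bit down
  }
}

A java.util.BitSet offers the same set/clear/get operations; the plain short 
simply avoids the extra object and keeps the lock-order mask checks cheap.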



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Fix Version/s: (was: 0.4.1)

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Target Version/s: 0.5.0

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Component/s: Ozone Manager

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Fix Version/s: 0.4.1
   Status: Patch Available  (was: Open)

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Issue Type: Improvement  (was: Bug)

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?focusedWorklogId=265134=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265134
 ]

ASF GitHub Bot logged work on HDDS-1723:


Author: ASF GitHub Bot
Created on: 22/Jun/19 00:10
Start Date: 22/Jun/19 00:10
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1006: 
HDDS-1723. Create new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006
 
 
   Thank you @anuengineer for the offline discussion and help with the code 
using Short and bit manipulation instead of BitSet.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265134)
Time Spent: 10m
Remaining Estimate: 0h

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1723:
-
Labels: pull-request-available  (was: )

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the 
> document attached to HDDS-1672.
> This Jira was created based on [~anu]'s comment during the review of HDDS-1672:
> Not a suggestion for this patch, but more of a question: should we just 
> maintain a bitset here and just flip that bit up and down to see if the lock 
> is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
> lock is held by XORing with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of the new class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1723) Create new OzoneManagerLock class

2019-06-21 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1723:


 Summary: Create new OzoneManagerLock class
 Key: HDDS-1723
 URL: https://issues.apache.org/jira/browse/HDDS-1723
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
lock logic.

This Jira was created based on [~anu]'s comment during the review of HDDS-1672:

Not a suggestion for this patch, but more of a question: should we just 
maintain a bitset here and just flip that bit up and down to see if the lock 
is held? Or we can just maintain a 32-bit integer, and we can easily find if a 
lock is held by XORing with the correct mask. I feel that might be super 
efficient. [@nandakumar131|https://github.com/nandakumar131]. But as I said, 
let us not do that in this patch.

This Jira will add the new class; integration of the new class into the code 
will be done in a new Jira. 

Cleanup of the old code will also be done in a new Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869961#comment-16869961
 ] 

Wei-Chiu Chuang commented on HDFS-14034:


This is similar to HDFS-8631, and I think we can move it under HDFS-8629 to 
manage all missing WebHDFS REST APIs better.

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}}, which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA is to track adding support for this API to 
> WebHDFS. 
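
As a usage sketch only (the NameNode URI and path are placeholders, assuming 
the FileSystem#getQuotaUsage API added by HDFS-8898): today the call below 
works against an hdfs:// URI via DistributedFileSystem, and the goal of this 
JIRA is for the same client code to work when the URI is webhdfs://.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;

public class QuotaUsageSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(
        URI.create("hdfs://namenode:8020"), new Configuration());
    QuotaUsage qu = fs.getQuotaUsage(new Path("/user/data"));
    // Unlike getContentSummary, this need not walk the whole subtree.
    System.out.println("names used: " + qu.getFileAndDirectoryCount()
        + " / quota " + qu.getQuota());
    System.out.println("space used: " + qu.getSpaceConsumed()
        + " / quota " + qu.getSpaceQuota());
  }
}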



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265106=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265106
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296375190
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -0,0 +1,375 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateKeyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handles CreateKey request.
+ */
+
+public class OMKeyCreateRequest extends OMClientRequest
+implements OMKeyRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyCreateRequest.class);
+
+  public OMKeyCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+CreateKeyRequest createKeyRequest = getOmRequest().getCreateKeyRequest();
+Preconditions.checkNotNull(createKeyRequest);
+
+KeyArgs keyArgs = createKeyRequest.getKeyArgs();
+
+// We cannot allocate block for multipart upload part when
+// createMultipartKey is called, as we will not know type and factor with
+// which initiateMultipartUpload has started for this key. When
+// allocateBlock call happens, we shall know type and factor, as we set
+// the type and factor read from multipart table, and set the KeyInfo in
+// validateAndUpdateCache and return to the client. TODO: See if we can fix
+//  this.
+
+CreateKeyRequest.Builder newCreateKeyRequest = null;
+KeyArgs.Builder newKeyArgs = null;
+if (!keyArgs.getIsMultipartKey()) {
+
+  long scmBlockSize = ozoneManager.getScmBlockSize();
+
+ 

[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265105=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265105
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296419490
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyRenameResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.RenameKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.RenameKeyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
+
+/**
+ * Handles rename key request.
+ */
+public class OMKeyRenameRequest extends OMClientRequest
+implements OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyRenameRequest.class);
+
+  public OMKeyRenameRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+RenameKeyRequest renameKeyRequest = getOmRequest().getRenameKeyRequest();
+Preconditions.checkNotNull(renameKeyRequest);
+
+// Set modification time.
+KeyArgs.Builder newKeyArgs = renameKeyRequest.getKeyArgs().toBuilder()
+.setModificationTime(Time.now());
+
+return getOmRequest().toBuilder()
+.setRenameKeyRequest(renameKeyRequest.toBuilder()
+.setKeyArgs(newKeyArgs)).setUserInfo(getUserInfo()).build();
+
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+RenameKeyRequest renameKeyRequest = getOmRequest().getRenameKeyRequest();
+Preconditions.checkNotNull(renameKeyRequest);
+
+OzoneManagerProtocolProtos.KeyArgs renameKeyArgs =
+renameKeyRequest.getKeyArgs();
+
+String volumeName = renameKeyArgs.getVolumeName();
+String bucketName = renameKeyArgs.getBucketName();
+String fromKeyName = renameKeyArgs.getKeyName();
+String toKeyName = renameKeyRequest.getToKeyName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumKeyRenames();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(renameKeyArgs);
+
+OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+

[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265102=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265102
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296368626
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -0,0 +1,375 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateKeyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handles CreateKey request.
+ */
+
+public class OMKeyCreateRequest extends OMClientRequest
+implements OMKeyRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyCreateRequest.class);
+
+  public OMKeyCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+CreateKeyRequest createKeyRequest = getOmRequest().getCreateKeyRequest();
+Preconditions.checkNotNull(createKeyRequest);
+
+KeyArgs keyArgs = createKeyRequest.getKeyArgs();
+
+// We cannot allocate block for multipart upload part when
+// createMultipartKey is called, as we will not know type and factor with
+// which initiateMultipartUpload has started for this key. When
+// allocateBlock call happens, we shall know type and factor, as we set
+// the type and factor read from multipart table, and set the KeyInfo in
+// validateAndUpdateCache and return to the client. TODO: See if we can fix
+//  this.
+
 
 Review comment:
   As discussed offline, can you please add "we do not call allocateBlock in 
openKey for multipart upload."
 

This is 

[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265103=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265103
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296418613
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -0,0 +1,165 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteKeyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.KEY_NOT_FOUND;
+
+/**
+ * Handles DeleteKey request.
+ */
+public class OMKeyDeleteRequest extends OMClientRequest
+implements OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyCommitRequest.class);
+
 
 Review comment:
   OMKeyDeleteRequest.class
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265103)
Time Spent: 4h 10m  (was: 4h)

> Implement Key Write Requests to use Cache and DoubleBuffer
> --
>
> Key: HDDS-1638
> URL: https://issues.apache.org/jira/browse/HDDS-1638
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Implement key write requests to use the OM cache and double buffer. 
> This Jira will add the changes to implement key operations; HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.
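
As a rough, hedged illustration only (the types and names below are simplified 
stand-ins, not the actual Ozone classes), the cache-plus-double-buffer write 
path works like this: the request thread applies its result to an in-memory 
table cache and enqueues it, while a flush thread swaps buffers and writes the 
filled one to the backing store as a single batch.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class DoubleBufferSketch {
  private final Map<String, String> keyTableCache = new HashMap<>();
  private Queue<Map.Entry<String, String>> currentBuffer = new ArrayDeque<>();
  private final Map<String, String> backingStore = new HashMap<>(); // stand-in for RocksDB

  // Request path: update the cache and buffer the entry; no disk I/O here.
  public synchronized void validateAndUpdateCache(String key, String value) {
    keyTableCache.put(key, value);
    currentBuffer.add(Map.entry(key, value));
  }

  // Flush path: swap buffers so requests keep filling the fresh one, then
  // write the ready buffer to the store in one batch.
  public void flush() {
    Queue<Map.Entry<String, String>> ready;
    synchronized (this) {
      ready = currentBuffer;
      currentBuffer = new ArrayDeque<>();
    }
    for (Map.Entry<String, String> e : ready) {
      backingStore.put(e.getKey(), e.getValue());
    }
  }
}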



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265107=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265107
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296418990
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
 ##
 @@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension
+.EncryptedKeyVersion;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.AllocatedBlock;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.ipc.Server;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ScmClient;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.BucketEncryptionKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.security.OzoneBlockTokenSecretManager;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.GeneralSecurityException;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.EnumSet;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.BUCKET_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.VOLUME_NOT_FOUND;
+import static org.apache.hadoop.util.Time.monotonicNow;
+
+/**
+ * Interface for key write requests.
+ */
+public interface OMKeyRequest {
+
+  Logger LOG = LoggerFactory.getLogger(OMKeyRequest.class);
+
+  /**
+   * This method avoids multiple RPC calls to SCM by allocating multiple blocks
+   * in one rpc call.
+   * @throws IOException
+   */
+  @SuppressWarnings("parameternumber")
+  default List<OmKeyLocationInfo> allocateBlock(ScmClient scmClient,
+  OzoneBlockTokenSecretManager secretManager,
+  HddsProtos.ReplicationType replicationType,
+  HddsProtos.ReplicationFactor replicationFactor,
+  ExcludeList excludeList, long requestedSize, long scmBlockSize,
+  int preallocateBlocksMax, boolean grpcBlockTokenEnabled, String omID)
+  throws IOException {
+
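+// i.e. numBlocks = ceil(requestedSize / scmBlockSize), capped at
+// preallocateBlocksMax blocks per request: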
+int numBlocks = Math.min((int) ((requestedSize - 1) / scmBlockSize + 1),
+preallocateBlocksMax);
+
+List<OmKeyLocationInfo> locationInfos = new ArrayList<>(numBlocks);
+String remoteUser = getRemoteUser().getShortUserName();
+List<AllocatedBlock> allocatedBlocks;
+try {
+  allocatedBlocks = scmClient.getBlockClient()
+  .allocateBlock(scmBlockSize, numBlocks, replicationType,
+  replicationFactor, omID, excludeList);
+} catch (SCMException ex) {
+  if (ex.getResult()
+  .equals(SCMException.ResultCodes.SAFE_MODE_EXCEPTION)) {
+throw new OMException(ex.getMessage(),
+OMException.ResultCodes.SCM_IN_SAFE_MODE);
+  }
+  throw ex;
+}
+for (AllocatedBlock allocatedBlock : allocatedBlocks) {
+  OmKeyLocationInfo.Builder builder = new OmKeyLocationInfo.Builder()
+  .setBlockID(new BlockID(allocatedBlock.getBlockID()))
+  

[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=265104=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265104
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 23:37
Start Date: 21/Jun/19 23:37
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296420349
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
 ##
 @@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import java.io.IOException;
+
+/**
+ * Response for AllocateBlock request.
+ */
+public class OMAllocateBlockResponse extends OMClientResponse {
+
+  private final OmKeyInfo omKeyInfo;
+  private final long clientID;
+
+  public OMAllocateBlockResponse(OmKeyInfo omKeyInfo,
+  long clientID, OMResponse omResponse) {
+super(omResponse);
+this.omKeyInfo = omKeyInfo;
+this.clientID = clientID;
+  }
+
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation) throws IOException {
+
+// For OmResponse with failure, this should do nothing. This method is
+// not called in failure scenario in OM code.
+if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
 
 Review comment:
   This check is performed in all OMClientResponse classes. We can move this 
check to a calling function.
   Again, this can be done in a different Jira if that's preferred.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265104)
Time Spent: 4h 20m  (was: 4h 10m)

> Implement Key Write Requests to use Cache and DoubleBuffer
> --
>
> Key: HDDS-1638
> URL: https://issues.apache.org/jira/browse/HDDS-1638
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Implement key write requests to use the OM cache and double buffer. 
> This Jira will add the changes to implement key operations; HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-06-21 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12345:
---
Attachment: HDFS-12345.009.patch

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch, HDFS-12345.003.patch, HDFS-12345.004.patch, 
> HDFS-12345.005.patch, HDFS-12345.006.patch, HDFS-12345.007.patch, 
> HDFS-12345.008.patch, HDFS-12345.009.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-06-21 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869951#comment-16869951
 ] 

Erik Krogen commented on HDFS-12345:


Attached v009 fixing a few of the last remaining checkstyle / shellcheck 
warnings.

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch, HDFS-12345.003.patch, HDFS-12345.004.patch, 
> HDFS-12345.005.patch, HDFS-12345.006.patch, HDFS-12345.007.patch, 
> HDFS-12345.008.patch, HDFS-12345.009.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869947#comment-16869947
 ] 

Hadoop QA commented on HDFS-12345:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-tools hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 17s{color} | {color:orange} root: The patch generated 19 new + 0 unchanged - 
0 fixed = 19 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
3s{color} | {color:red} The patch generated 5 new + 1 unchanged - 0 fixed = 6 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
19s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 6 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
21s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-tools/hadoop-dynamometer hadoop-tools hadoop-dist . 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m  6s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}279m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed 

[jira] [Commented] (HDFS-14573) Backport Standby Read to branch-3

2019-06-21 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869945#comment-16869945
 ] 

Chen Liang commented on HDFS-14573:
---

Looks like the CTEST failure has been fixed by HDFS-14047, which hasn't been 
backported to older 3.x branches. 

> Backport Standby Read to branch-3
> -
>
> Key: HDFS-14573
> URL: https://issues.apache.org/jira/browse/HDFS-14573
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14573-branch-3.0.001.patch, 
> HDFS-14573-branch-3.1.001.patch, HDFS-14573-branch-3.2.001.patch, 
> HDFS-14573-branch-3.2.002.patch, HDFS-14573-branch-3.2.003.patch, 
> HDFS-14573-branch-3.2.004.patch
>
>
> This Jira tracks backporting the feature consistent read from standby 
> (HDFS-12943) to branch-3.x, including 3.0, 3.1, 3.2. This is required for 
> backporting to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14290) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869946#comment-16869946
 ] 

Wei-Chiu Chuang commented on HDFS-14290:


It's very difficult for me to verify this patch.
Is there any way I can reproduce this issue? For example, a specific DataNode 
WebHDFS URL that I can connect to that triggers this error? It doesn't 
need to be a UT.

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-14290
> URL: https://issues.apache.org/jira/browse/HDFS-14290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14290.000.patch, webhdfs show.png
>
>
> The issue is that there is no HttpRequestDecoder in the Netty inbound handler 
> pipeline, so an unexpected message type appears when the message is read.
>   
> !webhdfs show.png!   
> DEBUG org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Proxy 
> failed. Cause: 
>  com.xiaomi.infra.thirdparty.io.netty.handler.codec.EncoderException: 
> java.lang.IllegalStateException: unexpected message type: 
> PooledUnsafeDirectByteBuf
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:106)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.CombinedChannelDuplexHandler.write(CombinedChannelDuplexHandler.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:816)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.doFlush(ChunkedWriteHandler.java:304)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:137)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:831)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1051)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:300)
>  at 
> org.apache.hadoop.hdfs.server.datanode.web.SimpleHttpProxyHandler$Forwarder.channelRead(SimpleHttpProxyHandler.java:80)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
>  at 
> com.xiaomi.infra.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  at 
> 
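
A hedged illustration of the direction the description points at: installing an 
HTTP decoder ahead of the proxy's inbound handlers. This assumes stock Netty 4.x 
(the shaded com.xiaomi packages above repackage the same classes); the 
initializer name and pipeline layout are illustrative, not the actual 
DatanodeHttpServer wiring.

{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpRequestDecoder;
import io.netty.handler.codec.http.HttpResponseEncoder;

// Hypothetical initializer: without an HttpRequestDecoder in the inbound
// pipeline, raw ByteBufs (e.g. PooledUnsafeDirectByteBuf) reach handlers
// that expect decoded HttpRequest messages, producing the error above.
public class ProxyChannelInitializer extends ChannelInitializer<SocketChannel> {
  @Override
  protected void initChannel(SocketChannel ch) {
    ch.pipeline().addLast(new HttpRequestDecoder());   // bytes -> HttpRequest
    ch.pipeline().addLast(new HttpResponseEncoder());  // HttpResponse -> bytes
    // ... handlers that consume decoded HTTP messages go here ...
  }
}
{code}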

[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869944#comment-16869944
 ] 

Hadoop QA commented on HDDS-1554:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
1s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 30 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
18s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 2 new + 4 unchanged - 0 fixed = 6 
total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
15s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
8s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 17s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | 

[jira] [Work logged] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?focusedWorklogId=265050=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265050
 ]

ASF GitHub Bot logged work on HDDS-1719:


Author: ASF GitHub Bot
Created on: 21/Jun/19 22:51
Start Date: 21/Jun/19 22:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1005: HDDS-1719 : 
Increase ratis log segment size to 1MB.
URL: https://github.com/apache/hadoop/pull/1005#issuecomment-504597124
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 735 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 928 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 331 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 539 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 468 | the patch passed |
   | +1 | compile | 260 | the patch passed |
   | +1 | javac | 260 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 699 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 524 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 173 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1593 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6938 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1005 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 9303dc1db644 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8194a11 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/2/testReport/ |
   | Max. process+thread count | 5408 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265050)
Time Spent: 40m  (was: 0.5h)

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: 

[jira] [Work logged] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?focusedWorklogId=265040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265040
 ]

ASF GitHub Bot logged work on HDDS-1719:


Author: ASF GitHub Bot
Created on: 21/Jun/19 22:35
Start Date: 21/Jun/19 22:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1005: HDDS-1719 : 
Increase ratis log segment size to 1MB.
URL: https://github.com/apache/hadoop/pull/1005#issuecomment-504594014
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 472 | trunk passed |
   | +1 | compile | 264 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 302 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 494 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 423 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 620 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 260 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1196 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6031 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1005 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 498ebd5ab9f3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8194a11 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/1/testReport/ |
   | Max. process+thread count | 4998 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1005/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265040)
Time Spent: 0.5h  (was: 20m)

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: https://issues.apache.org/jira/browse/HDDS-1719
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  

[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14595:
---
Affects Version/s: (was: 3.2.1)
   3.2.0

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1, 3.2.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480) and mark it 
> deprecated, as sketched below.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.
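
A minimal sketch of suggestion (1): restore the zero-argument method on 
DistributedFileSystem as a deprecated delegate. The names follow 
HDFS-10480/HDFS-11848 from memory and may not match the final patch.

{code:java}
/**
 * Sketch only: re-add the pre-HDFS-11848 API as a deprecated shim so
 * existing callers keep compiling and linking.
 * @deprecated use {@link #listOpenFiles(EnumSet)} instead.
 */
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  // Delegate to the new API with the broadest filter so the old
  // "list all open files" behavior is preserved.
  return listOpenFiles(EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES));
}
{code}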



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14595:
---
Affects Version/s: 3.2.1
   3.3.0
 Target Version/s: 3.0.4, 3.3.0, 3.2.1, 3.1.3

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1, 3.3.0, 3.2.1
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480) and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1718?focusedWorklogId=265019=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-265019
 ]

ASF GitHub Bot logged work on HDDS-1718:


Author: ASF GitHub Bot
Created on: 21/Jun/19 21:59
Start Date: 21/Jun/19 21:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1004: HDDS-1718 : 
Increase Ratis Leader election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004#issuecomment-504585961
 
 
   Unit test failures seem related to the change.  I will look into them.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 265019)
Time Spent: 40m  (was: 0.5h)

> Increase Ratis Leader election timeout default to 10 seconds.
> -
>
> Key: HDDS-1718
> URL: https://issues.apache.org/jira/browse/HDDS-1718
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While testing out ozone with long-running clients that continuously write 
> data, it was noted that whenever a 1-second GC pause occurs in the leader, it 
> triggers a leader election, thereby disturbing the steady state of the system 
> for longer than the GC pause itself.
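
For illustration, a hedged sketch of overriding the election timeout in 
client/test code. The key dfs.ratis.leader.election.minimum.timeout.duration is 
quoted from memory of the HDDS config keys and should be verified against 
ozone-default.xml.

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class RatisElectionTimeoutExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Raise the Ratis leader election timeout so a ~1s GC pause on the
    // leader no longer triggers a spurious re-election.
    conf.set("dfs.ratis.leader.election.minimum.timeout.duration", "10s");
    System.out.println(
        conf.get("dfs.ratis.leader.election.minimum.timeout.duration"));
  }
}
{code}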



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869897#comment-16869897
 ] 

Eric Yang commented on HDDS-1554:
-

Patch 6 fixes a logic error that [~elek] pointed out in the wait for safe 
mode, and reduces the maximum number of retries to 3, because there are now 10 
retries built into ipc.Client.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-06-21 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.006.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869880#comment-16869880
 ] 

Hadoop QA commented on HDFS-12979:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 456 unchanged - 6 fixed = 456 total (was 462) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12979 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972468/HDFS-12979.013.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bcef4785e2aa 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8194a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27032/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27032/testReport/ |
| Max. process+thread count | 2863 (vs. ulimit of 1) |
| modules 

[jira] [Work logged] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1718?focusedWorklogId=264985=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264985
 ]

ASF GitHub Bot logged work on HDDS-1718:


Author: ASF GitHub Bot
Created on: 21/Jun/19 21:04
Start Date: 21/Jun/19 21:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1004: HDDS-1718 : 
Increase Ratis Leader election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004#issuecomment-504572324
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 504 | trunk passed |
   | +1 | compile | 246 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 906 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 314 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 431 | the patch passed |
   | +1 | compile | 250 | the patch passed |
   | +1 | javac | 250 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 535 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 170 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1356 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6324 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1004/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1004 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 740eab6f21af 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8194a11 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1004/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1004/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1004/1/testReport/ |
   | Max. process+thread count | 4887 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1004/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264985)
Time Spent: 0.5h  (was: 20m)

> Increase Ratis Leader election timeout default to 10 seconds.
> 

[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=264984=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264984
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 21:02
Start Date: 21/Jun/19 21:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296393441
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -0,0 +1,375 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateKeyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handles CreateKey request.
+ */
+
+public class OMKeyCreateRequest extends OMClientRequest
+    implements OMKeyRequest {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMKeyCreateRequest.class);
+
+  public OMKeyCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateKeyRequest createKeyRequest = getOmRequest().getCreateKeyRequest();
+    Preconditions.checkNotNull(createKeyRequest);
+
+    KeyArgs keyArgs = createKeyRequest.getKeyArgs();
+
+    // We cannot allocate block for multipart upload part when
+    // createMultipartKey is called, as we will not know type and factor with
+    // which initiateMultipartUpload has started for this key. When
+    // allocateBlock call happen's we shall know type and factor, as we set
+    // the type and factor read from multipart table, and set the KeyInfo in
+    // validateAndUpdateCache and return to the client. TODO: See if we can fix
+    //  this.
+
+    CreateKeyRequest.Builder newCreateKeyRequest = null;
+    KeyArgs.Builder newKeyArgs = null;
+    if (!keyArgs.getIsMultipartKey()) {
+
+      long scmBlockSize = ozoneManager.getScmBlockSize();
+

[jira] [Work started] (HDDS-1691) RDBTable#isExist should use Rocksdb#keyMayExist

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1691 started by Aravindan Vijayan.
---
> RDBTable#isExist should use Rocksdb#keyMayExist
> ---
>
> Key: HDDS-1691
> URL: https://issues.apache.org/jira/browse/HDDS-1691
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>
> RDBTable#isExist can use Rocksdb#keyMayExist; this avoids the cost of reading 
> the value for the key.
> Please refer, 
> https://github.com/facebook/rocksdb/blob/7a8d7358bb40b13a06c2c6adc62e80295d89ed05/java/src/main/java/org/rocksdb/RocksDB.java#L2184
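
A hedged sketch of what the change could look like. The db and 
columnFamilyHandle fields and the isExist signature are paraphrased from 
RDBTable rather than copied, and the calls assume the rocksdbjni Java API.

{code:java}
// Sketch: use keyMayExist() as a cheap negative filter before a real get().
public boolean isExist(byte[] key) throws RocksDBException {
  StringBuilder value = new StringBuilder();
  // keyMayExist() consults memtables/bloom filters only; a false return
  // guarantees the key is absent without reading the value from disk.
  if (!db.keyMayExist(columnFamilyHandle, key, value)) {
    return false;
  }
  // A true return can be a false positive, so confirm with a real lookup.
  return db.get(columnFamilyHandle, key) != null;
}
{code}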



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1719:

Status: Patch Available  (was: Open)

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: https://issues.apache.org/jira/browse/HDDS-1719
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While testing out ozone with long-running clients that continuously write 
> data, it was noted that ratis logs were rolled 1-2 times every second. This 
> adds unnecessary overhead to the pipeline, thereby affecting write throughput. 
> Increasing the log segment size to 1MB will decrease the overhead.
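
As an illustration, a hedged sketch of setting the segment size by hand. The 
key dfs.container.ratis.segment.size is quoted from memory of the HDDS config 
keys, so verify it against ozone-default.xml.

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class RatisSegmentSizeExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // A 1MB segment rolls far less often than the observed 1-2 rolls per
    // second, cutting the per-roll overhead on the write pipeline.
    conf.set("dfs.container.ratis.segment.size", "1MB");
    System.out.println(conf.get("dfs.container.ratis.segment.size"));
  }
}
{code}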



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2019-06-21 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869867#comment-16869867
 ] 

Anu Engineer commented on HDFS-13893:
-

[~jojochuang] Thanks for the commit. [~ljain] Thanks for the contribution. 
[~arp] Thank you for the reviews. 

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-13893.001.patch, HDFS-13893.002.patch, 
> HDFS-13893.003.patch
>
>
> {{Scenario:-}}
>  
>  1. Run the Disk Balancer commands, passing extra arguments:  
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid or extra 
> arguments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?focusedWorklogId=264977=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264977
 ]

ASF GitHub Bot logged work on HDDS-1719:


Author: ASF GitHub Bot
Created on: 21/Jun/19 20:54
Start Date: 21/Jun/19 20:54
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1005: HDDS-1719 
: Increase ratis log segment size to 1MB.
URL: https://github.com/apache/hadoop/pull/1005
 
 
   While testing out ozone with long-running clients that continuously write 
data, it was noted that ratis logs were rolled 1-2 times every second. This adds 
unnecessary overhead to the pipeline, thereby affecting write throughput. 
Increasing the log segment size to 1MB will decrease the overhead.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264977)
Time Spent: 10m
Remaining Estimate: 0h

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: https://issues.apache.org/jira/browse/HDDS-1719
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While testing out ozone with long-running clients that continuously write 
> data, it was noted that ratis logs were rolled 1-2 times every second. This 
> adds unnecessary overhead to the pipeline, thereby affecting write throughput. 
> Increasing the log segment size to 1MB will decrease the overhead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1719:
-
Labels: pull-request-available  (was: )

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: https://issues.apache.org/jira/browse/HDDS-1719
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>
> While testing out ozone with long-running clients that continuously write 
> data, it was noted that ratis logs were rolled 1-2 times every second. This 
> adds unnecessary overhead to the pipeline, thereby affecting write throughput. 
> Increasing the log segment size to 1MB will decrease the overhead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-06-21 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869859#comment-16869859
 ] 

Siyao Meng commented on HDFS-14564:
---

+1 on the latest PR.

> Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
> -
>
> Key: HDFS-14564
> URL: https://issues.apache.org/jira/browse/HDFS-14564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> Splitting this out from HDFS-14478
> The {{PositionedReadable#readFully}} APIs have existed for a while, but have 
> never been exposed via libhdfs.
> HDFS-3246 added a new interface called {{ByteBufferPositionedReadable}} that 
> provides a {{ByteBuffer}} version of {{PositionedReadable}}, but it does not 
> contain a {{readFully}} method.
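
To make the gap concrete, a minimal sketch of readFully semantics on top of the 
existing positional read(). This illustrates the contract only; it is not the 
API the PR actually adds.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.ByteBufferPositionedReadable;

final class ReadFullyUtil {
  // Loop the positional read() until the buffer is full; a single read()
  // may legitimately return fewer bytes than requested.
  static void readFully(ByteBufferPositionedReadable in, long position,
      ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      int nRead = in.read(position, buf);  // read() advances buf's position
      if (nRead < 0) {
        throw new EOFException("Reached EOF before the buffer was filled");
      }
      position += nRead;
    }
  }
}
{code}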



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-06-21 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HDFS-14595:
-

Assignee: Siyao Meng

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480) and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check for 
> each release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1716) Smoketest results are generated with an internal user

2019-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869855#comment-16869855
 ] 

Eric Yang commented on HDDS-1716:
-

[~elek] Thank you for the patch, but it doesn't fully work for me.

When trying the smoke test, the output location is reported as /tmp/smoketest, 
but the actual output location is OZONE_HOME/compose/ozone/result.

{code}
[eyang@localhost ozone-0.5.0-SNAPSHOT]$ ./compose/ozone/test.sh 
Removing network ozone_default
WARNING: Network ozone_default not found.
Creating network "ozone_default" with the default driver
Creating ozone_scm_1  ... done
Creating ozone_om_1   ... done
Creating ozone_datanode_1 ... done
Creating ozone_datanode_2 ... done
Creating ozone_datanode_3 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==
ozone-auditparser 
==
ozone-auditparser.Auditparser :: Smoketest ozone cluster startup  
==
Initiating freon to generate data | FAIL |
255 != 0
--
Testing audit parser  | PASS |
--
ozone-auditparser.Auditparser :: Smoketest ozone cluster startup  | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==
ozone-auditparser | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==
Output:  /tmp/smoketest/ozone/result/robot-ozone-ozone-auditparser-om.xml
==
ozone-basic :: Smoketest ozone cluster startup
==
Check webui static resources  | PASS |
--
Start freon testing   | FAIL |
255 != 0
--
ozone-basic :: Smoketest ozone cluster startup| FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==
Output:  /tmp/smoketest/ozone/result/robot-ozone-ozone-basic-scm.xml
{code}

The files are owned by the user who initiated the test run, but they are 
generated at a different location than reported. The files also do not honor 
umask: with umask set to 0027, the xml files remain world-readable.

Another concern is that Rebot is GPL 3.0 licensed software, and Apache 
projects cannot include GPL v3 software. Calling Rebot in the test script may 
bring trouble; I recommend avoiding it.

Is there a way to default test reports to 
${project.build.directory}/test-results, to avoid contaminating the 
ozone-0.5.0-SNAPSHOT directory with test debris?

> Smoketest results are generated with an internal user
> -
>
> Key: HDDS-1716
> URL: https://issues.apache.org/jira/browse/HDDS-1716
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [~eyang] reported the problem in HDDS-1609 that the smoketest results are 
> generated by a user (the user inside the docker container) which can be 
> different from the host user.
> There is a minimal risk that the test results can be deleted/corrupted by 
> other users if the current user is different from uid=1000.
> I opened this issue because [~eyang] told me during an offline discussion 
> that HDDS-1609 is a more complex issue and not only about the ownership of 
> the test results.
> I suggest handling the two problems in different ways. With this patch, the 
> permission of the test result files can be fixed easily.
> In HDDS-1609 we can discuss general security problems and try to find a 
> generic solution for them.
> Steps to reproduce _this_ problem:
>  # Use a user which is different from uid=1000
>  # Create a new ozone build 

[jira] [Commented] (HDFS-14594) Replace all Http(s)URLConnection

2019-06-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869856#comment-16869856
 ] 

Wei-Chiu Chuang commented on HDFS-14594:


[~daryn] Daryn, isn't this fixed by your change in HADOOP-15813?
[~Sebastien Barnoud] do you have a benchmark that shows how bad it is?

> Replace all Http(s)URLConnection
> 
>
> Key: HDFS-14594
> URL: https://issues.apache.org/jira/browse/HDFS-14594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.3
> Environment: HDP 2.6.5 and HDP 2.6.2
> HotSpot 8u192 and 8u92
> Linux Redhat 3.10.0-862.14.4.el7.x86_64
>Reporter: Sebastien Barnoud
>Priority: Major
>
> When authentication is activated there is no keep-alive on http(s) 
> connections.
> That's because the JDK Http(s)URLConnection explicitly closes the connection 
> after the HTTP 401 that negotiates the authentication.
> This leads to poor performance, especially when encryption is on.
> To see the issue, simply strace and compare the number of connections between 
> the hdfs implementation and curl:
> {code:java}
> $ strace -T -tt -f hdfs dfs -ls 
> swebhdfs://dtltstap009.fr.world.socgen:50470/user 2>&1 | grep 
> "sin_port=htons(50470)" 
> [pid 92879] 15:11:47.019865 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16) = -1 
> EINPROGRESS (Operation now in progress) <0.000157>
> [pid 92879] 15:11:47.182110 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16  ...>
> [pid 92879] 15:11:47.387073 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16) = -1 
> EINPROGRESS (Operation now in progress) <0.000167>
> [pid 92879] 15:11:47.429716 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16  ...>
> [pid 93116] 15:11:47.528073 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16) = -1 
> EINPROGRESS (Operation now in progress) <0.000110>
> [pid 93116] 15:11:47.566947 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50470), sin_addr=inet_addr("192.163.201.117")}, 16  ...>
> => 6 connect{code}
> {code:java}
> $ strace -T -tt -f curl --negotiate -u: -v 
> https://dtltstap009.fr.world.socgen:50470/webhdfs/v1/user/?op=GETFILESTATUS 
> 2>&1 | grep "sin_port=htons(50470)" 
> 15:10:53.671358 connect(3, {sa_family=AF_INET, sin_port=htons(50470), 
> sin_addr=inet_addr("192.163.201.117")}, 16) = -1 EINPROGRESS (Operation now 
> in progress) <0.000118>
> 15:10:53.683513 getpeername(3, {sa_family=AF_INET, sin_port=htons(50470), 
> sin_addr=inet_addr("192.163.201.117")}, [16]) = 0 <0.09>
> 15:10:53.869482 getpeername(3, {sa_family=AF_INET, sin_port=htons(50470), 
> sin_addr=inet_addr("192.163.201.117")}, [16]) = 0 <0.09>
> 15:10:53.869576 getpeername(3, {sa_family=AF_INET, sin_port=htons(50470), 
> sin_addr=inet_addr("192.163.201.117")}, [16]) = 0 <0.08>
> [bash-4.2.46][j:0|h:4961|?:0][2019-06-21 
> 15:10:53][dtlprd05@nazare:~/test-hdfs]
> => only one connect{code}
>  
> In addition, even without encryption, too many connections are used:
> {code:java}
> $ strace -T -tt -f hdfs dfs -ls 
> webhdfs://dtltstap009.fr.world.socgen:50070/user 2>&1 | grep 
> "sin_port=htons(50070)" 
> [pid 99569] 15:13:13.838257 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50070), sin_addr=inet_addr("192.163.201.117")}, 16) = -1 
> EINPROGRESS (Operation now in progress) <0.000119>
> [pid 99569] 15:13:13.904255 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50070), sin_addr=inet_addr("192.163.201.117")}, 16 <unfinished ...>
> [pid 99635] 15:13:14.201236 connect(386, {sa_family=AF_INET, 
> sin_port=htons(50070), sin_addr=inet_addr("192.163.201.117")}, 16 <unfinished ...>
> => 3 connect{code}
>  
> Looking in the JDK code, 
> https://github.com/openjdk/jdk/blob/jdk8-b120/jdk/src/share/classes/sun/net/www/protocol/http/HttpURLConnection.java
> {code:java}
> serverAuthentication = getServerAuthentication(srvHdr);
> currentServerCredentials = serverAuthentication;
> if (serverAuthentication != null) {
> disconnectWeb();
> redirects++; // don't let things loop ad nauseum
> setCookieHeader();
> continue;
> }{code}
> disconnectWeb() will close the connection (no keep-alive reuse).
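> A minimal, self-contained sketch of the effect (using Basic auth instead of 
> SPNEGO for simplicity; the 401-then-retry path shown above is the same). The 
> server counts distinct client ports, one per TCP connection, so a single 
> request should report 2 connections where a keep-alive client would use 1:
> {code:java}
> import com.sun.net.httpserver.HttpServer;
> import java.io.InputStream;
> import java.net.Authenticator;
> import java.net.HttpURLConnection;
> import java.net.InetSocketAddress;
> import java.net.PasswordAuthentication;
> import java.net.URL;
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
>
> public class KeepAliveDemo {
>   public static void main(String[] args) throws Exception {
>     Set<Integer> ports = ConcurrentHashMap.newKeySet();
>     HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
>     server.createContext("/", ex -> {
>       // each distinct client port is a separate TCP connection
>       ports.add(ex.getRemoteAddress().getPort());
>       if (ex.getRequestHeaders().getFirst("Authorization") == null) {
>         ex.getResponseHeaders().add("WWW-Authenticate", "Basic realm=\"r\"");
>         ex.sendResponseHeaders(401, -1);
>       } else {
>         ex.sendResponseHeaders(200, 2);
>         ex.getResponseBody().write("ok".getBytes());
>       }
>       ex.close();
>     });
>     server.start();
>     Authenticator.setDefault(new Authenticator() {
>       protected PasswordAuthentication getPasswordAuthentication() {
>         return new PasswordAuthentication("u", "p".toCharArray());
>       }
>     });
>     URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/");
>     HttpURLConnection c = (HttpURLConnection) url.openConnection();
>     InputStream in = c.getInputStream(); // the JDK retries internally after the 401
>     while (in.read() != -1) { }
>     in.close();
>     System.out.println("TCP connections used: " + ports.size());
>     server.stop(0);
>   }
> }
> {code}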
> Finally, we have some unexplained webhdfs commands that are stuck in 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375):
> -) for hdfs dfs commands with the swebhdfs scheme
> -) for some TEZ jobs using the same implementation for the shuffle service 
> when encryption is on
> All other services (typically RPC) are working fine on the cluster.
> It really seems that Http(s)URLConnection causes some issues that Netty or 
> HttpClient don't have.
> 

[jira] [Created] (HDDS-1722) Add --init option in Ozone Recon to setup sqlite DB for creating its aggregate tables.

2019-06-21 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1722:
---

 Summary: Add --init option in Ozone Recon to setup sqlite DB for 
creating its aggregate tables.
 Key: HDDS-1722
 URL: https://issues.apache.org/jira/browse/HDDS-1722
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Recon
Reporter: Aravindan Vijayan
 Fix For: 0.4.1


cc [~vivekratnavel], [~swagle]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1638) Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1638?focusedWorklogId=264952=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264952
 ]

ASF GitHub Bot logged work on HDDS-1638:


Author: ASF GitHub Bot
Created on: 21/Jun/19 20:16
Start Date: 21/Jun/19 20:16
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #956: 
HDDS-1638.  Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296069181
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CommitKeyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CommitKeyResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
+
+/**
+ * Handles CommitKey request.
+ */
+public class OMKeyCommitRequest extends OMClientRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMKeyCommitRequest.class);
+
+  public OMKeyCommitRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CommitKeyRequest commitKeyRequest = getOmRequest().getCommitKeyRequest();
+    Preconditions.checkNotNull(commitKeyRequest);
+
+    KeyArgs keyArgs = commitKeyRequest.getKeyArgs();
+
+    KeyArgs.Builder newKeyArgs =
+        keyArgs.toBuilder().setModificationTime(Time.now());
+
+    return getOmRequest().toBuilder()
+        .setCommitKeyRequest(commitKeyRequest.toBuilder()
+        .setKeyArgs(newKeyArgs)).setUserInfo(getUserInfo()).build();
+
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long transactionLogIndex) {
+
+    CommitKeyRequest commitKeyRequest = getOmRequest().getCommitKeyRequest();
+    Preconditions.checkNotNull(commitKeyRequest);
+
+    KeyArgs commitKeyArgs = commitKeyRequest.getKeyArgs();
+
+    String volumeName = commitKeyArgs.getVolumeName();
+    String bucketName = commitKeyArgs.getBucketName();
+    String keyName = commitKeyArgs.getKeyName();
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumKeyCommits();
+
+    AuditLogger auditLogger = ozoneManager.getAuditLogger();
+
+    Map<String, String> auditMap = buildKeyArgsAuditMap(commitKeyArgs);
+
+

[jira] [Work logged] (HDDS-1710) Publish JVM metrics via Hadoop metrics

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1710?focusedWorklogId=264945=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264945
 ]

ASF GitHub Bot logged work on HDDS-1710:


Author: ASF GitHub Bot
Created on: 21/Jun/19 20:05
Start Date: 21/Jun/19 20:05
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #994: HDDS-1710. Publish 
JVM metrics via Hadoop metrics
URL: https://github.com/apache/hadoop/pull/994#issuecomment-504556609
 
 
   @elek Can this be done for the HDDS client as well? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264945)
Time Spent: 0.5h  (was: 20m)

> Publish JVM metrics via Hadoop metrics
> --
>
> Key: HDDS-1710
> URL: https://issues.apache.org/jira/browse/HDDS-1710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, Ozone Datanode, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In ozone metrics can be published with the help of hadoop metrics (for 
> example via PrometheusMetricsSink)
> The basic jvm metrics are not published by the metrics system (just with JMX)
> I am very interested about the basic JVM metrics (gc count, heap memory 
> usage) to identify possible problems in the test environment.
> Fortunately it's very easy to turn it on with the help of 
> org.apache.hadoop.metrics2.source.JvmMetrics.
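> For reference, a minimal sketch of turning this on (the process name and the 
> empty session id are illustrative):
> {code:java}
> import org.apache.hadoop.metrics2.MetricsSystem;
> import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
> import org.apache.hadoop.metrics2.source.JvmMetrics;
>
> MetricsSystem ms = DefaultMetricsSystem.initialize("OzoneManager");
> // registers gc counts, heap usage, thread counts, etc. as a metrics source
> JvmMetrics.create("OzoneManager", "", ms);
> {code}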



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=264941=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264941
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 21/Jun/19 20:02
Start Date: 21/Jun/19 20:02
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #987: HDDS-1685. 
Recon: Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#discussion_r296376203
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
 ##
 @@ -227,20 +227,38 @@ public void testGetKeysForContainer() {
 assertEquals(blockIds.get(0L).iterator().next().getLocalID(), 103);
 assertEquals(blockIds.get(1L).iterator().next().getLocalID(), 104);
 
-response = containerKeyService.getKeysForContainer(3L, -1);
+response = containerKeyService.getKeysForContainer(3L, -1, "");
 keyMetadataList = (Collection<KeyMetadata>) response.getEntity();
 assertTrue(keyMetadataList.isEmpty());
 
 // test if limit works as expected
-response = containerKeyService.getKeysForContainer(1L, 1);
+response = containerKeyService.getKeysForContainer(1L, 1, "");
 keyMetadataList = (Collection<KeyMetadata>) response.getEntity();
 assertEquals(keyMetadataList.size(), 1);
+
+// test if start param works as expected
+response = containerKeyService.getKeysForContainer(
+1L, -1, "/sampleVol/bucketOne/key_one");
 
 Review comment:
   Are we testing all negative cases - limit not being specified, start key 
prefix not found? Maybe we can split this UT into multiple ones. 
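   For illustration, one of the split-out negative cases could look like this 
(names are taken from the diff above; the empty-result expectation for an 
unknown start prefix is an assumption):
   {code:java}
   @Test
   public void testGetKeysForContainerWithUnknownStartPrefix() {
     // an unknown start key prefix should yield an empty page, not an error
     Response response = containerKeyService.getKeysForContainer(
         1L, -1, "/sampleVol/bucketOne/no_such_key");
     Collection<KeyMetadata> keyMetadataList =
         (Collection<KeyMetadata>) response.getEntity();
     assertTrue(keyMetadataList.isEmpty());
   }
   {code}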
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264941)
Time Spent: 1h 40m  (was: 1.5h)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=264939=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264939
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 21/Jun/19 19:59
Start Date: 21/Jun/19 19:59
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #987: HDDS-1685. Recon: 
Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-504554657
 
 
   @vivekratnavel Can we make sure we have a unit test for every corresponding 
method in 
org.apache.hadoop.ozone.recon.spi.impl.TestContainerDBServiceProviderImpl?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264939)
Time Spent: 1.5h  (was: 1h 20m)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14403) Cost-Based RPC FairCallQueue

2019-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869830#comment-16869830
 ] 

Hadoop QA commented on HDFS-14403:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 123 unchanged - 7 fixed = 124 total (was 130) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972465/HDFS-14403.012.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 248a00423dab 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8194a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27031/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27031/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27031/console |
| Powered by | Apache Yetus 0.8.0  

[jira] [Work logged] (HDDS-1258) Fix error propagation for SCM protocol

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1258?focusedWorklogId=264919=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264919
 ]

ASF GitHub Bot logged work on HDDS-1258:


Author: ASF GitHub Bot
Created on: 21/Jun/19 19:22
Start Date: 21/Jun/19 19:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1001: HDDS-1258 - Fix 
error propagation for SCM protocol
URL: https://github.com/apache/hadoop/pull/1001#issuecomment-504544047
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 542 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 903 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 356 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 560 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 302 | the patch passed |
   | +1 | cc | 302 | the patch passed |
   | +1 | javac | 302 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 821 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | +1 | findbugs | 601 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 313 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2057 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7654 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1001/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1001 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4569f10c283e 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cba13c7 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1001/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1001/2/testReport/ |
   | Max. process+thread count | 4501 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1001/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264919)
Time Spent: 1h 40m  (was: 1.5h)

> Fix error propagation for SCM protocol
> --
>
> Key: HDDS-1258
> URL: https://issues.apache.org/jira/browse/HDDS-1258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Critical
>  Labels: pull-request-available
>  Time 

[jira] [Updated] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1718:
-
Labels: pull-request-available  (was: )

> Increase Ratis Leader election timeout default to 10 seconds.
> -
>
> Key: HDDS-1718
> URL: https://issues.apache.org/jira/browse/HDDS-1718
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>
> While testing out ozone with long running clients which continuously write 
> data, it was noted that whenever a 1 second GC pause occurs in the leader, it 
> triggers a leader election thereby disturbing the steady state of the system 
> for more time than the GC pause delay.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1718?focusedWorklogId=264913=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264913
 ]

ASF GitHub Bot logged work on HDDS-1718:


Author: ASF GitHub Bot
Created on: 21/Jun/19 19:17
Start Date: 21/Jun/19 19:17
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1004: HDDS-1718 
: Increase Ratis Leader election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004
 
 
   While testing out ozone with long running clients which continuously write 
data, it was noted that whenever a 1 second GC pause occurs in the leader, it 
triggers a leader election thereby disturbing the steady state of the system 
for more time than the GC pause delay.
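   For reference, a sketch of how a deployment could raise the floor today 
(the property name dfs.ratis.leader.election.minimum.timeout.duration is an 
assumption, as is the OzoneConfiguration entry point):
   {code:java}
   import java.util.concurrent.TimeUnit;
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;

   OzoneConfiguration conf = new OzoneConfiguration();
   // a 10s election floor keeps a ~1s GC pause from looking like a dead leader
   conf.setTimeDuration(
       "dfs.ratis.leader.election.minimum.timeout.duration",
       10, TimeUnit.SECONDS);
   {code}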
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264913)
Time Spent: 10m
Remaining Estimate: 0h

> Increase Ratis Leader election timeout default to 10 seconds.
> -
>
> Key: HDDS-1718
> URL: https://issues.apache.org/jira/browse/HDDS-1718
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While testing out ozone with long running clients which continuously write 
> data, it was noted that whenever a 1 second GC pause occurs in the leader, it 
> triggers a leader election thereby disturbing the steady state of the system 
> for more time than the GC pause delay.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1718?focusedWorklogId=264914=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-264914
 ]

ASF GitHub Bot logged work on HDDS-1718:


Author: ASF GitHub Bot
Created on: 21/Jun/19 19:17
Start Date: 21/Jun/19 19:17
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1004: HDDS-1718 : 
Increase Ratis Leader election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004#issuecomment-504542949
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 264914)
Time Spent: 20m  (was: 10m)

> Increase Ratis Leader election timeout default to 10 seconds.
> -
>
> Key: HDDS-1718
> URL: https://issues.apache.org/jira/browse/HDDS-1718
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While testing out ozone with long running clients which continuously write 
> data, it was noted that whenever a 1 second GC pause occurs in the leader, it 
> triggers a leader election thereby disturbing the steady state of the system 
> for more time than the GC pause delay.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1721:

Description: Client Metrics are not being pushed to the configured sink 
while running a hadoop command to write to Ozone.

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1721:

Summary: Client Metrics are not being pushed to the configured sink while 
running a hadoop command to write to Ozone.  (was: Client Metrics are not being 
pushed to the sink while running a hadoop command to write to Ozone.)

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1720) Add ability to configure RocksDB logs for OM and SCM.

2019-06-21 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869818#comment-16869818
 ] 

Dinesh Chitlangia commented on HDDS-1720:
-

[~avijayan] This will be a very useful addition. Thanks for filing this.

> Add ability to configure RocksDB logs for OM and SCM. 
> --
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> While doing performance testing, it was seen that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism to understand the health of RocksDB while investigating 
> large clusters. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869816#comment-16869816
 ] 

Eric Yang commented on HDDS-1609:
-

{quote} * I wouldn't like to choose a uid which has an already existing, 
different meaning, because (I think) that's very confusing. (eg. user of  
Apache Http server, Mysql server, NFS server, etc.){quote}

A lot of software uses the same uid.  For example, it is common practice to 
swap out the OpenSSH daemon with Apache Mina sshd running with uid 74, or to 
replace ncftp with vsftp using uid 14.  This practice is very common in the 
Unix world.  Ozone falls in a similar category to an NFS server.  I think usage 
of rpcuser is ok, or choose a dynamic unix local user uid of 194 per [commonly 
accepted convention|http://pig.made-it.com/uidgid.html].

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: linux.txt, log.html, osx.txt, report.html
>
>
> Hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45]
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user has a uid different from 1000.  
> Data is written to external bind mount locations as uid 1000.  This can 
> prevent the development environment from cleaning up test data.  
> Docker documentation stated that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From the Ozone 
> architecture point of view, there is no reason for the Ozone daemon to 
> require a privileged or hard coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to 
> reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, that contains 
> the host user entry.
> h3. Solution 2
> Move the hard coded user to the range < 200.  The default linux profile 
> reserves service users < 200 to have umasks that keep data private to the 
> service user, or group writable if the service shares a group with other 
> service users.  Register the service user with Linux vendors to ensure that 
> there is a reserved uid for the Hadoop user, or pick one that works for 
> Hadoop.  This is a longer route to pursue, and may not be fruitful.  
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This will allow 
> the docker image to see host level names by binding the sssd socket.  The 
> instructions for doing this are located on the [Hadoop website| 
> https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> The pre-requisite for this approach is that the host level system has sssd 
> installed.  For production systems, there is a 99% chance that sssd is 
> installed.
> We may want to support a combination of solutions 1 and 3 to be proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1721) Client Metrics are not being pushed to the sink while running a hadoop command to write to Ozone.

2019-06-21 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1721:
---

 Summary: Client Metrics are not being pushed to the sink while 
running a hadoop command to write to Ozone.
 Key: HDDS-1721
 URL: https://issues.apache.org/jira/browse/HDDS-1721
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1720) Add ability to configure RocksDB logs for OM and SCM.

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1720:

Summary: Add ability to configure RocksDB logs for OM and SCM.   (was: Add 
ability to collect RocksDB logs for OM and SCM. )

> Add ability to configure RocksDB logs for OM and SCM. 
> --
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> While doing performance testing, it was seen that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism to understand the health of RocksDB while investigating 
> large clusters. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1720) Add ability to collect RocksDB logs for OM and SCM.

2019-06-21 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1720:
---

 Summary: Add ability to collect RocksDB logs for OM and SCM. 
 Key: HDDS-1720
 URL: https://issues.apache.org/jira/browse/HDDS-1720
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


While doing performance testing, it was seen that there was no way to get 
RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
useful mechanism to understand the health of RocksDB while investigating large 
clusters. 
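A sketch of the kind of RocksDB options such a switch could set when OM/SCM 
open their DBs (the log directory and retention values are illustrative 
assumptions; the setters are standard RocksDB Java API):

{code:java}
import org.rocksdb.InfoLogLevel;
import org.rocksdb.Options;

Options options = new Options();
// write RocksDB's internal LOG files to a configurable directory
options.setDbLogDir("/var/log/ozone/om/rocksdb");
options.setInfoLogLevel(InfoLogLevel.INFO_LEVEL);
// bound the disk footprint of the logs
options.setMaxLogFileSize(64L * 1024 * 1024);
options.setKeepLogFileNum(10);
{code}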



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1719:

Description: While testing out ozone with long running clients which 
continuously write data, it was noted that ratis logs were rolled 1-2 times 
every second. This adds unnecessary overhead to the pipeline, thereby affecting 
write throughput. Increasing the size of the log segment to 1MB will decrease 
the overhead.

> Increase ratis log segment size to 1MB.
> ---
>
> Key: HDDS-1719
> URL: https://issues.apache.org/jira/browse/HDDS-1719
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.4.1
>
>
> While testing out ozone with long running clients which continuously write 
> data, it was noted that ratis logs were rolled 1-2 times every second. This 
> adds unnecessary overhead to the pipeline, thereby affecting write 
> throughput. Increasing the size of the log segment to 1MB will decrease the 
> overhead.
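> A minimal sketch of the corresponding override (assuming the segment size is 
> governed by the dfs.container.ratis.segment.size property):
> {code:java}
> import org.apache.hadoop.conf.StorageUnit;
> import org.apache.hadoop.hdds.conf.OzoneConfiguration;
>
> OzoneConfiguration conf = new OzoneConfiguration();
> // 1MB segments roll far less often than small ones under a steady write load
> conf.setStorageSize("dfs.container.ratis.segment.size", 1, StorageUnit.MB);
> {code}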



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1719) Increase ratis log segment size to 1MB.

2019-06-21 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1719:
---

 Summary: Increase ratis log segment size to 1MB.
 Key: HDDS-1719
 URL: https://issues.apache.org/jira/browse/HDDS-1719
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Datanode
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.4.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1718) Increase Ratis Leader election timeout default to 10 seconds.

2019-06-21 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1718:
---

 Summary: Increase Ratis Leader election timeout default to 10 
seconds.
 Key: HDDS-1718
 URL: https://issues.apache.org/jira/browse/HDDS-1718
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Datanode
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.4.1


While testing out ozone with long running clients which continuously write 
data, it was noted that whenever a 1 second GC pause occurs in the leader, it 
triggers a leader election thereby disturbing the steady state of the system 
for more time than the GC pause delay.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14597) warning: non-array allocated with C++ array allocation operator

2019-06-21 Thread Yuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869807#comment-16869807
 ] 

Yuri commented on HDFS-14597:
-

Could you please build with clang and fix all errors/warnings that it produces?

 

Thank you!

Yuri

 

> warning: non-array allocated with C++ array allocation operator
> ---
>
> Key: HDFS-14597
> URL: https://issues.apache.org/jira/browse/HDFS-14597
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuri
>Priority: Major
>
> {code:java}
> WARNING] 
> /usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:125:21:
>  note: allocated with 'new[]' here
> [WARNING] char * buffer = new char[bufferSize];
> [WARNING] ^
> {code}
>  
> clang8 on FreeBSD.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14597) warning: non-array allocated with C++ array allocation operator

2019-06-21 Thread Yuri (JIRA)
Yuri created HDFS-14597:
---

 Summary: warning: non-array allocated with C++ array allocation 
operator
 Key: HDFS-14597
 URL: https://issues.apache.org/jira/browse/HDFS-14597
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yuri


{code:java}
WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:125:21:
 note: allocated with 'new[]' here
[WARNING] char * buffer = new char[bufferSize];
[WARNING] ^

{code}
 

clang8 on FreeBSD.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-06-21 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869798#comment-16869798
 ] 

Elek, Marton commented on HDDS-1609:


Thanks for the answer [~eyang].

AFAIK, we don't support any other multi-host, containerized environment as of 
now, just kubernetes. For this reason it's very hard to imagine this 
scenario without the technical details.

If I understood well, the problem here is the following:

 * If the system admin accidentally gives permission to a user, that user can 
do bad things.

 * ((But ONLY if the administrator runs our NON production docker images in 
production.))

I think we can live with this risk.

(Let's say we use uid=456; we have the same risk: if the administrator gives 
uid=456 permission to log in to the host, that user can kill the uid=456 
processes. Let's say we run ozone as the root user: the same holds if the 
administrator makes it possible for everybody to log in as that user.)

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: linux.txt, log.html, osx.txt, report.html
>
>
> Hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45]
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user has a uid different from 1000.  
> Data is written to external bind mount locations as uid 1000.  This can 
> prevent the development environment from cleaning up test data.  
> Docker documentation stated that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From the Ozone 
> architecture point of view, there is no reason for the Ozone daemon to 
> require a privileged or hard coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to 
> reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, that contains 
> the host user entry.
> h3. Solution 2
> Move the hard coded user to the range < 200.  The default linux profile 
> reserves service users < 200 to have umasks that keep data private to the 
> service user, or group writable if the service shares a group with other 
> service users.  Register the service user with Linux vendors to ensure that 
> there is a reserved uid for the Hadoop user, or pick one that works for 
> Hadoop.  This is a longer route to pursue, and may not be fruitful.  
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This will allow 
> the docker image to see host level names by binding the sssd socket.  The 
> instructions for doing this are located on the [Hadoop website| 
> https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> The pre-requisite for this approach is that the host level system has sssd 
> installed.  For production systems, there is a 99% chance that sssd is 
> installed.
> We may want to support a combination of solutions 1 and 3 to be proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14596) error: invalid suffix on literal; C++11 requires a space between literal and identifier

2019-06-21 Thread Yuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuri updated HDFS-14596:

Description: 
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
[WARNING]  

{code}
 Please add spaces around all such occurrences of these and similar formatting 
tokens.

  was:
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
[WARNING]  

{code}
 Please add spaces around all such occurrences of these and similar formatting 
symbols.


> error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier
> ---
>
> Key: HDFS-14596
> URL: https://issues.apache.org/jira/browse/HDFS-14596
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuri
>Priority: Major
>
> clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned 
> with the quotation character:
> {code:java}
> [WARNING] 
> /usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
>  error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier [-Wreserved-user-defined-literal]
> [WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
> [WARNING] ^
> [WARNING]  
> {code}
>  Please add spaces around all such occurrences of these and similar 
> formatting tokens.
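> For example, the flagged line compiles cleanly under C++11 once a space 
> separates the macro from each string literal:
> {code:java}
> printf("Block size: %" PRId64 "K\n", blockSize / 1024);
> {code}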



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14596) error: invalid suffix on literal; C++11 requires a space between literal and identifier

2019-06-21 Thread Yuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuri updated HDFS-14596:

Description: 
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
[WARNING]  

{code}
 Please add spaces around all such occurrences of these and similar formatting 
symbols.

  was:
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
[WARNING]  

{code}
 


> error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier
> ---
>
> Key: HDFS-14596
> URL: https://issues.apache.org/jira/browse/HDFS-14596
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuri
>Priority: Major
>
> clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned 
> with the quotation character:
> {code:java}
> [WARNING] 
> /usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
>  error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier [-Wreserved-user-defined-literal]
> [WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
> [WARNING] ^
> [WARNING]  
> {code}
>  Please add spaces around all such occurrences of these and similar 
> formatting symbols.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14596) error: invalid suffix on literal; C++11 requires a space between literal and identifier

2019-06-21 Thread Yuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuri updated HDFS-14596:

Description: 
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
[WARNING]  

{code}
 

  was:
clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned with 
the quotation character:
{code:java}
// code placeholder
{code}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal] [WARNING] printf("Block size: 
%"PRId64"K\n", blockSize / 1024); [WARNING] ^


> error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier
> ---
>
> Key: HDFS-14596
> URL: https://issues.apache.org/jira/browse/HDFS-14596
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuri
>Priority: Major
>
> clang8 on FreeBSD complains about PRId64, PRIu64 and similar being aligned 
> with the quotation character:
> {code:java}
> [WARNING] 
> /usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
>  error: invalid suffix on literal; C++11 requires a space between literal and 
> identifier [-Wreserved-user-defined-literal]
> [WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
> [WARNING] ^
> [WARNING]  
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14596) error: invalid suffix on literal; C++11 requires a space between literal and identifier

2019-06-21 Thread Yuri (JIRA)
Yuri created HDFS-14596:
---

 Summary: error: invalid suffix on literal; C++11 requires a space 
between literal and identifier
 Key: HDFS-14596
 URL: https://issues.apache.org/jira/browse/HDFS-14596
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yuri


clang8 on FreeBSD complains about PRId64, PRIu64, and similar macros being placed 
flush against the quotation character:
{code:java}
[WARNING] 
/usr/ports/devel/hadoop3/work/hadoop-3.2.0-src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/test/TestCompressions.cc:272:25:
 error: invalid suffix on literal; C++11 requires a space between literal and 
identifier [-Wreserved-user-defined-literal]
[WARNING]   printf("Block size: %"PRId64"K\n", blockSize / 1024);
[WARNING] ^
{code}






[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869781#comment-16869781
 ] 

Eric Yang commented on HDDS-1609:
-

{quote}Are we talking about kubernetes? I don't think that user John has any 
chance to access any of the nodes directly. Therefore there is no chance to 
kill any processes.{quote}

Please do not conflate the Docker image issue with Kubernetes.  They are 
complementary technologies, and this mistake can also apply to a Kubernetes 
cluster.  A system admin who maintains the OS for Kubernetes infrastructure may 
also stumble on this bug by accident.  Your reconsideration is appreciated.

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: linux.txt, log.html, osx.txt, report.html
>
>
> Hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45]
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user's uid differs from 1000.  Data 
> written to external bind mount locations is owned by uid 1000, which can 
> prevent the development environment from cleaning up test data.
> The Docker documentation states that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From the Ozone 
> architecture point of view, there is no reason for the Ozone daemon to 
> require a privileged or hard-coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to 
> reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or, in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, that 
> contains the host user entry.
> h3. Solution 2
> Move the hard-coded user into the range < 200.  The default linux profile 
> reserves service uids < 200 and gives them a umask that keeps data private 
> to the service user, or group-writable if the service shares a group with 
> other service users.  Register the service user with Linux vendors to 
> ensure that there is a reserved uid for the Hadoop user, or pick one that 
> works for Hadoop.  This is a longer route to pursue, and may not be 
> fruitful.
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This will 
> allow the docker image to see host-level names by binding the sssd socket.  
> The instructions for doing this are on the [Hadoop website| 
> https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> This approach requires the host system to have sssd installed.  For 
> production systems, there is a 99% chance that sssd is installed.
> A combination of solutions 1 and 3 may be the proper approach.
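As a concrete sketch of solution 1 combined with the /etc/passwd workaround described in the issue (the image name and command below are illustrative assumptions, not part of the issue):

{code}
# Run the container as the host user so bind-mounted data is owned by the
# host uid; mount a read-only passwd file so that uid resolves to a name.
docker run -u $(id -u):$(id -g) \
  -v /etc/passwd:/etc/passwd:ro \
  apache/hadoop-runner id
{code}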






[jira] [Updated] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-06-21 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12345:
---
Attachment: HDFS-12345.008.patch

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch, HDFS-12345.003.patch, HDFS-12345.004.patch, 
> HDFS-12345.005.patch, HDFS-12345.006.patch, HDFS-12345.007.patch, 
> HDFS-12345.008.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However, we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]






[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-06-21 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869780#comment-16869780
 ] 

Erik Krogen commented on HDFS-12345:


Thanks [~jojochuang], that's a great catch. I changed it to the 24-hour 
timestamp by default (using 12-hour was a typo on my part), and made all of 
that parsing logic configurable to accommodate potentially different audit log 
formats. v008 patch attached.
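For context, a line in the default NameNode audit log format that this parsing has to handle looks roughly like the following (all values are illustrative):

{code}
2019-06-21 14:02:11,339 INFO FSNamesystem.audit: allowed=true  ugi=hdfs (auth:SIMPLE)  ip=/10.0.0.1  cmd=open  src=/data/part-0  dst=null  perm=null  proto=rpc
{code}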

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch, HDFS-12345.003.patch, HDFS-12345.004.patch, 
> HDFS-12345.005.patch, HDFS-12345.006.patch, HDFS-12345.007.patch, 
> HDFS-12345.008.patch
>
>






[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-06-21 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869777#comment-16869777
 ] 

Elek, Marton commented on HDDS-1609:


BTW, our docker image is NOT for production:

[https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Docker+images]

Just to make some progress: I am happy to use any other hard-coded uid value. 
Let's say 999 or 572, because it doesn't matter at all which number is chosen. 
But:

 * I would like to keep the hadoop username INSIDE the container (or ozone)

 * I wouldn't like to choose a uid which already has an existing, different 
meaning, because (I think) that's very confusing (e.g. the user of the Apache 
HTTP server, the MySQL server, the NFS server, etc.)

 * I wouldn't like to do it until we have compatible hadoop/spark/hive docker 
images (because we need them for mapreduce/spark testing)

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: linux.txt, log.html, osx.txt, report.html
>
>






[jira] [Assigned] (HDFS-14034) Support getQuotaUsage API in WebHDFS

2019-06-21 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HDFS-14034:
---

Assignee: Chao Sun  (was: Erik Krogen)

> Support getQuotaUsage API in WebHDFS
> 
>
> Key: HDFS-14034
> URL: https://issues.apache.org/jira/browse/HDFS-14034
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, webhdfs
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
>
> HDFS-8898 added support for a new API, {{getQuotaUsage}}, which can fetch 
> quota usage on a directory with significantly lower impact than the similar 
> {{getContentSummary}}. This JIRA tracks adding support for this API to 
> WebHDFS.
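Once supported, the call shape would presumably mirror the existing GETCONTENTSUMMARY operation; a hedged sketch follows (the GETQUOTAUSAGE op name and JSON fields are assumptions by analogy, not the committed API):

{code}
# Hypothetical WebHDFS request, by analogy with op=GETCONTENTSUMMARY:
curl -i "http://<namenode-host>:9870/webhdfs/v1/user/foo?op=GETQUOTAUSAGE"
# Plausible response shape, mirroring the RPC-side QuotaUsage fields:
# {"QuotaUsage":{"fileAndDirectoryCount":...,"quota":...,
#                "spaceConsumed":...,"spaceQuota":...}}
{code}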






[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-06-21 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869764#comment-16869764
 ] 

Chen Liang commented on HDFS-12979:
---

The failed tests were due to the attribute getting reset after the NN restarts, 
so the attribute needs to be re-sent in {{MiniDFSCluster#restartNameNode}}. With 
this change, the failed tests passed in my local run. Posted the v013 patch.

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch, HDFS-12979.008.patch, 
> HDFS-12979.009.patch, HDFS-12979.010.patch, HDFS-12979.011.patch, 
> HDFS-12979.012.patch, HDFS-12979.013.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of an ObserverNode too long. A StandbyNode should copy 
> the latest fsimage to the ObserverNode(s) along with the ANN.






[jira] [Commented] (HDDS-1567) Define a set of environment variables to configure Ozone docker image

2019-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869763#comment-16869763
 ] 

Eric Yang commented on HDDS-1567:
-

{quote}Give me more example, please. krb5.conf and jaas.config can be 
configured in a platform dependent way (for kubernetes with a configmap, for 
on-prem with creating the files)
{quote}
Sorry, I don't understand the ask here. I did not say anything about platform 
dependence; I only mentioned site dependence. For example, krb5.conf is usually 
managed as part of the infrastructure via FreeIPA. It could get really 
complicated really fast if we try to manage our own copy. Here is a copy of my 
test environment's krb5.conf:
{code:java}
# Other applications require this directory to perform krb5 configuration.
includedir /etc/krb5.conf.d/

[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  EXAMPLE.COM = {
admin_server = eyang-1
kdc = eyang-1
auth_to_local = RULE:[3:$3](b)/s/^.*$/guest/
  }

  EXAMPLE2.COM = {
admin_server = eyang-3
kdc = eyang-3
  }
{code}
As you can see, multiple realms are defined in krb5.conf, and this file is 
usually auto-generated by infrastructure tools. Mounting krb5.conf from the 
host is usually the better way to handle this, and it avoids over-simplifying 
the OS configuration. Hadoop took a shortcut by parsing auth_to_local from 
Hadoop's own configuration rather than from krb5.conf, which creates extra work 
for system admins to keep Hadoop's auth_to_local and the system-level krb5.conf 
aligned.

Here is a JAAS configuration, specifically for YARN to access the ZooKeeper 
service:
{code:java}
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/eyang-2.localdom...@example.com";
};
com.sun.security.jgss.krb5.initiate {
  com.sun.security.auth.module.Krb5LoginModule required
  renewTGT=false
  doNotPrompt=true
  useKeyTab=true
  keyTab="/etc/security/keytabs/rm.service.keytab"
  principal="rm/eyang-2.localdom...@example.com"
  storeKey=true
  useTicketCache=false;
};
{code}
There are common properties, like the principal and the keytab location, which 
can be reused in ozone-site.xml. It is probably cheaper to create an extension 
in envtoconf.py to manage the JAAS configuration. Something like krb5.conf is 
better left alone, without modification.

{quote}
I am fine to handle all the other files (krb5.conf, jaas.config spark.conf) 
with envtoconf.py, and in this case the configuration of these files should be 
handled manually in on-prem.{quote}

Sounds good.

> Define a set of environment variables to configure Ozone docker image
> -
>
> Key: HDDS-1567
> URL: https://issues.apache.org/jira/browse/HDDS-1567
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
>
> For a developer who tries to set up the docker image by hand for testing 
> purposes, it would be nice to predefine a set of environment variables that 
> can be passed to the Ozone docker image to provide the minimum configuration 
> needed to start Ozone containers.  There is a python script that converts 
> environment variables to config files, but the documentation does not show 
> which settings can be passed to configure the system.  This task would be a 
> good starting point to document the available configuration knobs.
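As a starting point for that documentation, the convention used by the existing compose files is to prefix each variable with the target file name and let envtoconf.py expand it (the specific keys below are illustrative examples, not a vetted list):

{code}
# docker-config style entries consumed by envtoconf.py (illustrative):
OZONE-SITE.XML_ozone.om.address=om
OZONE-SITE.XML_ozone.scm.names=scm
# each becomes a <property> in the generated ozone-site.xml, e.g.:
#   <property><name>ozone.om.address</name><value>om</value></property>
{code}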






[jira] [Updated] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-06-21 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12979:
--
Attachment: HDFS-12979.013.patch

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch, HDFS-12979.008.patch, 
> HDFS-12979.009.patch, HDFS-12979.010.patch, HDFS-12979.011.patch, 
> HDFS-12979.012.patch, HDFS-12979.013.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of an ObserverNode too long. A StandbyNode should copy 
> the latest fsimage to the ObserverNode(s) along with the ANN.






[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-06-21 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869747#comment-16869747
 ] 

Elek, Marton commented on HDDS-1609:


Are we talking about kubernetes? I don't think that user John has any chance to 
access any of the nodes directly. Therefore there is no chance to kill any 
processes.

Or are we talking about local dev environments where ozone is started from 
docker-compose? In this case the example is unrealistic.

I think your concerns apply to on-prem installs, and we have no pre-created 
user there.

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: linux.txt, log.html, osx.txt, report.html
>
>






[jira] [Commented] (HDFS-14403) Cost-Based RPC FairCallQueue

2019-06-21 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869737#comment-16869737
 ] 

Erik Krogen commented on HDFS-14403:


Thanks [~elgoiri]! Great suggestions. I have applied them all in the v012 
patch. I added a new test class {{TestWeightedTimeCostProvider}}, enhanced the 
existing test within {{TestDecayRpcScheduler}}, and added two additional test 
cases there. I also added a new documentation block within 
{{FairCallQueue.md}}.
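For anyone following along, wiring in the new cost provider would presumably look something like the following in core-site.xml (the property name is my assumption from the class naming; check {{FairCallQueue.md}} in the v012 patch for the authoritative key):

{code}
<!-- Hypothetical sketch: attach the weighted-time cost provider to the
     scheduler for the RPC server on port 8020. -->
<property>
  <name>ipc.8020.cost-provider.impl</name>
  <value>org.apache.hadoop.ipc.WeightedTimeCostProvider</value>
</property>
{code}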

> Cost-Based RPC FairCallQueue
> 
>
> Key: HDFS-14403
> URL: https://issues.apache.org/jira/browse/HDFS-14403
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc, namenode
>Reporter: Erik Krogen
>Assignee: Christopher Gregorian
>Priority: Major
>  Labels: qos, rpc
> Attachments: CostBasedFairCallQueueDesign_v0.pdf, 
> HDFS-14403.001.patch, HDFS-14403.002.patch, HDFS-14403.003.patch, 
> HDFS-14403.004.patch, HDFS-14403.005.patch, HDFS-14403.006.combined.patch, 
> HDFS-14403.006.patch, HDFS-14403.007.patch, HDFS-14403.008.patch, 
> HDFS-14403.009.patch, HDFS-14403.010.patch, HDFS-14403.011.patch, 
> HDFS-14403.012.patch, HDFS-14403.branch-2.8.patch
>
>
> HADOOP-15016 initially described extensions to the Hadoop FairCallQueue 
> encompassing both cost-based analysis of incoming RPCs, as well as support 
> for reservations of RPC capacity for system/platform users. This JIRA intends 
> to track the former, as HADOOP-15016 was repurposed to more specifically 
> focus on the reservation portion of the work.





