[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269679&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269679
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 05:56
Start Date: 29/Jun/19 05:56
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #846: HDDS-1555. Disable 
install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-506930186
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269679)
Time Spent: 4h 20m  (was: 4h 10m)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> If a follower lags behind the leader by a large number of log entries, the 
> leader tries to send a snapshot to the follower. For ContainerStateMachine, 
> the information in the snapshot is not the entire state machine data, so 
> InstallSnapshot for ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}
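> A minimal sketch of the intended change, assuming Ratis exposes a server 
> property to disable the leader's install-snapshot path (the exact setter 
> name may differ across Ratis versions):
> {code:java}
> import org.apache.ratis.conf.RaftProperties;
> import org.apache.ratis.server.RaftServerConfigKeys;
> 
> RaftProperties properties = new RaftProperties();
> // Keep the leader from streaming a snapshot to a lagging follower;
> // the follower has to catch up from the retained log instead.
> RaftServerConfigKeys.Log.Appender.setInstallSnapshotEnabled(properties, false);
> {code}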



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-06-28 Thread Paul Ward (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Ward reassigned HDFS-14610:


Assignee: (was: Paul Ward)

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> There are 9 such locations in total.
> The reason is that *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR linked above protects this instance (line 455) with synchronization, 
> as in line 484 and in all other occurrences.
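> A minimal sketch of the fix, reusing the names from the snippets above 
> (illustrative only; see the linked CR for the actual change):
> {code:java}
> DatanodeStorageInfo storage;
> synchronized (storageMap) {
>   // take the same lock that guards the other accesses of storageMap
>   storage = storageMap.get(report.getStorage().getStorageID());
> }
> {code}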



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-06-28 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12748:
---
Attachment: HDFS-12748.005.patch

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch, HDFS-12748.004.patch, HDFS-12748.005.patch
>
>
> In our production environment, the standby NN often does full GC; using MAT 
> (Memory Analyzer Tool) we found the largest object is FileSystem$Cache, 
> which contains 7,844,890 DistributedFileSystem instances.
> By viewing the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why it 
> creates a different DistributedFileSystem every time instead of getting a 
> FileSystem from the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem after serving GETHOMEDIRECTORY, the NN doesn't 
> do full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}
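> An equivalent variant of the fix above using try-with-resources (FileSystem 
> implements Closeable), shown only as a sketch:
> {code:java}
> case GETHOMEDIRECTORY: {
>   try (FileSystem fs =
>       FileSystem.get(conf != null ? conf : new Configuration())) {
>     final String js = JsonUtil.toJsonString("Path",
>         fs.getHomeDirectory().toUri().getPath());
>     return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   }
> }
> {code}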



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269676
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 04:31
Start Date: 29/Jun/19 04:31
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#discussion_r298787362
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -28,7 +28,6 @@
 import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
 import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
 import org.apache.ratis.proto.RaftProtos.RaftPeerRole;
-import org.apache.ratis.protocol.RaftGroup;
 import org.apache.ratis.protocol.RaftGroupId;
 import org.apache.ratis.server.RaftServer;
 import org.apache.ratis.server.impl.RaftServerConstants;
 
 Review comment:
   Let's use RaftLog.INVALID_LOG_INDEX in place of 
RaftServerConstants.INVALID_LOG_INDEX.
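
   A hypothetical one-liner showing the suggested swap (variable name is 
   illustrative; import paths may differ across Ratis versions):
   {code:java}
   // before: long index = RaftServerConstants.INVALID_LOG_INDEX;
   long index = RaftLog.INVALID_LOG_INDEX;
   {code}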
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269676)
Time Spent: 4h 10m  (was: 4h)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> If a follower lags behind the leader by a large number of log entries, the 
> leader tries to send a snapshot to the follower. For ContainerStateMachine, 
> the information in the snapshot is not the entire state machine data, so 
> InstallSnapshot for ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269674&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269674
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 04:31
Start Date: 29/Jun/19 04:31
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#discussion_r298787310
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -104,6 +104,14 @@
       Byte limit for ratis leader's log appender queue.
     </description>
   </property>
+  <property>
+    <name>dfs.container.ratis.log.purge.gap</name>
+    <value>1024</value>
 
 Review comment:
   Let's use 1 billion here as well.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269674)
Time Spent: 4h  (was: 3h 50m)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If a follower lags behind the leader by a large number of log entries, the 
> leader tries to send a snapshot to the follower. For ContainerStateMachine, 
> the information in the snapshot is not the entire state machine data, so 
> InstallSnapshot for ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269675&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269675
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 04:31
Start Date: 29/Jun/19 04:31
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#discussion_r298787316
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
 ##
 @@ -46,66 +46,66 @@
   @Override
  public boolean process(Set<? extends TypeElement> annotations,
      RoundEnvironment roundEnv) {
-if (roundEnv.processingOver()) {
 
 Review comment:
   This seems like an unintended change. Let's revert it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269675)
Time Spent: 4h  (was: 3h 50m)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If a follower lags behind the leader by a large number of log entries, the 
> leader tries to send a snapshot to the follower. For ContainerStateMachine, 
> the information in the snapshot is not the entire state machine data, so 
> InstallSnapshot for ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875377#comment-16875377
 ] 

Hadoop QA commented on HDFS-12748:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:DistributedFileSystem.java:[line 135] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12748 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973225/HDFS-12748.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269672&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269672
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 29/Jun/19 03:59
Start Date: 29/Jun/19 03:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506924066
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 482 | trunk passed |
   | +1 | compile | 248 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 492 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 423 | the patch passed |
   | +1 | compile | 251 | the patch passed |
   | +1 | javac | 251 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | the patch passed |
   | +1 | findbugs | 509 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 264 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1712 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6745 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 224b6c328764 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/testReport/ |
   | Max. process+thread count | 5224 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269671
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 29/Jun/19 03:57
Start Date: 29/Jun/19 03:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506923969
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 255 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 447 | the patch passed |
   | +1 | compile | 266 | the patch passed |
   | +1 | javac | 266 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | the patch passed |
   | +1 | findbugs | 587 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 241 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1458 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6514 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2a1eccb59eb2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/6/testReport/ |
   | Max. process+thread count | 5184 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269671)
Time Spent: 1.5h  (was: 1h 20m)

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 

[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269670
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 29/Jun/19 03:50
Start Date: 29/Jun/19 03:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506923634
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 466 | trunk passed |
   | +1 | compile | 243 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 848 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 324 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 418 | the patch passed |
   | +1 | compile | 237 | the patch passed |
   | +1 | javac | 237 | the patch passed |
   | -0 | checkstyle | 33 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 637 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | +1 | findbugs | 506 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 152 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1319 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6015 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a5d964bda270 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/testReport/ |
   | Max. process+thread count | 5080 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time 

[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269668
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 29/Jun/19 03:43
Start Date: 29/Jun/19 03:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506923284
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 458 | trunk passed |
   | +1 | compile | 232 | trunk passed |
   | +1 | checkstyle | 58 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 754 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 309 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 493 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 440 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1035 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 5893 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0558f7d8d34f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/4/testReport/ |
   | Max. process+thread count | 4529 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269668)
Time Spent: 1h 10m  (was: 1h)

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 

[jira] [Commented] (HDFS-14429) Block remain in COMMITTED but not COMPLETE cause by Decommission

2019-06-28 Thread Yicong Cai (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875357#comment-16875357
 ] 

Yicong Cai commented on HDFS-14429:
---

[~jojochuang]

Before this fix, a block whose replicas are all on decommissioning nodes would 
not be completed, so the redundancy check was never performed. After the fix, 
the redundancy check runs and updateNeededReconstructions is invoked. Replicas 
on maintenance nodes count as effective, but replicas on decommissioning nodes 
do not, so neededReconstruction.update can drive the replica count negative 
(see the oldReplicas computation below).

 
{code:java}
// handle low redundancy/extra redundancy
short fileRedundancy = getExpectedRedundancyNum(storedBlock);
if (!isNeededReconstruction(storedBlock, num, pendingNum)) {
  neededReconstruction.remove(storedBlock, numCurrentReplica,
  num.readOnlyReplicas(), num.outOfServiceReplicas(), fileRedundancy);
} else {
  // Perform update
  updateNeededReconstructions(storedBlock, curReplicaDelta, 0);
}
{code}
{code:java}
if (!hasEnoughEffectiveReplicas(block, repl, pendingNum)) {
  neededReconstruction.update(block, repl.liveReplicas() + pendingNum,
  repl.readOnlyReplicas(), repl.outOfServiceReplicas(),
  curExpectedReplicas, curReplicasDelta, expectedReplicasDelta);
}
{code}
{code:java}
synchronized void update(BlockInfo block, int curReplicas,
int readOnlyReplicas, int outOfServiceReplicas,
int curExpectedReplicas,
int curReplicasDelta, int expectedReplicasDelta) {
  // Cause Negative here
  int oldReplicas = curReplicas-curReplicasDelta;
  int oldExpectedReplicas = curExpectedReplicas-expectedReplicasDelta;
  int curPri = getPriority(block, curReplicas, readOnlyReplicas,
  outOfServiceReplicas, curExpectedReplicas);
  int oldPri = getPriority(block, oldReplicas, readOnlyReplicas,
  outOfServiceReplicas, oldExpectedReplicas);
  if(NameNode.stateChangeLog.isDebugEnabled()) {
NameNode.stateChangeLog.debug("LowRedundancyBlocks.update " +
  block +
  " curReplicas " + curReplicas +
  " curExpectedReplicas " + curExpectedReplicas +
  " oldReplicas " + oldReplicas +
  " oldExpectedReplicas  " + oldExpectedReplicas +
  " curPri  " + curPri +
  " oldPri  " + oldPri);
  }
  // oldPri is mostly correct, but not always. If not found with oldPri,
  // other levels will be searched until the block is found & removed.
  remove(block, oldPri, oldExpectedReplicas);
  if(add(block, curPri, curExpectedReplicas)) {
NameNode.blockStateChangeLog.debug(
"BLOCK* NameSystem.LowRedundancyBlock.update: {} has only {} "
+ "replicas and needs {} replicas so is added to "
+ "neededReconstructions at priority level {}",
block, curReplicas, curExpectedReplicas, curPri);

  }
}
{code}
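
A hypothetical walk-through of how the arithmetic above goes negative (the 
values are illustrative, not from a real run):
{code:java}
// all 3 replicas are on decommissioning nodes, so none count as live
int curReplicas = 0;       // repl.liveReplicas() + pendingNum == 0 + 0
int curReplicasDelta = 1;  // the newly reported replica
int oldReplicas = curReplicas - curReplicasDelta;  // == -1, negative
{code}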

> Block remain in COMMITTED but not COMPLETE cause by Decommission
> 
>
> Key: HDFS-14429
> URL: https://issues.apache.org/jira/browse/HDFS-14429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Attachments: HDFS-14429.01.patch, HDFS-14429.02.patch, 
> HDFS-14429.03.patch, HDFS-14429.branch-2.01.patch, 
> HDFS-14429.branch-2.02.patch
>
>
> In the following scenario, the Block will remain in the COMMITTED but not 
> COMPLETE state and cannot be closed properly:
>  # Client writes Block(bk1) to three data nodes (dn1/dn2/dn3).
>  # bk1 has been completely written to three data nodes, and the data node 
> succeeds FinalizeBlock, joins IBR and waits to report to NameNode.
>  # The client commits bk1 after receiving the ACK.
>  # When the DN has not been reported to the IBR, all three nodes dn1/dn2/dn3 
> enter Decommissioning.
>  # The DN reports the IBR, but the block cannot be completed normally.
>  
> Then it will lead to the following related exceptions:
> {panel:title=Exception}
> 2019-04-02 13:40:31,882 INFO namenode.FSNamesystem 
> (FSNamesystem.java:checkBlocksComplete(2790)) - BLOCK* 
> blk_4313483521_3245321090 is COMMITTED but not COMPLETE(numNodes= 3 >= 
> minimum = 1) in file xxx
> 2019-04-02 13:40:31,882 INFO ipc.Server (Server.java:logException(2650)) - 
> IPC Server handler 499 on 8020, call Call#122552 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from xxx:47615
> org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not 
> replicated yet: xxx
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2579)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846)
>  at 
> 

[jira] [Work logged] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?focusedWorklogId=269662&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269662
 ]

ASF GitHub Bot logged work on HDDS-1721:


Author: ASF GitHub Bot
Created on: 29/Jun/19 01:55
Start Date: 29/Jun/19 01:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1034: HDDS-1721 : 
Client Metrics are not being pushed to the configured sin…
URL: https://github.com/apache/hadoop/pull/1034#issuecomment-506917674
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 853 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 438 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | +1 | javac | 262 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 516 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 247 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1258 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6179 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1034 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6647914cd691 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/2/testReport/ |
   | Max. process+thread count | 5146 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client U: hadoop-hdds/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269662)
Time Spent: 50m  (was: 40m)

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop 

[jira] [Commented] (HDFS-12748) NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY

2019-06-28 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875343#comment-16875343
 ] 

Weiwei Yang commented on HDFS-12748:


Hi [~xkrogen]

Thanks for helping to review this, and sorry about the late response. I got 
pinged internally and users are running into this issue too. Let's work 
together to get this fixed.

Your comments make sense to me; I have fixed them in the v4 patch except for 
the first:
{quote}I think rather than having a possibility of a null configuration and 
thus requiring a null check, it would be simpler to just supply a default conf 
object like what is done now.
{quote}
Are you suggesting we should have the null check before calling 
{{DFSUtilClient#getHomeDirectory}}? Why is that simpler?

Thanks

 

> NameNode memory leak when accessing webhdfs GETHOMEDIRECTORY
> 
>
> Key: HDFS-12748
> URL: https://issues.apache.org/jira/browse/HDFS-12748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: HDFS-12748.001.patch, HDFS-12748.002.patch, 
> HDFS-12748.003.patch, HDFS-12748.004.patch
>
>
> In our production environment, the standby NN often does full GC; using MAT 
> (Memory Analyzer Tool) we found the largest object is FileSystem$Cache, 
> which contains 7,844,890 DistributedFileSystem instances.
> By viewing the call hierarchy of FileSystem.get(), I found that only 
> NamenodeWebHdfsMethods#get calls FileSystem.get(). I don't know why it 
> creates a different DistributedFileSystem every time instead of getting a 
> FileSystem from the cache.
> {code:java}
> case GETHOMEDIRECTORY: {
>   final String js = JsonUtil.toJsonString("Path",
>   FileSystem.get(conf != null ? conf : new Configuration())
>   .getHomeDirectory().toUri().getPath());
>   return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
> }
> {code}
> When we close the FileSystem after serving GETHOMEDIRECTORY, the NN doesn't 
> do full GC.
> {code:java}
> case GETHOMEDIRECTORY: {
>   FileSystem fs = null;
>   try {
> fs = FileSystem.get(conf != null ? conf : new Configuration());
> final String js = JsonUtil.toJsonString("Path",
> fs.getHomeDirectory().toUri().getPath());
> return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
>   } finally {
> if (fs != null) {
>   fs.close();
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875342#comment-16875342
 ] 

Eric Yang edited comment on HDDS-1735 at 6/29/19 1:53 AM:
--

[~elek] {quote}Sure, this is what we do. Please check the scripts. They execute 
the pre-configured maven commands...{quote}

Wrapping maven plugin goals in shell scripts is not how a normal maven project 
is intended to be set up. Maven provides the ability to run plugin goals or 
build life cycles (phases). It looks like we are using shell scripts to wrap 
maven goals so that plugins run a certain way for continuous integration. This 
seems like redundant effort compared to writing a pom.xml that describes the 
plugin goals to run. Can we try to use Maven's intended model to avoid 
reinventing the maven build life cycle, please?


was (Author: eyang):
[~elek] {quote}Sure, this is what we do. Please check the scripts. They execute 
the pre-configured maven commands...{quote}

This is not exactly intended for a normal maven project. Maven provides the 
ability to run plugin goals or build life cycles (phases). It looks like we 
are using shell scripts to wrap maven goals so that plugins run a certain way 
for continuous integration. This seems like redundant effort compared to 
writing a pom.xml that describes the plugin goals to run. Can we try to use 
Maven's intended model to avoid reinventing the maven build life cycle?

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875342#comment-16875342
 ] 

Eric Yang commented on HDDS-1735:
-

[~elek] {quote}Sure, this is what we do. Please check the scripts. They execute 
the pre-configured maven commands...{quote}

This is not how a normal Maven project is usually set up.  Maven provides the 
ability to run plugin goals or build life cycles (phases).  It looks like we 
are using shell scripts to wrap Maven goals so the plugins run a certain way 
for continuous integration.  This seems like redundant effort compared to 
writing a pom.xml that describes the plugin goals to run.  Can we try to use 
Maven's intended model to avoid having to reinvent the Maven build life cycle?

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875341#comment-16875341
 ] 

Elek, Marton commented on HDDS-1735:


Sure, this is what we do. Please check the scripts. They execute the 
pre-configured maven commands...

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875337#comment-16875337
 ] 

Hadoop QA commented on HDDS-1734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 20s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 57s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
|   | hadoop.ozone.client.rpc.TestWatchForCommit |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2745/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973223/HDDS-1734.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux 9295dce65bb1 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d203045 |
| Default Java | 1.8.0_212 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2745/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2745/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 

[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875336#comment-16875336
 ] 

Hadoop QA commented on HDDS-1734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 17s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973224/HDDS-1734.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux d0917505215f 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d203045 |
| Default Java | 1.8.0_212 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/testReport/ |
| Max. process+thread count | 354 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2746/console |
| versions | git=2.7.4 maven=3.3.9 

[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269645
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1035: 
HDDS-1735. Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#discussion_r298781323
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/checkstyle.sh
 ##
 @@ -13,7 +13,10 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-mvn -fn checkstyle:check -am -pl :hadoop-ozone-dist -Phdds
+mvn -B -fn checkstyle:check -f pom.ozone.xml
+
+#Print out the exact violations with parsing XML results with sed
+find -name checkstyle-errors.xml | xargs sed  '$!N; //d'
 
 Review comment:
   shellcheck:1: note: Some finds don't have a default path. Specify '.' 
explicitly. [SC2185]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269645)
Time Spent: 0.5h  (was: 20m)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269647=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269647
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1035: 
HDDS-1735. Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#discussion_r298781325
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/rat.sh
 ##
 @@ -16,7 +16,10 @@
 
 mkdir -p target
 rm target/rat-aggregated.txt
-mvn -fn org.apache.rat:apache-rat-plugin:0.13:check -am -pl :hadoop-ozone-dist 
-Phdds
+cd hadoop-hdds
 
 Review comment:
   shellcheck:1: warning: Use 'cd ... || exit' or 'cd ... || return' in case cd 
fails. [SC2164]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269647)
Time Spent: 50m  (was: 40m)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269646
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1035: 
HDDS-1735. Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#discussion_r298781324
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/integration.sh
 ##
 @@ -0,0 +1,25 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+export MAVEN_OPTS="-Xmx4096m"
+mvn -B install -f pom.ozone.xml -DskipTests
+mvn -B -fn test -f pom.ozone.xml -pl 
:hadoop-ozone-integration-test,:hadoop-ozone-filesystem
+module_failed_tests=$(find "." -name 'TEST*.xml'\
 
 Review comment:
   shellcheck:23: warning: Use -print0/-0 or -exec + to allow for 
non-alphanumeric filenames. [SC2038]
   shellcheck:49: note: This word is outside of quotes. Did you intend to 'nest 
'"'single quotes'"' instead'?  [SC2026]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269646)
Time Spent: 40m  (was: 0.5h)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269648
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1035: 
HDDS-1735. Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#discussion_r298781327
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/rat.sh
 ##
 @@ -16,7 +16,10 @@
 
 mkdir -p target
 rm target/rat-aggregated.txt
-mvn -fn org.apache.rat:apache-rat-plugin:0.13:check -am -pl :hadoop-ozone-dist 
-Phdds
+cd hadoop-hdds
+mvn -B -fn org.apache.rat:apache-rat-plugin:0.13:check
+cd ../hadoop-ozone
 
 Review comment:
   shellcheck:1: warning: Use 'cd ... || exit' or 'cd ... || return' in case cd 
fails. [SC2164]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269648)
Time Spent: 1h  (was: 50m)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269649=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269649
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1035: HDDS-1735. 
Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-506913281
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 472 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | pylint | 1 | Error running pylint. Please check pylint stderr files. |
   | +1 | shadedclient | 726 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 449 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 1 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 1 | There were no new pylint issues. |
   | -1 | shellcheck | 1 | The patch generated 8 new + 6 unchanged - 0 fixed = 
14 total (was 6) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 715 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | hadoop-hdds in the patch passed. |
   | +1 | unit | 168 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 2947 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1035 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
pylint |
   | uname | Linux c87588353a8c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d203045 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/artifact/out/branch-pylint-stderr.txt
 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/artifact/out/patch-pylint-stderr.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/fault-injection-test/network-tests 
U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269649)
Time Spent: 1h 10m  (was: 1h)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the 

[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269644
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:50
Start Date: 29/Jun/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1035: 
HDDS-1735. Create separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#discussion_r298781321
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/acceptance.sh
 ##
 @@ -15,5 +15,6 @@
 # limitations under the License.
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 export HADOOP_VERSION=3
-"$DIR/../../../hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/test-all.sh"
+OZONE_VERSION=$(cat $DIR/../../pom.xml  | grep "<ozone.version>" | sed 
's/<[^>]*>//g'|  sed 's/^[ \t]*//')
 
 Review comment:
   shellcheck:21: note: Double quote to prevent globbing and word splitting. 
[SC2086]
   shellcheck:21: note: Useless cat. Consider 'cmd < file | ..' or 'cmd file | 
..' instead. [SC2002]
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269644)
Time Spent: 20m  (was: 10m)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts 
> to execute different types of testing (findbugs, rat, unit, build).
> They define, in a simple way, how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * in case of test failure a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel) but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "-am -pl :hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers we should use the 
> -T flag with docker-compose.
>  4. checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875326#comment-16875326
 ] 

Hadoop QA commented on HDDS-1734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
59s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 23s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.client.rpc.TestWatchForCommit |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2744/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973220/HDDS-1734.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux ca344d9760e9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d203045 |
| Default Java | 1.8.0_212 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2744/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2744/testReport/ |
| Max. process+thread count | 5148 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 

[jira] [Work logged] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?focusedWorklogId=269643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269643
 ]

ASF GitHub Bot logged work on HDDS-1611:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:35
Start Date: 29/Jun/19 00:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #973: HDDS-1611. 
Evaluate ACL on volume bucket key and prefix to authorize access. Contributed 
by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r298780485
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2276,11 +2325,21 @@ public void commitKey(OmKeyArgs args, long clientID)
 
   @Override
   public OmKeyLocationInfo allocateBlock(OmKeyArgs args, long clientID,
-  ExcludeList excludeList)
-  throws IOException {
+  ExcludeList excludeList) throws IOException {
 if(isAclEnabled) {
-  checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.WRITE,
-  args.getVolumeName(), args.getBucketName(), args.getKeyName());
+  try {
+checkAcls(ResourceType.KEY, StoreType.OZONE, ACLType.WRITE,
 
 Review comment:
   We need to make sure that the audit system knows about these ACL check failures.
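   
   A minimal sketch of the pattern being asked for here, recording the denial 
   before rethrowing it; the {{AclChecker}} and {{AuditSink}} types are 
   illustrative stand-ins, not OzoneManager's real classes:
   
   {code:java}
   final class AclAuditSketch {
     interface AclChecker {
       // Throws SecurityException when the caller lacks the requested right.
       void check(String volume, String bucket, String key);
     }

     interface AuditSink {
       void logFailure(String op, String resource, Exception cause);
     }

     static void checkAclsWithAudit(AclChecker acls, AuditSink audit,
         String volume, String bucket, String key) {
       try {
         acls.check(volume, bucket, key);
       } catch (SecurityException e) {
         // Record the denied access in the audit trail first...
         audit.logFailure("allocateBlock", volume + "/" + bucket + "/" + key, e);
         // ...then propagate the original failure to the caller.
         throw e;
       }
     }
   }
   {code}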
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269643)
Time Spent: 5h  (was: 4h 50m)

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-373) Genconf tool must generate ozone-site.xml with sample values

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-373?focusedWorklogId=269640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269640
 ]

ASF GitHub Bot logged work on HDDS-373:
---

Author: ASF GitHub Bot
Created on: 29/Jun/19 00:19
Start Date: 29/Jun/19 00:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1025: 
HDDS-373. Genconf tool must generate ozone-site.xml with sample values
URL: https://github.com/apache/hadoop/pull/1025#discussion_r298777642
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -41,6 +41,8 @@
   "dfs.container.ipc";
   public static final int DFS_CONTAINER_IPC_PORT_DEFAULT = 9859;
 
+  public static final String OZONE_METADATA_DIRS="ozone.metadata.dirs";
 
 Review comment:
   NIT: space
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269640)

> Genconf tool must generate ozone-site.xml with sample values
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-373.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml which is leveraged by 
> genconf tool to generate ozone-site.xml
>  
> Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option 
> to generate configs for starting pseudo-cluster. This should be useful for 
> quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-373) Genconf tool must generate ozone-site.xml with sample values

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-373?focusedWorklogId=269639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269639
 ]

ASF GitHub Bot logged work on HDDS-373:
---

Author: ASF GitHub Bot
Created on: 29/Jun/19 00:19
Start Date: 29/Jun/19 00:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1025: 
HDDS-373. Genconf tool must generate ozone-site.xml with sample values
URL: https://github.com/apache/hadoop/pull/1025#discussion_r298779284
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genconf/GenerateOzoneRequiredConfigurations.java
 ##
 @@ -106,9 +109,19 @@ public static void generateConfigurations(String path) 
throws
 
 for (OzoneConfiguration.Property p : allProperties) {
   if (p.getTag() != null && p.getTag().contains("REQUIRED")) {
-if(p.getName().equalsIgnoreCase(OzoneConfigKeys.OZONE_ENABLED)) {
+if (p.getName().equalsIgnoreCase(OzoneConfigKeys.OZONE_ENABLED)) {
   p.setValue(String.valueOf(Boolean.TRUE));
+} else if (p.getName().equalsIgnoreCase(
+OzoneConfigKeys.OZONE_METADATA_DIRS)) {
+  p.setValue(System.getProperty(OzoneConsts.JAVA_TMP_DIR));
+} else if (p.getName().equalsIgnoreCase(
 
 Review comment:
   Will this be only for a local one-node cluster?
   When HA comes, this will not work. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269639)
Time Spent: 50m  (was: 40m)

> Genconf tool must generate ozone-site.xml with sample values
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-373.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml which is leveraged by 
> genconf tool to generate ozone-site.xml
>  
> Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option 
> to generate configs for starting pseudo-cluster. This should be useful for 
> quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?focusedWorklogId=269638=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269638
 ]

ASF GitHub Bot logged work on HDDS-1611:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:17
Start Date: 29/Jun/19 00:17
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #973: HDDS-1611. 
Evaluate ACL on volume bucket key and prefix to authorize access. Contributed 
by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r298779110
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
 ##
 @@ -103,8 +103,8 @@ public void testAclParse() {
 testMatrix.put(" world::rw", Boolean.TRUE);
 testMatrix.put(" world::a", Boolean.TRUE);
 
-testMatrix.put(" world:bilbo:w", Boolean.FALSE);
-testMatrix.put(" world:bilbo:rw", Boolean.FALSE);
+testMatrix.put(" world:bilbo:w", Boolean.TRUE);
+testMatrix.put(" world:bilbo:rw", Boolean.TRUE);
 
 Review comment:
   This is a good catch. I think we should move to a more explicit error: if 
the user tries to set both world and a user name, we should throw an error. That 
way the user cannot make a mistake that we silently ignore.
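   
   A short sketch of the stricter behavior proposed here, assuming the 
   three-part {{type:name:rights}} layout used in the test matrix; this is an 
   illustrative validator, not OzoneAcl's actual parser:
   
   {code:java}
   final class WorldAclValidation {
     static void validate(String acl) {
       String[] parts = acl.trim().split(":", -1);
       if (parts.length != 3) {
         throw new IllegalArgumentException(
             "ACL must look like <type>:<name>:<rights>: " + acl);
       }
       // Reject "world" combined with a user name instead of silently
       // ignoring the name, so the caller learns about the mistake.
       if ("world".equalsIgnoreCase(parts[0]) && !parts[1].isEmpty()) {
         throw new IllegalArgumentException(
             "'world' ACLs must not name a user: " + acl);
       }
     }
   }
   {code}
   
   With this check, {{world:bilbo:rw}} fails fast instead of being quietly 
   treated as {{world::rw}}.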
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269638)
Time Spent: 4h 50m  (was: 4h 40m)

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=269637=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269637
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:12
Start Date: 29/Jun/19 00:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-506909979
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 322 | Maven dependency ordering for branch |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 257 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 967 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 340 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 550 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 461 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | -1 | javac | 94 | hadoop-hdds generated 3 new + 14 unchanged - 0 fixed = 
17 total (was 14) |
   | -0 | checkstyle | 39 | hadoop-hdds: The patch generated 14 new + 0 
unchanged - 0 fixed = 14 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 549 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 101 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1712 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7341 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.conf.TestOzoneConfiguration |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/846 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8e2386712003 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/testReport/ |
   | Max. process+thread count | 5106 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/config hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-846/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

[jira] [Commented] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875321#comment-16875321
 ] 

Eric Yang commented on HDDS-1735:
-

[~elek] There are many dev-support scripts in Ozone that are not integrated 
with Maven.  Hadoop already provides many of these plugins preconfigured, such 
as findbugs, rat, and unit tests, with many well-tuned parameters for each 
plugin.  I find the acceptance tests less detailed with the Ozone approach 
than with Yetus.  Can we use the Maven lifecycle and Maven plugins 
to accomplish the same thing as the dev-support scripts?

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed in a simple way, with the following contract:
>  * Problems should be printed out to the console.
>  * In case of a test failure, a non-zero exit code should be used.
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?focusedWorklogId=269636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269636
 ]

ASF GitHub Bot logged work on HDDS-1611:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:07
Start Date: 29/Jun/19 00:07
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #973: HDDS-1611. 
Evaluate ACL on volume bucket key and prefix to authorize access. Contributed 
by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r298778333
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/IAccessAuthorizer.java
 ##
 @@ -56,11 +56,20 @@ boolean checkAccess(IOzoneObj ozoneObject, RequestContext 
context)
 ALL,
 NONE;
 private static int length = ACLType.values().length;
+private static ACLType[] vals = ACLType.values();
 
 public static int getNoOfAcls() {
   return length;
 }
 
+public static ACLType getAclTypeFromOrdinal(int ordinal) {
+  if (ordinal > length - 1) {
 
 Review comment:
We should perhaps also add a not-less-than-zero check, so we can throw the 
exception correctly.
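   
   For illustration, a self-contained sketch of the two-sided check being 
suggested; the enum values and names below are simplified stand-ins, not the 
actual IAccessAuthorizer code:
   
{code:java}
// Illustrative sketch only: a simplified enum standing in for
// IAccessAuthorizer.ACLType, showing the two-sided bounds check
// suggested in the review comment above.
enum AclTypeSketch {
  READ, WRITE, ALL, NONE;

  // Cache values() once; each call to values() allocates a new array.
  private static final AclTypeSketch[] VALS = values();

  static AclTypeSketch fromOrdinal(int ordinal) {
    // Reject negative ordinals as well as ordinals past the end, so an
    // invalid input throws a clear exception instead of an
    // ArrayIndexOutOfBoundsException.
    if (ordinal < 0 || ordinal >= VALS.length) {
      throw new IllegalArgumentException("Invalid ACL ordinal: " + ordinal);
    }
    return VALS[ordinal];
  }
}
{code}
   
   Caching values() in a static array, as the diff above already does, avoids 
the per-call array allocation that values() otherwise performs.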
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269636)
Time Spent: 4h 40m  (was: 4.5h)

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?focusedWorklogId=269635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269635
 ]

ASF GitHub Bot logged work on HDDS-1721:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:03
Start Date: 29/Jun/19 00:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1034: HDDS-1721 : 
Client Metrics are not being pushed to the configured sin…
URL: https://github.com/apache/hadoop/pull/1034#issuecomment-506908992
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 494 | trunk passed |
   | +1 | compile | 254 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 840 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 436 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 243 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1195 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6163 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1034 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 37c53f8b9574 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/1/testReport/ |
   | Max. process+thread count | 5341 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client U: hadoop-hdds/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1034/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269635)
Time Spent: 40m  (was: 0.5h)

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug

[jira] [Updated] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1735:
-
Labels: pull-request-available  (was: )

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed in a simple way, with the following contract:
>  * Problems should be printed out to the console.
>  * In case of a test failure, a non-zero exit code should be used.
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=269634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269634
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 29/Jun/19 00:00
Start Date: 29/Jun/19 00:00
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1035: HDDS-1735. Create 
separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035
 
 
   The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
execute different types of testing (findbugs, rat, unit, build).
   
   They define how tests should be executed in a simple way, with the following contract:
   
    * Problems should be printed out to the console.
   
    * In case of a test failure, a non-zero exit code should be used.
   
   The tests are working well (in fact I have some experiments with executing 
these scripts on k8s and argo, where all the shell scripts are executed in 
parallel), but we need some updates:
   
    1. Most important: the unit tests and integration tests should be separated. 
Integration tests are more flaky, and it's better to have a way to run only the 
normal unit tests.
   
    2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
of the magical "am pl hadoop-ozone-dist" trick.
   
    3. To make it possible to run the blockade tests in containers, we should use 
the -T flag with docker-compose.
   
    4. Checkstyle violations should be printed out to the console.
   
   See: https://issues.apache.org/jira/browse/HDDS-1735
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269634)
Time Spent: 10m
Remaining Estimate: 0h

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed in a simple way, with the following contract:
>  * Problems should be printed out to the console.
>  * In case of a test failure, a non-zero exit code should be used.
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1735:
---
Status: Patch Available  (was: Open)

> Create separate unit and integration test executor dev-support script
> -
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They define how tests should be executed in a simple way, with the following contract:
>  * Problems should be printed out to the console.
>  * In case of a test failure, a non-zero exit code should be used.
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo, where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests should be separated. 
> Integration tests are more flaky, and it's better to have a way to run only 
> the normal unit tests.
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run the blockade tests in containers, we should use 
> the -T flag with docker-compose.
>  4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1734:

Attachment: HDDS-1734.003.patch

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch, 
> HDDS-1734.003.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability allows the 
> docker build to be a separate sub-module that references the Ozone tarball.  
> This change can help docker development be more agile without requiring a full 
> project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install the tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1735) Create separate unit and integration test executor dev-support script

2019-06-28 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1735:
--

 Summary: Create separate unit and integration test executor 
dev-support script
 Key: HDDS-1735
 URL: https://issues.apache.org/jira/browse/HDDS-1735
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
execute different types of testing (findbugs, rat, unit, build).

They define how tests should be executed in a simple way, with the following contract:

 * Problems should be printed out to the console.

 * In case of a test failure, a non-zero exit code should be used.

The tests are working well (in fact I have some experiments with executing 
these scripts on k8s and argo, where all the shell scripts are executed in 
parallel), but we need some updates:

 1. Most important: the unit tests and integration tests should be separated. 
Integration tests are more flaky, and it's better to have a way to run only the 
normal unit tests.

 2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead of 
the magical "am pl hadoop-ozone-dist" trick.

 3. To make it possible to run the blockade tests in containers, we should use 
the -T flag with docker-compose.

 4. Checkstyle violations should be printed out to the console.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1734:

Attachment: HDDS-1734.002.patch

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch, HDDS-1734.002.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability allows the 
> docker build to be a separate sub-module that references the Ozone tarball.  
> This change can help docker development be more agile without requiring a full 
> project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install the tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?focusedWorklogId=269632&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269632
 ]

ASF GitHub Bot logged work on HDDS-1611:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:45
Start Date: 28/Jun/19 23:45
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #973: HDDS-1611. 
Evaluate ACL on volume bucket key and prefix to authorize access. Contributed 
by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#discussion_r298776107
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -118,6 +118,10 @@
* */
   public static final String OZONE_ADMINISTRATORS =
   "ozone.administrators";
+  /**
+   * Make every user an admin.
 
 Review comment:
   Perhaps write a more detailed comment here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269632)
Time Spent: 4.5h  (was: 4h 20m)

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=269627&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269627
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:35
Start Date: 28/Jun/19 23:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1033: HDDS-1391 : Add 
ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-506905650
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 52 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 500 | trunk passed |
   | +1 | compile | 257 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 498 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for patch |
   | +1 | mvninstall | 453 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | cc | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 70 | hadoop-hdds generated 1 new + 14 unchanged - 0 fixed = 
15 total (was 14) |
   | +1 | findbugs | 518 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 108 | hadoop-hdds in the patch failed. |
   | -1 | unit | 209 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5173 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.utils.db.TestRDBStore |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1033 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 6169ac17e161 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/testReport/ |
   | Max. process+thread count | 1401 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1033/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269627)
Time Spent: 0.5h  (was: 20m)

> Add ability in OM to serve delta updates through an API.
> 

[jira] [Work logged] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?focusedWorklogId=269626&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269626
 ]

ASF GitHub Bot logged work on HDDS-1611:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:33
Start Date: 28/Jun/19 23:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #973: HDDS-1611. 
Evaluate ACL on volume bucket key and prefix to authorize access. Contributed 
by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/973#issuecomment-506905333
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 472 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   | -0 | patch | 378 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for patch |
   | -1 | mvninstall | 143 | hadoop-ozone in the patch failed. |
   | -1 | compile | 62 | hadoop-ozone in the patch failed. |
   | -1 | cc | 62 | hadoop-ozone in the patch failed. |
   | -1 | javac | 62 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 47 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | -1 | whitespace | 0 | The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | -1 | findbugs | 107 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 268 | hadoop-hdds in the patch passed. |
   | -1 | unit | 114 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 4811 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/973 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc shellcheck shelldocs |
   | uname | Linux f1341be5e17f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-973/10/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/dist hadoop-ozone/integration-test 

[jira] [Assigned] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-06-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-14610:
---

Assignee: Paul Ward

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is that *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I linked above protects the above instance (line 455) with 
> synchronization, like in line 484 and in all other occurrences.
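
For illustration, a minimal self-contained sketch of the locking discipline 
described above; the class and field names are simplified stand-ins for the 
DatanodeDescriptor code, not the actual patch:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: every read and write of the non-thread-safe
// HashMap synchronizes on the map itself, matching the pattern used
// in the other locations mentioned above.
class StorageRegistrySketch {
  private final Map<String, String> storageMap = new HashMap<>();

  String getStorage(String storageId) {
    // The fix described above: guard the lookup with the same monitor
    // used everywhere else for storageMap.
    synchronized (storageMap) {
      return storageMap.get(storageId);
    }
  }

  int getStorageMapSize() {
    // Mirrors the existing synchronized block at line 484.
    synchronized (storageMap) {
      return storageMap.size();
    }
  }
}
{code}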



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Paul Ward (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875302#comment-16875302
 ] 

Paul Ward commented on HDFS-14618:
--

Hi Anu,

 

Btw, can you please also take a look at:

 

https://issues.apache.org/jira/browse/HDFS-14610

 

I suppose there may be the same problem with non-deterministic performance.

 

I also added there an explanation of why that patch does not introduce a 
deadlock.

 

Thanks.

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0
>
> Attachments: race.patch
>
>
> I submitted a CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (nominally) protected by synchronizing on 
> {{pendingReconstructions}} --- but this cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion between 
> the locations.
> I.e., two code locations, one synchronized on {{pendingReconstructions}} and 
> the other on {{timedOutItems}}, can still execute concurrently.
> This CR adds synchronization on {{timedOutItems}}.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).
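
For illustration, a minimal self-contained sketch of the point above; the 
names are simplified stand-ins for the PendingReconstructionBlocks code. 
Locking {{pendingReconstructions}} alone cannot exclude a thread that only 
locks {{timedOutItems}}, so every access to the list takes the list's own 
monitor:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only. Two different monitors guard two different
// structures; mutual exclusion for timedOutItems requires locking
// timedOutItems itself in every code path that touches it.
class PendingTrackerSketch {
  private final Object pendingReconstructions = new Object();
  private final List<String> timedOutItems = new ArrayList<>();

  void recordTimeout(String block) {
    synchronized (pendingReconstructions) {
      // ... update state guarded by pendingReconstructions ...
      // The added lock from the CR: without it, this add() could race
      // with drainTimedOut() below.
      synchronized (timedOutItems) {
        timedOutItems.add(block);
      }
    }
  }

  List<String> drainTimedOut() {
    synchronized (timedOutItems) {
      List<String> copy = new ArrayList<>(timedOutItems);
      timedOutItems.clear();
      return copy;
    }
  }
}
{code}

The lock ordering is consistent (pendingReconstructions, then timedOutItems), 
which is why the added inner lock cannot introduce a deadlock.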



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1734:

Description: 
Ozone is using tar stitching to create ozone tarball.  This prevents down 
stream project to use Ozone tarball as a dependency.  It would be nice to 
create Ozone tarball with maven assembly plugin to have ability to cache ozone 
tarball in maven repository.  This ability allows docker build to be a separate 
sub-module and referencing to Ozone tarball.  This change can help docker 
development to be more agile without making a full project build.

Test procedure:
{code:java}
mvn -f pom.ozone.xml clean install -DskipTests -DskipShade -Dmaven.javadoc.skip 
-Pdist{code}
Expected result:

This will install the tarball into:
{code:java}
~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}

Test procedure 2:
{code:java}
mvn -f pom.ozone.xml clean package -DskipTests -DskipShade -Dmaven.javadoc.skip 
-Pdist{code}
 
Expected result:
hadoop/hadoop-ozone/dist/target directory contains ozone-0.5.0-SNAPSHOT.tar.gz 
file.

  was:Ozone is using tar stitching to create the ozone tarball.  This prevents 
downstream projects from using the Ozone tarball as a dependency.  It would be 
nice to create the Ozone tarball with the maven assembly plugin, to be able to 
cache the ozone tarball in the maven repository.  This ability allows the 
docker build to be a separate sub-module that references the Ozone tarball.  
This change can help docker development be more agile without requiring a full 
project build.


> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability allows the 
> docker build to be a separate sub-module that references the Ozone tarball.  
> This change can help docker development be more agile without requiring a full 
> project build.
> Test procedure:
> {code:java}
> mvn -f pom.ozone.xml clean install -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
> Expected result:
> This will install the tarball into:
> {code:java}
> ~/.m2/repository/org/apache/hadoop/hadoop-ozone-dist/0.5.0-SNAPSHOT/hadoop-ozone-dist-0.5.0-SNAPSHOT.tar.gz{code}
> Test procedure 2:
> {code:java}
> mvn -f pom.ozone.xml clean package -DskipTests -DskipShade 
> -Dmaven.javadoc.skip -Pdist{code}
>  
> Expected result:
> hadoop/hadoop-ozone/dist/target directory contains 
> ozone-0.5.0-SNAPSHOT.tar.gz file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-06-28 Thread Paul Ward (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875301#comment-16875301
 ] 

Paul Ward commented on HDFS-14610:
--

I do not know why the tests fail, and I am not familiar enough with Hadoop 
internals to debug them.

 

However, note that the patch just adds a 

 

{{synchronized (storageMap) { }}

 

at line 455 

 

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]

 

just like the one at line 484 

 

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]

 

Line 455 and 484 are in the same method body.

 

I.e., this patch does not introduce a deadlock or anything similar.

 

Maybe the tests fail due to non-deterministic performance in the containers, 
like here:

 

https://issues.apache.org/jira/browse/HDFS-14618

 

 

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is that *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I linked above protects the above instance (line 455) with 
> synchronization, like in line 484 and in all other occurrences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=269622&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269622
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:20
Start Date: 28/Jun/19 23:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #987: HDDS-1685. Recon: 
Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-506903544
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 504 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 522 | trunk passed |
   | +1 | compile | 255 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 841 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 351 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 561 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 449 | the patch passed |
   | +1 | compile | 259 | the patch passed |
   | +1 | javac | 259 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | -1 | findbugs | 307 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 162 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2095 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 7470 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/987 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fdd0cb847994 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/testReport/ |
   | Max. process+thread count | 3485 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269622)
Time Spent: 3h 10m  (was: 3h)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: 

[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1734:

Attachment: HDDS-1734.001.patch

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability allows the 
> docker build to be a separate sub-module that references the Ozone tarball.  
> This change can help docker development be more agile without requiring a full 
> project build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1734:

Status: Patch Available  (was: Open)

> Use maven assembly to create ozone tarball image
> 
>
> Key: HDDS-1734
> URL: https://issues.apache.org/jira/browse/HDDS-1734
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1734.001.patch
>
>
> Ozone is using tar stitching to create the ozone tarball.  This prevents 
> downstream projects from using the Ozone tarball as a dependency.  It would be 
> nice to create the Ozone tarball with the maven assembly plugin, to be able to 
> cache the ozone tarball in the maven repository.  This ability allows the 
> docker build to be a separate sub-module that references the Ozone tarball.  
> This change can help docker development be more agile without requiring a full 
> project build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Paul Ward (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875294#comment-16875294
 ] 

Paul Ward commented on HDFS-14618:
--

Hi Anu,

 

Ok, thank you for helping with those tests!

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0
>
> Attachments: race.patch
>
>
> I submitted a CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (nominally) protected by synchronizing on 
> {{pendingReconstructions}} --- but this cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion between 
> the locations.
> I.e., two code locations, one synchronized on {{pendingReconstructions}} and 
> the other on {{timedOutItems}}, can still execute concurrently.
> This CR adds synchronization on {{timedOutItems}}.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875291#comment-16875291
 ] 

Hadoop QA commented on HDFS-14618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973209/race.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bbf74ec91811 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f02b0e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27111/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27111/testReport/ |
| Max. process+thread count | 3665 (vs. ulimit of 1) |
| modules | C: 

[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269614=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269614
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:08
Start Date: 28/Jun/19 23:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506901721
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 100 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 471 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 929 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 342 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 571 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 428 | the patch passed |
   | +1 | compile | 276 | the patch passed |
   | +1 | javac | 276 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 763 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 78 | hadoop-hdds in the patch passed. |
   | +1 | javadoc | 90 | hadoop-ozone generated 0 new + 9 unchanged - 23 fixed 
= 9 total (was 32) |
   | +1 | findbugs | 506 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 265 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1347 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6633 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1026 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d3b1b36b9379 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 29465bf |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/3/testReport/ |
   | Max. process+thread count | 5070 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1026/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269614)
Time Spent: 1h  (was: 50m)

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 

[jira] [Updated] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-14618:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0
>
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (attempted to be) protected by synchronizing on 
> {{pendingReconstructions}}, but that cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the 
> other locations: two code locations, one synchronized on 
> {{pendingReconstructions}} and the other on {{timedOutItems}}, can still 
> execute concurrently.
> This CR adds synchronization on {{timedOutItems}} at that location.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875287#comment-16875287
 ] 

Anu Engineer commented on HDFS-14618:
-

[~paulward24] Thank you for your contribution. I have committed this patch to 
the trunk.

 

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (attempted to be) protected by synchronizing on 
> {{pendingReconstructions}}, but that cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the 
> other locations: two code locations, one synchronized on 
> {{pendingReconstructions}} and the other on {{timedOutItems}}, can still 
> execute concurrently.
> This CR adds synchronization on {{timedOutItems}} at that location.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875286#comment-16875286
 ] 

Hudson commented on HDFS-14618:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16840 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16840/])
HDFS-14618. Incorrect synchronization of ArrayList field (ArrayList is 
(aengineer: rev d203045c3024b134d7a0417d1ea3a60d03a1534a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java


> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (attempted to be) protected by synchronizing on 
> {{pendingReconstructions}}, but that cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the 
> other locations: two code locations, one synchronized on 
> {{pendingReconstructions}} and the other on {{timedOutItems}}, can still 
> execute concurrently.
> This CR adds synchronization on {{timedOutItems}} at that location.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14573) Backport Standby Read to branch-3

2019-06-28 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875285#comment-16875285
 ] 

Chen Liang commented on HDFS-14573:
---

Thanks [~shv]! I've pushed the branch-3.2.004 patch to branch-3.2 and am now 
working on branch-3.1 and 3.0. 

> Backport Standby Read to branch-3
> -
>
> Key: HDFS-14573
> URL: https://issues.apache.org/jira/browse/HDFS-14573
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14573-branch-3.0.001.patch, 
> HDFS-14573-branch-3.1.001.patch, HDFS-14573-branch-3.2.001.patch, 
> HDFS-14573-branch-3.2.002.patch, HDFS-14573-branch-3.2.003.patch, 
> HDFS-14573-branch-3.2.004.patch
>
>
> This Jira tracks backporting the consistent-read-from-standby feature 
> (HDFS-12943) to branch-3.x, including 3.0, 3.1, and 3.2. This is required 
> for backporting to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?focusedWorklogId=269608=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269608
 ]

ASF GitHub Bot logged work on HDDS-1733:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:04
Start Date: 28/Jun/19 23:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1031: HDDS-1733. Fix 
Ozone documentation
URL: https://github.com/apache/hadoop/pull/1031#issuecomment-506900961
 
 
   Ohh I see, it got committed by the time I posted it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269608)
Time Spent: 1h 50m  (was: 1h 40m)

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875284#comment-16875284
 ] 

Anu Engineer commented on HDFS-14618:
-

Both of these tests failed due to test timeouts; since they run in containers 
on shared, overloaded hardware, I am going to commit this patch. Thanks

 

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (attempted to be) protected by synchronizing on 
> {{pendingReconstructions}}, but that cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the 
> other locations: two code locations, one synchronized on 
> {{pendingReconstructions}} and the other on {{timedOutItems}}, can still 
> execute concurrently.
> This CR adds synchronization on {{timedOutItems}} at that location.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875282#comment-16875282
 ] 

Anu Engineer commented on HDFS-14618:
-

On it, I will check why two tests seem to have failed. Some of these tests 
fail for spurious reasons unrelated to your patch. I will take a look and 
update here.

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (attempted to be) protected by synchronizing on 
> {{pendingReconstructions}}, but that cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the 
> other locations: two code locations, one synchronized on 
> {{pendingReconstructions}} and the other on {{timedOutItems}}, can still 
> execute concurrently.
> This CR adds synchronization on {{timedOutItems}} at that location.
> Note that this CR keeps the synchronization on {{pendingReconstructions}}, 
> which is needed for a different purpose (protecting {{pendingReconstructions}}).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1734) Use maven assembly to create ozone tarball image

2019-06-28 Thread Eric Yang (JIRA)
Eric Yang created HDDS-1734:
---

 Summary: Use maven assembly to create ozone tarball image
 Key: HDDS-1734
 URL: https://issues.apache.org/jira/browse/HDDS-1734
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Eric Yang
Assignee: Eric Yang


Ozone currently uses tar stitching to create the ozone tarball.  This 
prevents downstream projects from consuming the Ozone tarball as a 
dependency.  It would be better to build the Ozone tarball with the maven 
assembly plugin, so that the tarball can be cached in the maven repository.  
That would let the docker build become a separate sub-module that references 
the Ozone tarball, making docker development more agile by avoiding a full 
project build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14611) Move handshake secret field from Token to BlockAccessToken

2019-06-28 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875281#comment-16875281
 ] 

Konstantin Shvachko commented on HDFS-14611:


Looks reasonable overall, simpler than the previous approach, and preserves 
compatibility, which is the key point. Minor comments:
# {{SaslDataTransferServer}} - some unused imports still need to be reverted
# {{public byte[] createPassword(BlockTokenIdentifier identifier)}} - may not 
need to be public
# Update JavaDoc for the new parameter in constructor 
{{BlockTokenSecretManager()}}
# Also check the checkstyle warnings.

> Move handshake secret field from Token to BlockAccessToken
> --
>
> Key: HDFS-14611
> URL: https://issues.apache.org/jira/browse/HDFS-14611
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Blocker
> Attachments: HDFS-14611.001.patch, HDFS-14611.002.patch
>
>
> Currently the handshake secret is included in Token, but conceptually it 
> belongs in the Block Access Token only. More importantly, having this field 
> in Token could break compatibility. Moreover, carrying the field in the 
> Block Access Token means we may not need to encrypt it separately anymore, 
> because the block access token is already encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1733.

   Resolution: Fixed
Fix Version/s: 0.4.1

[~dineshchitlangia] Thanks for the contribution. I have committed this patch to 
the trunk.

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875279#comment-16875279
 ] 

Hudson commented on HDDS-1733:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16839 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16839/])
HDDS-1733. Fix Ozone documentation (#1031) (aengineer: rev 
da568996afdeb42c401338607ba623ffa5741422)
* (edit) hadoop-hdds/docs/content/start/_index.md
* (edit) hadoop-hdds/docs/content/_index.md
* (edit) hadoop-hdds/docs/content/beyond/Containers.md
* (add) hadoop-hdds/docs/content/recipe/prometheus-key-allocate.png
* (add) hadoop-hdds/docs/content/recipe/prometheus.png
* (delete) hadoop-hdds/docs/static/prometheus-key-allocate.png
* (edit) hadoop-hdds/docs/content/recipe/Prometheus.md
* (delete) hadoop-hdds/docs/static/prometheus.png


> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?focusedWorklogId=269603=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269603
 ]

ASF GitHub Bot logged work on HDDS-1733:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:59
Start Date: 28/Jun/19 22:59
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1031: HDDS-1733. 
Fix Ozone documentation
URL: https://github.com/apache/hadoop/pull/1031
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269603)
Time Spent: 1.5h  (was: 1h 20m)

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?focusedWorklogId=269604=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269604
 ]

ASF GitHub Bot logged work on HDDS-1733:


Author: ASF GitHub Bot
Created on: 28/Jun/19 23:00
Start Date: 28/Jun/19 23:00
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1031: HDDS-1733. Fix 
Ozone documentation
URL: https://github.com/apache/hadoop/pull/1031#issuecomment-506900346
 
 
   @dineshchitlangia  Thanks for the contribution. I have committed this patch 
to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269604)
Time Spent: 1h 40m  (was: 1.5h)

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875270#comment-16875270
 ] 

Hadoop QA commented on HDDS-1201:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
26s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdds: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-ozone: The patch generated 101 new + 0 
unchanged - 0 fixed = 101 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
45s{color} | {color:red} hadoop-hdds generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
25s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 50s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds |
|  |  Switch statement found in 

[jira] [Work logged] (HDDS-1201) Reporting Corruptions in Containers to SCM

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1201?focusedWorklogId=269592=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269592
 ]

ASF GitHub Bot logged work on HDDS-1201:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:31
Start Date: 28/Jun/19 22:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1032: [HDDS-1201] 
Reporting corrupted containers info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-506895476
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1328 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 478 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 948 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 326 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 532 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 437 | the patch passed |
   | +1 | compile | 301 | the patch passed |
   | +1 | javac | 301 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-hdds: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 101 new + 0 
unchanged - 0 fixed = 101 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 748 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | -1 | findbugs | 225 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   | -1 | findbugs | 265 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 170 | hadoop-hdds in the patch failed. |
   | -1 | unit | 273 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6750 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Switch statement found in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.check() where 
default case is missing  At KeyValueContainer.java:where default case is 
missing  At KeyValueContainer.java:[lines 655-669] |
   | Failed junit tests | 
hadoop.ozone.container.keyvalue.TestKeyValueContainerCheck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1032 |
   | JIRA Issue | HDDS-1201 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a8e6cfd7c2b2 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f02b0e1 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/testReport/ |
   | Max. process+thread count | 1287 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1032/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | 

[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=269590=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269590
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:26
Start Date: 28/Jun/19 22:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #987: 
HDDS-1685. Recon: Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#discussion_r298765130
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -69,16 +74,20 @@
 
   /**
* Return @{@link org.apache.hadoop.ozone.recon.api.types.ContainerMetadata}
-   * for all the containers.
+   * for the containers starting from the given "prev-key" query param for the
+   * given "limit". The given "prev-key" is skipped from the results returned.
*
+   * @param limit max no. of containers to get.
+   * @param prevKey the containerID after which results are returned.
* @return {@link Response}
*/
   @GET
   public Response getContainers(
-  @DefaultValue("-1") @QueryParam("limit") int limit) {
+  @DefaultValue(FETCH_ALL) @QueryParam(RECON_QUERY_LIMIT) int limit,
+  @DefaultValue("0") @QueryParam(RECON_QUERY_PREVKEY) long prevKey) {
 
 Review comment:
   Minor NIT: Can we have a variable for this default value too?
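   A sketch of the suggested change (the constant name and the query-param 
string values are hypothetical; only the annotations mirror the diff above):
   ```java
   import javax.ws.rs.DefaultValue;
   import javax.ws.rs.GET;
   import javax.ws.rs.Path;
   import javax.ws.rs.QueryParam;
   import javax.ws.rs.core.Response;

   @Path("/containers")
   public class ContainerEndpointSketch {
     static final String FETCH_ALL = "-1";
     static final String RECON_QUERY_LIMIT = "limit";
     static final String RECON_QUERY_PREVKEY = "prevKey";
     // the named constant this NIT asks for, replacing the bare "0"
     static final String DEFAULT_PREV_KEY = "0";

     @GET
     public Response getContainers(
         @DefaultValue(FETCH_ALL) @QueryParam(RECON_QUERY_LIMIT) int limit,
         @DefaultValue(DEFAULT_PREV_KEY) @QueryParam(RECON_QUERY_PREVKEY) long prevKey) {
       return Response.ok().build();
     }
   }
   ```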
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269590)
Time Spent: 3h  (was: 2h 50m)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?focusedWorklogId=269589=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269589
 ]

ASF GitHub Bot logged work on HDDS-1721:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:25
Start Date: 28/Jun/19 22:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1034: HDDS-1721 : 
Client Metrics are not being pushed to the configured sin…
URL: https://github.com/apache/hadoop/pull/1034#issuecomment-506894436
 
 
   Hi @avijayanhwx 
   Thanks for the contribution.
   Is there someplace we can document this, so that this information is easy 
to find?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269589)
Time Spent: 0.5h  (was: 20m)

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=269587=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269587
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:23
Start Date: 28/Jun/19 22:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #930: HDDS-1651. 
Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-506894094
 
 
   Thank You @shwetayakkali for the contribution.
   Closing this based on the above discussion.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269587)
Time Spent: 3h 50m  (was: 3h 40m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if that is undefined, fall back 
> to dfs.http.policy.
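> A minimal sketch of the proposed fallback lookup ({{ozone.http.policy}} is 
> the key proposed by this JIRA, not an existing constant):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.http.HttpConfig;
>
> public class OzoneHttpPolicySketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // Resolve the HDFS policy first, defaulting to HTTP_ONLY.
>     String dfsPolicy = conf.get("dfs.http.policy",
>         HttpConfig.Policy.HTTP_ONLY.name());
>     // Prefer the Ozone-specific key; fall back to the HDFS value.
>     String policy = conf.get("ozone.http.policy", dfsPolicy);
>     System.out.println(HttpConfig.Policy.fromString(policy));
>   }
> }
> {code}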



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=269588=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269588
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:23
Start Date: 28/Jun/19 22:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #930: 
HDDS-1651. Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269588)
Time Spent: 4h  (was: 3h 50m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if that is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=269586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269586
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:22
Start Date: 28/Jun/19 22:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #930: 
HDDS-1651. Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r298764410
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -31,6 +31,7 @@
 import java.util.Optional;
 import java.util.TimeZone;
 
+import org.apache.hadoop.HadoopIllegalArgumentException;
 
 Review comment:
   Thank You @eyanghwx for the confirmation.
   I will close this as Won't fix.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269586)
Time Spent: 3h 40m  (was: 3.5h)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if that is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1651) Create a http.policy config for Ozone

2019-06-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1651.
--
Resolution: Won't Fix

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if that is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=269584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269584
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:20
Start Date: 28/Jun/19 22:20
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1033: HDDS-1391 : Add 
ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033#issuecomment-506893550
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269584)
Time Spent: 20m  (was: 10m)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Added an RPC end point to serve the set of updates in OM RocksDB from a given 
> sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
> will keep their aggregate data up to date. 
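
For reference, RocksDB already exposes the primitive such an endpoint needs. A minimal sketch, assuming direct access to the OM's RocksDB handle and that the WAL still retains the requested range; the class and method names are illustrative, not the actual OM API:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TransactionLogIterator;

public class OmDeltaUpdatesSketch {

  /** Collect serialized write batches with sequence number >= sequenceNumber. */
  public static List<byte[]> getUpdatesSince(RocksDB db, long sequenceNumber)
      throws RocksDBException {
    List<byte[]> batches = new ArrayList<>();
    // getUpdatesSince only works while the WAL still retains the range.
    try (TransactionLogIterator itr = db.getUpdatesSince(sequenceNumber)) {
      while (itr.isValid()) {
        TransactionLogIterator.BatchResult result = itr.getBatch();
        batches.add(result.writeBatch().data());
        itr.next();
      }
    }
    return batches;
  }
}
{code}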



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?focusedWorklogId=269583=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269583
 ]

ASF GitHub Bot logged work on HDDS-1721:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:20
Start Date: 28/Jun/19 22:20
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1034: HDDS-1721 : 
Client Metrics are not being pushed to the configured sin…
URL: https://github.com/apache/hadoop/pull/1034#issuecomment-506893508
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269583)
Time Spent: 20m  (was: 10m)

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?focusedWorklogId=269582=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269582
 ]

ASF GitHub Bot logged work on HDDS-1721:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:19
Start Date: 28/Jun/19 22:19
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1034: HDDS-1721 
: Client Metrics are not being pushed to the configured sin…
URL: https://github.com/apache/hadoop/pull/1034
 
 
   …k while running a hadoop command to write to Ozone.
   
   The metrics system needs to be initialized for the sink configs to be picked up. 
   
   Manually tested the change. After this change, if 
hadoop-metrics2.properties contains sink properties like the following, client 
metrics will be pushed to the sink.
   
   xceiverclientmetrics.period=10
   xceiverclientmetrics.sink..plugin.urls=/path/to/JAR
   xceiverclientmetrics.sink..interval=10
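
A minimal sketch of the initialization step described above, assuming the fix is simply that the client never initialized the metrics2 system; the class name is illustrative, and the prefix must match the one used in hadoop-metrics2.properties (the empty sink name between the dots above is elided in the archive):

{code:java}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class ClientMetricsBootstrapSketch {
  public static MetricsSystem init() {
    // Initializing the metrics2 system makes it read hadoop-metrics2.properties
    // and start the configured sinks; without this call, registered sources
    // are never pushed anywhere.
    return DefaultMetricsSystem.initialize("XceiverClientMetrics");
  }
}
{code}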
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269582)
Time Spent: 10m
Remaining Estimate: 0h

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1721:
-
Labels: pull-request-available  (was: )

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14619) chmod changes the mask when ACL is enabled

2019-06-28 Thread Istvan Fajth (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875262#comment-16875262
 ] 

Istvan Fajth commented on HDFS-14619:
-

Based on the description and the explanation, and after further review, I 
think this is not a problem. [~smeng], if you agree, we may close this one 
as well; I have already closed the related issue.

> chmod changes the mask when ACL is enabled
> --
>
> Key: HDFS-14619
> URL: https://issues.apache.org/jira/browse/HDFS-14619
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Priority: Major
>
> When setting a directory's permission with HDFS shell chmod, it changes the 
> ACL mask instead of the permission bits:
> {code:bash}
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 777 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx
> group::rwx
> mask::rwx
> other::rwx
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u hdfs hdfs dfs -chmod 755 /user/hive/warehouse/exttablename/key=1/
> $ sudo -u impala hdfs dfs -getfacl /user/hive/warehouse/exttablename/key=1/
> # file: /user/hive/warehouse/exttablename/key=1
> # owner: hive
> # group: hive
> user::rwx
> user:impala:rwx   #effective:r-x
> group::rwx#effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:impala:rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> $ sudo -u impala hdfs dfs -touch /user/hive/warehouse/exttablename/key=1/file
> touch: Permission denied: user=impala, access=WRITE, 
> inode="/user/hive/warehouse/exttablename/key=1/file":hive:hive:drwxr-xr-x
> {code}
> The cluster has dfs.namenode.acls.enabled=true and 
> dfs.namenode.posix.acl.inheritance.enabled=true.
> As far as I understand, the chmod should change the permission bits instead 
> of the ACL mask. CMIIW
> Might be related to HDFS-14517. [~pifta]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14517) Display bug in permissions when ACL mask is defined

2019-06-28 Thread Istvan Fajth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth resolved HDFS-14517.
-
Resolution: Not A Problem

> Display bug in permissions when ACL mask is defined
> ---
>
> Key: HDFS-14517
> URL: https://issues.apache.org/jira/browse/HDFS-14517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Tested on latest CDH integration, and CDH5 as well with 
> the same result.
>Reporter: Istvan Fajth
>Priority: Minor
>
> When ACLs are enabled on a folder, the following sequence of commands provides 
> the following result:
>  
> {{$ hdfs dfs -mkdir /tmp/acl
> $ hdfs dfs -ls /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr-xr-x   - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -getfacl /tmp/acl
> # file: /tmp/acl
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::r-x
> other::r-x
> $ hdfs dfs -setfacl -m mask::rwx /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxrwxr-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> drwx-wx-wx   - hive   supergroup          0 2019-05-27 23:48 /tmp/hive
> drwxrwxrwt   - mapred hadoop              0 2019-05-28 01:32 /tmp/logs
> $ hdfs dfs -setfacl -m mask::r-- /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr--r-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -setfacl -m mask::r-x /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr-xr-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -getfacl /tmp/acl
> # file: /tmp/acl
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::r-x
> mask::r-x
> other::r-x}}
>  
> So the group permission representation changes with the defined ACL mask 
> instead of the group ACL or, perhaps better, the effective group ACL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14517) Display bug in permissions when ACL mask is defined

2019-06-28 Thread Istvan Fajth (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875261#comment-16875261
 ] 

Istvan Fajth commented on HDFS-14517:
-

This is pretty much misleading in the following scenario:

 
{code:java}
$ hdfs groups systest
 systest : systest testacl
$ klist
 Ticket cache: FILE:/tmp/krb5cc_0
 Default principal: syst...@vpc.cloudera.com
Valid starting       Expires              Service principal
06/28/2019 14:55:59  06/28/2019 15:20:59  krbtgt/vpc.cloudera@vpc.cloudera.com
        renew until 06/28/2019 16:25:59


$ hdfs dfs -ls /tmp2
 Found 1 items
 drwxrwxr-x+ - hdfs testacl 0 2019-06-28 14:34 /tmp2/testacl
$ hdfs dfs -touchz /tmp2/testacl/file1
 touchz: Permission denied: user=systest, access=WRITE, 
inode="/tmp2/testacl":hdfs:testacl:drwxrwxr-x
$ hdfs dfs -getfacl /tmp2/testacl
# file: /tmp2/testacl
# owner: hdfs
# group: testacl
user::rwx
group::r-x
mask::rwx
other::r-x
{code}
So here we have a mask of rwx and a group permission of r-x. The ls output 
displays the rwx from the mask as the group permission, while the effective 
permission in the group ACL correctly prevents the write.

I have validated that it works the same way on a Linux (CentOS) system as 
well, so this is not a problem at all, and we comply with POSIX here properly.

I guess I am closing this ticket as not a problem.

> Display bug in permissions when ACL mask is defined
> ---
>
> Key: HDFS-14517
> URL: https://issues.apache.org/jira/browse/HDFS-14517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Tested on latest CDH integration, and CDH5 as well with 
> the same result.
>Reporter: Istvan Fajth
>Priority: Minor
>
> When ACLs are enabled on a folder, the following sequence of commands provides 
> the following result:
>  
> {{$ hdfs dfs -mkdir /tmp/acl
> $ hdfs dfs -ls /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr-xr-x   - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -getfacl /tmp/acl
> # file: /tmp/acl
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::r-x
> other::r-x
> $ hdfs dfs -setfacl -m mask::rwx /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxrwxr-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> drwx-wx-wx   - hive   supergroup          0 2019-05-27 23:48 /tmp/hive
> drwxrwxrwt   - mapred hadoop              0 2019-05-28 01:32 /tmp/logs
> $ hdfs dfs -setfacl -m mask::r-- /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr--r-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -setfacl -m mask::r-x /tmp/acl
> $ hdfs dfs -ls /tmp
> Found 1 items
> drwxr-xr-x+  - hdfs   supergroup          0 2019-05-28 11:48 /tmp/acl
> $ hdfs dfs -getfacl /tmp/acl
> # file: /tmp/acl
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::r-x
> mask::r-x
> other::r-x}}
>  
> So the group permission representation changes with the defined ACL mask 
> instead of the group ACL or, perhaps better, the effective group ACL.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-06-28 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1391:

Description: 
Added an RPC end point to serve the set of updates in OM RocksDB from a given 
sequence number.
This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
will keep their aggregate data up to date. 

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Added an RPC end point to serve the set of updates in OM RocksDB from a given 
> sequence number.
> This will be used by Recon (HDDS-1105) to push the data to all the tasks that 
> will keep their aggregate data up to date. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1391:
-
Labels: pull-request-available  (was: )

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?focusedWorklogId=269568=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269568
 ]

ASF GitHub Bot logged work on HDDS-1391:


Author: ASF GitHub Bot
Created on: 28/Jun/19 22:08
Start Date: 28/Jun/19 22:08
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1033: HDDS-1391 
: Add ability in OM to serve delta updates through an API.
URL: https://github.com/apache/hadoop/pull/1033
 
 
   Added an RPC end point to serve the set of updates in OM RocksDB from a 
given sequence number.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269568)
Time Spent: 10m
Remaining Estimate: 0h

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875254#comment-16875254
 ] 

Hadoop QA commented on HDFS-14610:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973204/addingSynchronization.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ada8d067ff3 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f02b0e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27109/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27109/testReport/ |
| Max. process+thread count | 3492 (vs. ulimit of 

[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Paul Ward (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875251#comment-16875251
 ] 

Paul Ward commented on HDFS-14618:
--

Hi Anu,

Thank you for assigning me this.

I am not familiar enough with Hadoop internals to debug those tests.

However, note that the patch grabs the locks in the same order as here:

[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L257-L267]

I.e., this patch does *not* introduce a deadlock.

Beyond that, I don't know why this patch would cause anything to fail.

Can you please take a look?

Thanks

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Attachments: race.patch
>
>
> I submitted a CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (trying to be) protected by synchronized using 
> {{pendingReconstructions}} --- but this cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion between 
> the locations.
> I.e., two code locations, one synchronized on {{pendingReconstructions}} and 
> the other on {{timedOutItems}}, can still execute concurrently.
> This CR adds the synchronized on {{timedOutItems}}.
> Note that this CR keeps the synchronized on {{pendingReconstructions}}, which 
> is needed for a different purpose (protect {{pendingReconstructions}})
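
A stand-alone sketch of the race and the fix, with simplified stand-in fields (the real code is in PendingReconstructionBlocks, linked above); note that the nesting order matches the linked lines, so no deadlock is introduced:

{code:java}
import java.util.ArrayList;
import java.util.List;

class PendingBlocksSketch {
  private final Object pendingReconstructions = new Object();
  private final List<String> timedOutItems = new ArrayList<>();

  /** Path that previously held only the pendingReconstructions lock. */
  void clearQueues() {
    synchronized (pendingReconstructions) {
      // ... clear pendingReconstructions state here ...
      synchronized (timedOutItems) { // the lock added by the patch
        timedOutItems.clear();
      }
    }
  }

  /** Other accessors already synchronize on timedOutItems. */
  void addTimedOutItem(String block) {
    synchronized (timedOutItems) {
      timedOutItems.add(block);
    }
  }
}
{code}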



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=269559=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269559
 ]

ASF GitHub Bot logged work on HDDS-1685:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:41
Start Date: 28/Jun/19 21:41
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #987: HDDS-1685. Recon: 
Add support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-506885697
 
 
   +1 LGTM
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269559)
Time Spent: 2h 50m  (was: 2h 40m)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875243#comment-16875243
 ] 

Hadoop QA commented on HDFS-14618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973205/race.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 16e47c662468 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f02b0e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27110/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27110/testReport/ |
| Max. process+thread count | 4865 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Updated] (HDDS-1723) Create new OzoneManagerLock class

2019-06-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1723:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HDDS-1672

> Create new OzoneManagerLock class
> -
>
> Key: HDDS-1723
> URL: https://issues.apache.org/jira/browse/HDDS-1723
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> This Jira is to use bit manipulation instead of a hashmap in the OzoneManager 
> lock logic. This Jira also follows the locking order based on the document 
> attached to the HDDS-1672 Jira.
> This Jira is created based on [~anu] comment during review of HDDS-1672.
> Not a suggestion for this patch. But more of a question, should we just 
> maintain a bitset here, and just flip that bit up and down to see if the lock 
> is held. Or we can just maintain 32 bit integer, and we can easily find if a 
> lock is held by Xoring with the correct mask. I feel that might be super 
> efficient. [@nandakumar131|https://github.com/nandakumar131] . But as I said 
> let us not do that in this patch.
>  
> This Jira will add the new class; integration of this class into the code 
> will be done in a new Jira. 
> Cleanup of the old code will also be done in a new Jira.
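
A rough sketch of the bit-manipulation idea from the quoted comment; the resource ordinals and class name are illustrative, not the real OzoneManagerLock layout:

{code:java}
final class LockBitsSketch {
  // One bit per resource type; ordinals here are illustrative only.
  static final short USER_LOCK   = 1;       // 1 << 0
  static final short VOLUME_LOCK = 1 << 1;
  static final short BUCKET_LOCK = 1 << 2;

  private short held; // bitmask of locks currently held by this thread

  boolean isHeld(short lock) {
    return (held & lock) != 0; // test the bit instead of a hashmap lookup
  }

  void markAcquired(short lock) {
    // A real implementation would first check lock ordering, e.g. reject
    // taking a lower-level lock while a higher-level one is held.
    held |= lock;   // flip the bit up
  }

  void markReleased(short lock) {
    held &= ~lock;  // flip the bit down
  }
}
{code}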



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1727) Use generation of resourceName for locks in OzoneManagerLock

2019-06-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1727:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-1672

> Use generation of resourceName for locks in OzoneManagerLock
> 
>
> Key: HDDS-1727
> URL: https://issues.apache.org/jira/browse/HDDS-1727
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall generate the resource name from the actual resource 
> names, such as volume/bucket/user/key, inside OzoneManagerLock. In this way, 
> users of these locking APIs no longer need to call the additional 
> generateResourceName API in OzoneManagerLockUtil. This also reduces code 
> when acquiring locks in OM operations.
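
An illustrative before/after of the API change described above; the interfaces and method shapes below are assumptions for illustration, not the actual OzoneManagerLock signatures:

{code:java}
class LockApiSketch {
  interface OldLock { void acquireLock(String resourceName); }
  interface NewLock { void acquireBucketLock(String volume, String bucket); }

  // What callers previously had to build by hand via OzoneManagerLockUtil.
  static String generateResourceName(String volume, String bucket) {
    return volume + "/" + bucket;
  }

  static void before(OldLock lock, String volume, String bucket) {
    lock.acquireLock(generateResourceName(volume, bucket));
  }

  static void after(NewLock lock, String volume, String bucket) {
    // The lock now derives the resource name internally from its parts.
    lock.acquireBucketLock(volume, bucket);
  }
}
{code}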



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-06-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875242#comment-16875242
 ] 

Hadoop QA commented on HDFS-14618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973203/race.diff |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9e9e79c0fa0 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f02b0e1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | 

[jira] [Updated] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1730:
-
Target Version/s: 0.5.0

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1730
> URL: https://issues.apache.org/jira/browse/HDDS-1730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createDirectory request according to the HA 
> model, and use cache and double buffer.
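
For context, a generic sketch of the double-buffer pattern referenced here, assuming a single flusher thread; the real OzoneManagerDoubleBuffer batches transactions into RocksDB and is considerably more involved:

{code:java}
import java.util.ArrayList;
import java.util.List;

class DoubleBufferSketch<T> {
  private List<T> currentBuffer = new ArrayList<>();
  private List<T> readyBuffer = new ArrayList<>();

  /** Called by request handlers after the in-memory cache is updated. */
  synchronized void add(T txn) {
    currentBuffer.add(txn);
    notifyAll(); // wake the flusher thread
  }

  /** Called in a loop by the single, dedicated flusher thread. */
  void flushOnce() throws InterruptedException {
    synchronized (this) {
      while (currentBuffer.isEmpty()) {
        wait();
      }
      // Swap buffers so handlers can keep appending while we flush.
      List<T> tmp = readyBuffer;
      readyBuffer = currentBuffer;
      currentBuffer = tmp;
    }
    commitBatch(readyBuffer); // one batched DB write in the real code
    readyBuffer.clear();
  }

  private void commitBatch(List<T> batch) {
    // Apply the whole batch to the DB in a single write.
  }
}
{code}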



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1672) Improve locking in OzoneManager

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1672?focusedWorklogId=269535=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269535
 ]

ASF GitHub Bot logged work on HDDS-1672:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:06
Start Date: 28/Jun/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1016: HDDS-1672. 
Improve locking in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1016#issuecomment-506876952
 
 
   Thank You @anuengineer for the review.
   I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269535)
Time Spent: 10h 40m  (was: 10.5h)

> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, we can solve 
> the acquire/release/reacquire problem in volume requests, and a few 
> bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an acquire/release/reacquire issue in the volume 
> request implementation.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release Volume lock. (We are releasing the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # release volume lock
>  # release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock low weight. 
>  
> In this way, the above deleteVolume request will change as below
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # release owner lock
>  # release volume lock. 
> Same issue is seen with SetOwner for Volume request also.
> During HDDS-1620 [~arp] brought up this issue. 
> I am proposing the above solution to solve this issue. Any other 
> idea/suggestions are welcome.
> This also resolves a bug in setOwner for Volume request.
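
A sketch of the reworked deleteVolume ordering described above, using plain ReentrantLocks as stand-ins for the OM's per-resource locks:

{code:java}
import java.util.concurrent.locks.ReentrantLock;

class DeleteVolumeOrderingSketch {
  // Stand-ins for the OM resource locks; volume is now the lower-weight
  // lock, so the user (owner) lock may be taken while holding it.
  private final ReentrantLock volumeLock = new ReentrantLock();
  private final ReentrantLock userLock = new ReentrantLock();

  void deleteVolume(String volume) {
    volumeLock.lock();                        // 1. acquire volume lock
    try {
      String owner = readOwnerFromDb(volume); // 2-3. read volume info, get owner
      userLock.lock();                        // 4. acquire owner lock
      try {
        doDelete(volume, owner);              // 5. delete logic
      } finally {
        userLock.unlock();                    // 6. release owner lock
      }
    } finally {
      volumeLock.unlock();                    // 7. release volume lock
    }
  }

  private String readOwnerFromDb(String volume) { return "owner-of-" + volume; }
  private void doDelete(String volume, String owner) { /* DB delete here */ }
}
{code}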



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1672) Improve locking in OzoneManager

2019-06-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1672:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, we can solve 
> the acquire/release/reacquire problem in volume requests, and a few 
> bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an acquire/release/reacquire issue in the volume 
> request implementation.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release Volume lock. (We are releasing the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # release volume lock
>  # release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock low weight. 
>  
> In this way, the above deleteVolume request will change as below
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # release owner lock
>  # release volume lock. 
> Same issue is seen with SetOwner for Volume request also.
> During HDDS-1620 [~arp] brought up this issue. 
> I am proposing the above solution to solve this issue. Any other 
> idea/suggestions are welcome.
> This also resolves a bug in setOwner for Volume request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269534=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269534
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:06
Start Date: 28/Jun/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506876768
 
 
   Thank You @anuengineer for the review.
   I have committed this to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269534)
Time Spent: 50m  (was: 40m)

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1730
> URL: https://issues.apache.org/jira/browse/HDDS-1730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createDirectory request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1672) Improve locking in OzoneManager

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1672?focusedWorklogId=269531=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269531
 ]

ASF GitHub Bot logged work on HDDS-1672:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:05
Start Date: 28/Jun/19 21:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1016: HDDS-1672. 
Improve locking in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1016#issuecomment-506876653
 
 
   Test failures are not related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269531)
Time Spent: 10h 20m  (was: 10h 10m)

> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, we can solve 
> the acquire/release/reacquire problem in volume requests, and a few 
> bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an acquire/release/reacquire issue in the volume 
> request implementation.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release Volume lock. (We are releasing the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # release volume lock
>  # release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock low weight. 
>  
> In this way, the above deleteVolume request will change as below
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # release owner lock
>  # release volume lock. 
> Same issue is seen with SetOwner for Volume request also.
> During HDDS-1620 [~arp] brought up this issue. 
> I am proposing the above solution to solve this issue. Any other 
> idea/suggestions are welcome.
> This also resolves a bug in setOwner for Volume request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1730) Implement File CreateDirectory Request to use Cache and DoubleBuffer

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1730?focusedWorklogId=269533=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269533
 ]

ASF GitHub Bot logged work on HDDS-1730:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:05
Start Date: 28/Jun/19 21:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1026: HDDS-1730. 
Implement File CreateDirectory Request to use Cache and Do…
URL: https://github.com/apache/hadoop/pull/1026#issuecomment-506876768
 
 
   Thank You @anuengineer for the review.
   I have committed this to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269533)
Time Spent: 40m  (was: 0.5h)

> Implement File CreateDirectory Request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1730
> URL: https://issues.apache.org/jira/browse/HDDS-1730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement createDirectory request according to the HA 
> model, and use cache and double buffer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1672) Improve locking in OzoneManager

2019-06-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875228#comment-16875228
 ] 

Hudson commented on HDDS-1672:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16837 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16837/])
HDDS-1672. Improve locking in OzoneManager. (#1016) (github: rev 
49c5e8ac249981b533763d1523e72872748e3f79)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (delete) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerLock.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerLock.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/lock/TestOzoneManagerLock.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLockUtil.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java


> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, in volume 
> requests, we can solve the acquire/release/reacquire problem, as well as a 
> few bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an issue in the Volume request implementation: we must 
> acquire/release/reacquire the volume lock.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release volume lock. (We release the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # Release volume lock
>  # Release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock low weight. 
>  
> In this way, the above deleteVolume request will change as below:
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # Release owner lock
>  # Release volume lock

[jira] [Work logged] (HDDS-1672) Improve locking in OzoneManager

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1672?focusedWorklogId=269532&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269532
 ]

ASF GitHub Bot logged work on HDDS-1672:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:05
Start Date: 28/Jun/19 21:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1016: 
HDDS-1672. Improve locking in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1016
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269532)
Time Spent: 10.5h  (was: 10h 20m)

> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, in volume 
> requests, we can solve the acquire/release/reacquire problem, as well as a 
> few bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an issue in the Volume request implementation: we must 
> acquire/release/reacquire the volume lock.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release volume lock. (We release the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # Release volume lock
>  # Release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock low weight. 
>  
> In this way, the above deleteVolume request will change as below:
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # Release owner lock
>  # Release volume lock. 
> The same issue is seen with the SetOwner for Volume request as well.
> During HDDS-1620, [~arp] brought up this issue. 
> I am proposing the above solution to solve this issue. Any other 
> ideas/suggestions are welcome.
> This also resolves a bug in the setOwner Volume request.
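
To make the proposed ordering concrete, below is a minimal sketch of 
weight-ordered locking: each resource type gets a fixed weight, and a lock may 
only be acquired while every lock already held is strictly lighter. The names 
here (OrderedLockManager, Level) are illustrative assumptions and do not 
reflect the actual OzoneManagerLock code; the sketch also assumes strictly 
nested acquire/release with at most one lock per level.

{code:java}
// Hedged sketch of weight-ordered locking; hypothetical names throughout.
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockManager {
  // Lower ordinal = lower weight; a low-weight volume lock may be held
  // while acquiring the heavier user (owner) lock.
  public enum Level { VOLUME, USER }

  private final ReentrantLock volumeLock = new ReentrantLock();
  private final ReentrantLock userLock = new ReentrantLock();

  // Highest weight currently held by this thread (-1 = nothing held).
  private final ThreadLocal<Integer> highestHeld =
      ThreadLocal.withInitial(() -> -1);

  public void acquire(Level level) {
    // Only strictly heavier locks may be taken, which rules out the old
    // release/reacquire dance by construction.
    if (level.ordinal() <= highestHeld.get()) {
      throw new IllegalStateException("Lock ordering violation: cannot take "
          + level + " while holding an equal or heavier lock");
    }
    lockFor(level).lock();
    highestHeld.set(level.ordinal());
  }

  public void release(Level level) {
    lockFor(level).unlock();
    // Assumes strictly nested release order (heaviest lock released first).
    highestHeld.set(level.ordinal() - 1);
  }

  private ReentrantLock lockFor(Level level) {
    return level == Level.VOLUME ? volumeLock : userLock;
  }
}
{code}

Under this sketch, the revised deleteVolume flow is simply acquire(VOLUME), 
read volume info, acquire(USER), delete, release(USER), release(VOLUME), with 
no intermediate release and reacquire of the volume lock.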



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?focusedWorklogId=269529&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-269529
 ]

ASF GitHub Bot logged work on HDDS-1733:


Author: ASF GitHub Bot
Created on: 28/Jun/19 21:01
Start Date: 28/Jun/19 21:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1031: HDDS-1733. Fix 
Ozone documentation
URL: https://github.com/apache/hadoop/pull/1031#issuecomment-506875643
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 487 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1277 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 435 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 2650 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1031/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1031 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux dcd3b984503a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f02b0e1 |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1031/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 269529)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image, and other issues in the Ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


