[jira] [Commented] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-06-13 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863704#comment-16863704
 ] 

Lisheng Sun commented on HADOOP-16112:
--

[~jzhuge] [~hexiaoqiao] , Thanks for your comments.

My users have hit this issue more than once. When the existing directory has 
already been deleted by another thread, the unexpected trash location is still 
generated; in that situation there is no need to append a timestamp before 
mkdir, so the result is not what users expect.

If this is confirmed as an issue, I will fix the unit test so that it 
reproduces the race condition and attach a patch. Please correct me if I am 
wrong. Thanks.

 

 

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // CASE: another thread deletes existsFilePath here; the result then
>   // does not meet expectations. For example, given
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b
>   // when deleting /user/u_sunlisheng/b/a, if existsFilePath is deleted
>   // the result becomes
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}
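
For clarity, the change the description argues for can be sketched roughly as 
follows (names taken from the snippet above; an illustration only, not the 
attached patch):

{code:java}
// Hypothetical sketch of the guard described above (not the attached
// patch): only rewrite baseTrashPath while the conflicting path still
// exists; if another thread has already deleted existsFilePath, keep
// baseTrashPath unchanged and simply retry the mkdirs.
if (fs.exists(existsFilePath)) {
  baseTrashPath = new Path(baseTrashPath.toString().replace(
      existsFilePath.toString(), existsFilePath.toString() + Time.now()));
  trashPath = new Path(baseTrashPath, trashPath.getName());
}
--i; // retry, ignore current failure
continue;
{code}

Note this narrows the race window rather than closing it: a competing delete 
can still land between the exists() check and the rewrite, which is why the 
retry loop is still needed.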






[GitHub] [hadoop] dineshchitlangia opened a new pull request #968: HADOOP-16373. Fix typo in FileSystemShell#test documentation

2019-06-13 Thread GitBox
dineshchitlangia opened a new pull request #968: HADOOP-16373. Fix typo in 
FileSystemShell#test documentation
URL: https://github.com/apache/hadoop/pull/968
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a 
http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r293661985
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -426,6 +430,20 @@ public static long getUtcTime() {
 return Calendar.getInstance(UTC_ZONE).getTimeInMillis();
   }
 
+  public static Policy getHttpPolicy(Configuration conf) {
+String policyStr = conf.get("ozone.http.policy", 
OzoneConfigKeys.OZONE_HTTP_POLICY);
 
 Review comment:
   In this line, the second parameter of get() should be the default value 
for the property.





[jira] [Created] (HADOOP-16373) Fix typo in FileSystemShell#test documentation

2019-06-13 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HADOOP-16373:
--

 Summary: Fix typo in FileSystemShell#test documentation
 Key: HADOOP-16373
 URL: https://issues.apache.org/jira/browse/HADOOP-16373
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.1.2, 2.9.2, 3.2.0, 3.0.0, 2.7.1
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Typo in the description of option -d:
https://hadoop.apache.org/docs/r3.1.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#test
{code:java}
test
Usage: hadoop fs -test -[defsz] URI

Options:

-d: f the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-z: if the file is zero length, return 0.
{code}







[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a 
http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r293661655
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -426,6 +430,20 @@ public static long getUtcTime() {
 return Calendar.getInstance(UTC_ZONE).getTimeInMillis();
   }
 
+  public static Policy getHttpPolicy(Configuration conf) {
+String policyStr = conf.get("ozone.http.policy", 
OzoneConfigKeys.OZONE_HTTP_POLICY);
+if(policyStr == null || policyStr.length() == 0) {
+  policyStr = conf.get("dfs.http.policy", 
DFSConfigKeys.DFS_HTTP_POLICY_DEFAULT);
+}
 
 Review comment:
   If HTTP_ONLY is used as the default, we can call
   conf.get(OzoneConfigKeys.OZONE_HTTP_POLICY, DFSConfigKeys.DFS_HTTP_POLICY_DEFAULT);
   then we don't need the null checks, and the logic becomes simpler.
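
   A rough sketch of the simplified method this suggests (assuming
   OzoneConfigKeys.OZONE_HTTP_POLICY is the key-name constant and the DFS
   default is acceptable; illustrative only, not the merged change):

   ```java
   // Hypothetical simplification from the review comment above: let
   // Configuration#get supply the default, so no null/empty checks are
   // needed before parsing the policy string.
   public static Policy getHttpPolicy(Configuration conf) {
     String policyStr = conf.get(OzoneConfigKeys.OZONE_HTTP_POLICY,
         DFSConfigKeys.DFS_HTTP_POLICY_DEFAULT);
     Policy policy = Policy.fromString(policyStr);
     if (policy == null) {
       throw new HadoopIllegalArgumentException(
           "Unrecognized value '" + policyStr + "' for ozone.http.policy");
     }
     return policy;
   }
   ```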





[GitHub] [hadoop] hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for 
readFully; add readFully to ByteBufferPositionedReadable
URL: https://github.com/apache/hadoop/pull/963#issuecomment-501970412
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1156 | trunk passed |
   | +1 | compile | 1122 | trunk passed |
   | +1 | checkstyle | 142 | trunk passed |
   | +1 | mvnsite | 245 | trunk passed |
   | +1 | shadedclient | 1143 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | trunk passed |
   | 0 | spotbugs | 28 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 28 | branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 176 | the patch passed |
   | +1 | compile | 1099 | the patch passed |
   | +1 | cc | 1099 | the patch passed |
   | +1 | javac | 1099 | the patch passed |
   | -0 | checkstyle | 138 | root: The patch generated 2 new + 111 unchanged - 
0 fixed = 113 total (was 111) |
   | +1 | mvnsite | 241 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   | 0 | findbugs | 29 | hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 554 | hadoop-common in the patch passed. |
   | +1 | unit | 118 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 6094 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 420 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 14739 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestSafeMode |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/963 |
   | JIRA Issue | HDFS-14564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 0bdc4594a2ce 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e094b3b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/2/testReport/ |
   | Max. process+thread count | 3092 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method

2019-06-13 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863695#comment-16863695
 ] 

Dinesh Chitlangia commented on HADOOP-16372:


Thanks [~bharatviswa] for filing this issue. I have opened PR 967.

> Fix typo in DFSUtil getHttpPolicy method
> 
>
> Key: HADOOP-16372
> URL: https://issues.apache.org/jira/browse/HADOOP-16372
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Trivial
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]
>  
>  






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a 
http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r293660938
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -426,6 +430,20 @@ public static long getUtcTime() {
 return Calendar.getInstance(UTC_ZONE).getTimeInMillis();
   }
 
+  public static Policy getHttpPolicy(Configuration conf) {
+String policyStr = conf.get("ozone.http.policy", 
OzoneConfigKeys.OZONE_HTTP_POLICY);
+if(policyStr == null || policyStr.length() == 0) {
+  policyStr = conf.get("dfs.http.policy", 
DFSConfigKeys.DFS_HTTP_POLICY_DEFAULT);
+}
+Policy policy = Policy.fromString(policyStr);
+if (policy == null) {
+  throw new HadoopIllegalArgumentException("Unrecognized value '" + 
policyStr + "' for " + "dfs.http.policy");
+} else {
+  conf.set("dfs.http.policy", policy.name());
 
 Review comment:
   Why do we need to set it back?





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a 
http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r293660791
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -426,6 +430,20 @@ public static long getUtcTime() {
 return Calendar.getInstance(UTC_ZONE).getTimeInMillis();
   }
 
+  public static Policy getHttpPolicy(Configuration conf) {
+String policyStr = conf.get("ozone.http.policy", 
OzoneConfigKeys.OZONE_HTTP_POLICY);
 
 Review comment:
   Minor NIT: ozone.http.policy -> Use OzoneConfigKeys.OZONE_HTTP_POLICY.





[GitHub] [hadoop] dineshchitlangia opened a new pull request #967: HADOOP-16372. Fix typo in DFSUtil getHttpPolicy method

2019-06-13 Thread GitBox
dineshchitlangia opened a new pull request #967: HADOOP-16372. Fix typo in 
DFSUtil getHttpPolicy method
URL: https://github.com/apache/hadoop/pull/967
 
 
   





[jira] [Commented] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-06-13 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863689#comment-16863689
 ] 

He Xiaoqiao commented on HADOOP-16112:
--

[~jzhuge], Thanks for your comments.
{quote}the new unit test passes without any fix, is it valid? I understand race 
condition is hard to reproduce.{quote}
Right, the new unit test does not actually verify anything, so in my opinion 
it is not a valid unit test.
I would also say this behavior may be interpretable and not an issue. It makes 
sense to me whether the timestamp is appended to the parent or the child path 
before mkdir; either way, I do not think we can guarantee consistency on the 
client side through retries. Please correct me if something is wrong.

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // CASE: another thread deletes existsFilePath here; the result then
>   // does not meet expectations. For example, given
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b
>   // when deleting /user/u_sunlisheng/b/a, if existsFilePath is deleted
>   // the result becomes
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}






[jira] [Assigned] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method

2019-06-13 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HADOOP-16372:
--

Assignee: Dinesh Chitlangia

> Fix typo in DFSUtil getHttpPolicy method
> 
>
> Key: HADOOP-16372
> URL: https://issues.apache.org/jira/browse/HADOOP-16372
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Dinesh Chitlangia
>Priority: Trivial
>
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]
>  
>  






[jira] [Commented] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-06-13 Thread John Zhuge (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863683#comment-16863683
 ] 

John Zhuge commented on HADOOP-16112:
-

[~leosun08], thanks for finding and reporting the issue! I can take a look.

Just to echo [~hexiaoqiao]'s comment: the new unit test passes without any 
fix, so is it valid? I understand a race condition is hard to reproduce.

BTW, have you or your users hit this issue often? What was the impact other 
than being surprised by the unexpected trash location?
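
One way to make such a race deterministic in a unit test is to inject the 
competing delete between the exists() probe and the path rewrite. A rough 
sketch, assuming Mockito and a local FileSystem (illustrative only, not the 
attached patch):

{code:java}
// Hypothetical sketch: wrap the FileSystem in a Mockito spy so the
// "competing thread" delete happens deterministically right after the
// exists() probe inside moveToTrash, instead of relying on timing.
FileSystem rawFs = FileSystem.getLocal(conf);
FileSystem spyFs = Mockito.spy(rawFs);
Mockito.doAnswer(invocation -> {
  Path p = invocation.getArgument(0);
  boolean exists = rawFs.exists(p);
  if (exists && p.toString().contains(".Trash")) {
    rawFs.delete(p, true); // simulate the other thread's delete
  }
  return exists;
}).when(spyFs).exists(Mockito.any(Path.class));
// Driving moveToTrash through spyFs then always exercises the racy branch.
{code}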

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // CASE: another thread deletes existsFilePath here; the result then
>   // does not meet expectations. For example, given
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b
>   // when deleting /user/u_sunlisheng/b/a, if existsFilePath is deleted
>   // the result becomes
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}






[GitHub] [hadoop] hadoop-yetus commented on issue #953: YARN-5727. Support visibility semantic in MapReduce.

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #953: YARN-5727. Support visibility semantic in 
MapReduce.
URL: https://github.com/apache/hadoop/pull/953#issuecomment-501954516
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1126 | trunk passed |
   | +1 | compile | 1140 | trunk passed |
   | +1 | checkstyle | 155 | trunk passed |
   | +1 | mvnsite | 344 | trunk passed |
   | +1 | shadedclient | 1189 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 244 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 524 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for patch |
   | +1 | mvninstall | 232 | the patch passed |
   | +1 | compile | 988 | the patch passed |
   | -1 | javac | 988 | root generated 21 new + 1474 unchanged - 0 fixed = 1495 
total (was 1474) |
   | +1 | checkstyle | 149 | root: The patch generated 0 new + 1010 unchanged - 
17 fixed = 1010 total (was 1027) |
   | +1 | mvnsite | 333 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 638 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 227 | the patch passed |
   | -1 | findbugs | 84 | 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 562 | hadoop-common in the patch passed. |
   | +1 | unit | 1264 | hadoop-yarn-server-nodemanager in the patch passed. |
   | -1 | unit | 335 | hadoop-mapreduce-client-core in the patch failed. |
   | -1 | unit | 68 | hadoop-mapreduce-client-common in the patch failed. |
   | +1 | unit | 584 | hadoop-mapreduce-client-app in the patch passed. |
   | -1 | unit | 3966 | hadoop-mapreduce-client-jobclient in the patch failed. |
   | +1 | unit | 911 | hadoop-gridmix in the patch passed. |
   | -1 | asflicense | 48 | The patch generated 15 ASF License warnings. |
   | | | 15525 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | 
module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
 |
   |  |  Suspicious comparison of Boolean references in 
org.apache.hadoop.mapreduce.Job.appendSharedCacheUploadPolicies(Configuration, 
Map, boolean)  At Job.java:in 
org.apache.hadoop.mapreduce.Job.appendSharedCacheUploadPolicies(Configuration, 
Map, boolean)  At Job.java:[line 1507] |
   |  |  
org.apache.hadoop.mapreduce.Job.appendSharedCacheUploadPolicies(Configuration, 
Map, boolean) concatenates strings using + in a loop  At Job.java:using + in a 
loop  At Job.java:[line 1511] |
   |  |  Should org.apache.hadoop.mapreduce.JobResourceUploader$MRResourceInfo 
be a _static_ inner class?  At JobResourceUploader.java:inner class?  At 
JobResourceUploader.java:[lines 77-145] |
   | Failed junit tests | hadoop.mapreduce.TestJobResourceUploader |
   |   | hadoop.mapreduce.v2.util.TestMRApps |
   |   | hadoop.mapred.TestTextOutputFormat |
   |   | hadoop.mapreduce.lib.input.TestNLineInputFormat |
   |   | hadoop.mapred.TestUserDefinedCounters |
   |   | hadoop.mapred.TestMapProgress |
   |   | hadoop.mapred.TestReduceFetch |
   |   | hadoop.mapreduce.lib.map.TestMultithreadedMapper |
   |   | hadoop.mapreduce.lib.join.TestJoinDatamerge |
   |   | hadoop.mapred.lib.aggregate.TestAggregates |
   |   | hadoop.mapred.TestJobCleanup |
   |   | hadoop.mapred.TestComparators |
   |   | hadoop.fs.TestDFSIO |
   |   | hadoop.mapreduce.lib.chain.TestMapReduceChain |
   |   | hadoop.mapreduce.v2.TestMRJobsWithHistoryService |
   |   | hadoop.mapred.TestKeyValueTextInputFormat |
   |   | hadoop.mapreduce.lib.input.TestCombineFileInputFormat |
   |   | hadoop.mapred.TestLocalMRNotification |
   |   | hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat |
   |   | hadoop.mapred.TestYARNRunner |
   |   | hadoop.mapred.TestFileInputFormatPathFilter |
   |   | hadoop.mapreduce.security.TestMRCredentials |
   |   | hadoop.mapred.TestMultiFileInputFormat |
   |   | hadoop.mapred.TestMRCJCFileOutputCommitter |
   |   | hadoop.mapreduce.v2.TestMROldApiJobs |
   |   | hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat |
   |   | hadoop.mapred.TestMapRed |
   |   | hadoop.mapreduce.lib.input.TestMultipleInputs |
   |   | hadoop.mapred.TestFileOutputFormat |
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio…

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #966: HDDS-1686. Remove check to get from 
openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966#issuecomment-501946046
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 531 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 453 | the patch passed |
   | +1 | compile | 286 | the patch passed |
   | +1 | javac | 286 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 551 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 177 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1503 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 69 | The patch does not generate ASF License warnings. |
   | | | 6790 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/966 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b96aa24b3c9c 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e094b3b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/1/testReport/ |
   | Max. process+thread count | 4322 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-966/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-13 Thread GitBox
bharatviswa504 commented on issue #965: HDDS-1684. OM should create Ratis 
related dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965#issuecomment-501937432
 
 
   Test failures look related to this patch.





[GitHub] [hadoop] hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related 
dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965#issuecomment-501934652
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 634 | trunk passed |
   | +1 | compile | 290 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 917 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 355 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 558 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | compile | 294 | the patch passed |
   | +1 | javac | 294 | the patch passed |
   | +1 | checkstyle | 104 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | the patch passed |
   | +1 | findbugs | 714 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 205 | hadoop-hdds in the patch failed. |
   | -1 | unit | 3284 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 8941 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/965 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5d714c9f13db 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bcfd228 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/1/testReport/ |
   | Max. process+thread count | 4337 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-16369:
--
   Resolution: Fixed
Fix Version/s: 3.1.3
   2.9.3
   3.2.1
   3.3.0
   3.0.4
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.x, branch-2, branch-2.9.

Thanks for the contribution, [~jeagles]. Thanks for your review, [~Jim_Brennan].

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief with the zts confusion in the code base






[jira] [Commented] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863588#comment-16863588
 ] 

Hudson commented on HADOOP-16369:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16742 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16742/])
HADOOP-16369. Fix zstandard shortname misspelled as zts. Contributed by 
(tasanuma: rev 54f9f75a443d7d167a7aa7d04a87e3f5af049887)
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c


> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief with the zts confusion in the code base






[jira] [Commented] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863586#comment-16863586
 ] 

Takanobu Asanuma commented on HADOOP-16369:
---

+1. Will commit it.

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief with the zts confusion in the code base






[jira] [Created] (HADOOP-16372) Fix typo in DFSUtil getHttpPolicy method

2019-06-13 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-16372:
---

 Summary: Fix typo in DFSUtil getHttpPolicy method
 Key: HADOOP-16372
 URL: https://issues.apache.org/jira/browse/HADOOP-16372
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


[https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1479]

 

 






[GitHub] [hadoop] hadoop-yetus commented on issue #964: HDDS-1675. Cleanup Volume Request 2 phase old code.

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #964: HDDS-1675. Cleanup Volume Request 2 phase 
old code.
URL: https://github.com/apache/hadoop/pull/964#issuecomment-501929705
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 547 | trunk passed |
   | +1 | compile | 324 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1042 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 197 | trunk passed |
   | 0 | spotbugs | 351 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 579 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | compile | 299 | the patch passed |
   | +1 | cc | 299 | the patch passed |
   | +1 | javac | 299 | the patch passed |
   | -0 | checkstyle | 58 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 753 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 95 | hadoop-ozone generated 2 new + 9 unchanged - 0 fixed = 
11 total (was 9) |
   | +1 | findbugs | 592 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 194 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1427 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7211 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/964 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9d33cd74d91e 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bcfd228 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/testReport/ |
   | Max. process+thread count | 4454 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-964/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] bharatviswa504 opened a new pull request #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio…

2019-06-13 Thread GitBox
bharatviswa504 opened a new pull request #966: HDDS-1686. Remove check to get 
from openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966
 
 
   …n for Keys.





[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-13 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863558#comment-16863558
 ] 

Greg Senia commented on HADOOP-16350:
-

I found why I made the change in the 3.x code line: in DFSClient.java the 
server defaults are passed in with no way to override them, hence the need 
for the custom property. By fetching serverDefaults you automatically get a 
value from the NN.
{code:java}
/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java

  @Override
  public URI getKeyProviderUri() throws IOException {
    return HdfsKMSUtil.getKeyProviderUri(ugi, namenodeUri,
        getServerDefaults().getKeyProviderUri(), conf);
  }
{code}
Even if you attempt to override it, isHDFSEncryptionEnabled in 
/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 will call the above method and return a URI:
{code:java}
  /**
   * Probe for encryption enabled on this filesystem.
   * @return true if encryption is enabled
   */
  boolean isHDFSEncryptionEnabled() throws IOException {
    return getKeyProviderUri() != null;
  }
{code}
 

Hence the following code changes in:
{code:java}
hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java

  public static URI getKeyProviderUri(UserGroupInformation ugi,
      URI namenodeUri, String keyProviderUriStr, Configuration conf)
      throws IOException {
    URI keyProviderUri = null;
    // Lookup the secret in credentials object for namenodeuri.
    Credentials credentials = ugi.getCredentials();
    Text credsKey = getKeyProviderMapKey(namenodeUri);
    byte[] keyProviderUriBytes = credentials.getSecretKey(credsKey);
    if (keyProviderUriBytes != null) {
      keyProviderUri =
          URI.create(DFSUtilClient.bytes2String(keyProviderUriBytes));
    }
    if (keyProviderUri == null) {
      // NN is old and doesn't report provider, so use conf.
      if (keyProviderUriStr == null) {
        keyProviderUri = KMSUtil.getKeyProviderUri(conf, keyProviderUriKeyName);
      } else if (!keyProviderUriStr.isEmpty()) {
        // Check if KMS traffic to the remote KMS server is allowed.
        // Default is allowed.
        Boolean isRemoteKMSAllowed =
            conf.getBoolean(CommonConfigurationKeysPublic.KMS_CLIENT_ALLOW_REMOTE_KMS,
                CommonConfigurationKeysPublic.KMS_CLIENT_ALLOW_REMOTE_KMS_DEFAULT);
        if (isRemoteKMSAllowed) {
          keyProviderUri = URI.create(keyProviderUriStr);
        }
      }
      if (keyProviderUri != null) {
        credentials.addSecretKey(
            credsKey, DFSUtilClient.string2Bytes(keyProviderUri.toString()));
      }
    }
    return keyProviderUri;
  }
{code}
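
For context, this is how a client would use the proposed property (a sketch; 
the key name comes from this JIRA's patch and is not a released Hadoop 
configuration key):

{code:java}
// Hypothetical usage of the property proposed in this JIRA (not a
// released Hadoop key): disable fetching the remote cluster's KMS URI
// so no kms-dt is requested when distcp-ing unencrypted paths.
Configuration conf = new Configuration();
conf.setBoolean("hadoop.security.kms.client.allow.remote.kms", false);
FileSystem remoteFs = FileSystem.get(URI.create("hdfs://unit"), conf);
{code}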
 

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it 
> will allow this area of code to operate as it did before HADOOP-14104. I can 
> see the value in HADOOP-14104, but the way Hadoop worked before that JIRA 
> should have at least had an option to let the Hadoop/KMS code operate as it 
> did before, by not requesting remote KMSServer URIs, which would then 
> attempt to get a delegation token even when not operating on 
> encrypted zones.
> Error when KMS Server traffic is not allowed between cluster networks per 
> enterprise security standard which cannot be changed they 

[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863555#comment-16863555
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] It seems like there is some redundancy in the logic; is this the same?

{code}
Set<String> defaultInitializers = new LinkedHashSet<>();
if (!initializers.contains(
    ProxyUserAuthenticationFilterInitializer.class.getName())) {
  defaultInitializers.add(
      ProxyUserAuthenticationFilterInitializer.class.getName());
}
defaultInitializers.add(
    TimelineReaderWhitelistAuthorizationFilterInitializer.class.getName());
{code}

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when below settings 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.






[jira] [Comment Edited] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-13 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863543#comment-16863543
 ] 

Greg Senia edited comment on HADOOP-16350 at 6/13/19 11:51 PM:
---

[~szetszwo] the recommended suggestion won't work with the 2.x line of code; 
the custom property is required. I think that is where the confusion is coming 
from: the Hadoop 2.x code is much different from 3.x. I will review the 3.x 
code again.

 

2.x code:
{code:java}
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3670,12 +3670,17 @@
     }
 
     // Query the namenode for the key provider uri.
+    Boolean isRemoteKMSAllowed =
+        conf.getBoolean(CommonConfigurationKeysPublic.KMS_CLIENT_ALLOW_REMOTE_KMS,
+            CommonConfigurationKeysPublic.KMS_CLIENT_ALLOW_REMOTE_KMS_DEFAULT);
+    if (isRemoteKMSAllowed) {
     FsServerDefaults serverDefaults = getServerDefaults();
-    if (serverDefaults.getKeyProviderUri() != null) {
-      if (!serverDefaults.getKeyProviderUri().isEmpty()) {
-        keyProviderUri = URI.create(serverDefaults.getKeyProviderUri());
+      if (serverDefaults.getKeyProviderUri() != null) {
+        if (!serverDefaults.getKeyProviderUri().isEmpty()) {
+          keyProviderUri = URI.create(serverDefaults.getKeyProviderUri());
+        }
+        return keyProviderUri;
       }
-      return keyProviderUri;
     }
 
     // Last thing is to trust its own conf to be backwards compatible.
{code}
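For reference, the patch would presumably also define the two constants used 
above in CommonConfigurationKeysPublic; a minimal sketch (key name and default 
taken from the issue description, not from a committed patch):

{code:java}
// Sketch only: assumed shape of the new constants in
// org.apache.hadoop.fs.CommonConfigurationKeysPublic.
/** Whether the client may use a KMS provider URI served by a remote NN. */
public static final String KMS_CLIENT_ALLOW_REMOTE_KMS =
    "hadoop.security.kms.client.allow.remote.kms";
/** Defaulting to true keeps the HADOOP-14104 behaviour. */
public static final boolean KMS_CLIENT_ALLOW_REMOTE_KMS_DEFAULT = true;
{code}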
 Failure:
{code:java}
[gss2002@ha21t51en ~]$ hadoop distcp -Dhadoop.security.key.provider.path="" \
    -Ddfs.namenode.kerberos.principal.pattern=* \
    -Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit \
    hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt \
    hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt
19/06/13 19:22:58 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt], targetPath=hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt, targetPathExists=true, filtersFile='null', verboseLog=false}
19/06/13 19:22:59 INFO client.AHSProxy: Connecting to Application History server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 19:22:59 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 561611 for gss2002 on ha-hdfs:tech
19/06/13 19:22:59 INFO security.TokenCache: Got dt for hdfs://tech; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 561611 for gss2002)
19/06/13 19:22:59 INFO security.TokenCache: Got dt for hdfs://tech; Kind: kms-dt, Service: ha21t53en.tech.hdp.example.com:9292, Ident: (owner=gss2002, renewer=yarn, realUser=, issueDate=1560468179680, maxDate=1561072979680, sequenceNumber=7787, masterKeyId=92)
19/06/13 19:23:00 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; dirCnt = 0
19/06/13 19:23:00 INFO tools.SimpleCopyListing: Build file listing completed.
19/06/13 19:23:00 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 19:23:01 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 19:23:01 INFO client.AHSProxy: Connecting to Application History server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 19:23:01 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5144031 for gss2002 on ha-hdfs:unit
19/06/13 19:23:01 ERROR tools.DistCp: Exception encountered
java.io.IOException: java.net.NoRouteToHostException: No route to host (Host unreachable)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
    at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
    at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2407)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:140)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:124)
    at

[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-501916727
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 19 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 603 | trunk passed |
   | +1 | compile | 317 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1015 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 355 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 571 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 487 | the patch passed |
   | +1 | compile | 296 | the patch passed |
   | +1 | cc | 296 | the patch passed |
   | +1 | javac | 296 | the patch passed |
   | -0 | checkstyle | 45 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | the patch passed |
   | +1 | findbugs | 626 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 204 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1414 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 62 | The patch does not generate ASF License warnings. |
   | | | 7171 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux efdf2997822b 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bcfd228 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/testReport/ |
   | Max. process+thread count | 4793 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #963: HDFS-14564: Add libhdfs APIs for 
readFully; add readFully to ByteBufferPositionedReadable
URL: https://github.com/apache/hadoop/pull/963#issuecomment-501914054
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 58 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 84 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1248 | trunk passed |
   | +1 | compile | 1067 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 238 | trunk passed |
   | +1 | shadedclient | 1159 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 27 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 27 | branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 165 | the patch passed |
   | +1 | compile | 986 | the patch passed |
   | -1 | cc | 986 | root generated 5 new + 9 unchanged - 0 fixed = 14 total 
(was 9) |
   | +1 | javac | 986 | the patch passed |
   | -0 | checkstyle | 146 | root: The patch generated 2 new + 112 unchanged - 
0 fixed = 114 total (was 112) |
   | +1 | mvnsite | 232 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 750 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | 0 | findbugs | 26 | hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 569 | hadoop-common in the patch passed. |
   | +1 | unit | 132 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 6127 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 420 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 14816 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.tools.TestDFSZKFailoverController |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/963 |
   | JIRA Issue | HDFS-14564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9f332136cdf1 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bcfd228 |
   | Default Java | 1.8.0_212 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/artifact/out/diff-compile-cc-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/testReport/ |
   | Max. process+thread count | 3244 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-963/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-13 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16863543#comment-16863543
 ] 

Greg Senia commented on HADOOP-16350:
-

[~szetszwo] the recommended suggestion won't work; the custom property is 
required. I know you don't want to add it, but a feature that was being used in 
a commercial product is no longer working, so we really need this custom 
property added to allow the HADOOP-14104 behaviour to be reverted. See below: 
with your recommendation, the run fails!

 
{code:java}
[gss2002@ha21t51en ~]$ hadoop distcp -Dhadoop.security.key.provider.path="" \
    -Ddfs.namenode.kerberos.principal.pattern=* \
    -Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit \
    hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt \
    hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt
19/06/13 19:22:58 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt], targetPath=hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt, targetPathExists=true, filtersFile='null', verboseLog=false}
19/06/13 19:22:59 INFO client.AHSProxy: Connecting to Application History server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 19:22:59 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 561611 for gss2002 on ha-hdfs:tech
19/06/13 19:22:59 INFO security.TokenCache: Got dt for hdfs://tech; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:tech, Ident: (HDFS_DELEGATION_TOKEN token 561611 for gss2002)
19/06/13 19:22:59 INFO security.TokenCache: Got dt for hdfs://tech; Kind: kms-dt, Service: ha21t53en.tech.hdp.example.com:9292, Ident: (owner=gss2002, renewer=yarn, realUser=, issueDate=1560468179680, maxDate=1561072979680, sequenceNumber=7787, masterKeyId=92)
19/06/13 19:23:00 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; dirCnt = 0
19/06/13 19:23:00 INFO tools.SimpleCopyListing: Build file listing completed.
19/06/13 19:23:00 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 19:23:01 INFO tools.DistCp: Number of paths in the copy list: 1
19/06/13 19:23:01 INFO client.AHSProxy: Connecting to Application History server at ha21t53mn.tech.hdp.example.com/10.70.33.2:10200
19/06/13 19:23:01 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 5144031 for gss2002 on ha-hdfs:unit
19/06/13 19:23:01 ERROR tools.DistCp: Exception encountered
java.io.IOException: java.net.NoRouteToHostException: No route to host (Host unreachable)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
    at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
    at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2407)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:140)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
    at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:124)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:193)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at

[GitHub] [hadoop] bharatviswa504 merged pull request #961: HDDS-1680. Create missing parent directories during the creation of HddsVolume dirs

2019-06-13 Thread GitBox
bharatviswa504 merged pull request #961: HDDS-1680. Create missing parent 
directories during the creation of HddsVolume dirs
URL: https://github.com/apache/hadoop/pull/961
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #958: HDDS-1677. Auditparser robot test should use a world writable working directory

2019-06-13 Thread GitBox
bharatviswa504 merged pull request #958: HDDS-1677. Auditparser robot test 
should use a world writable working directory
URL: https://github.com/apache/hadoop/pull/958
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #958: HDDS-1677. Auditparser robot test should use a world writable working directory

2019-06-13 Thread GitBox
bharatviswa504 commented on issue #958: HDDS-1677. Auditparser robot test 
should use a world writable working directory
URL: https://github.com/apache/hadoop/pull/958#issuecomment-501909379
 
 
   Thank You @elek for the fix.
   I have committed this to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293606893
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Stop the ratis server so that no new transactions are applied. This
+// can happen if a leader election happens while the state is being
+// re-initialized.
+omRatisServer.stop();
 
 Review comment:
   One risk is that this code path for stop may not be well tested.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293607099
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Stop the ratis server so that no new transactions are applied. This
+// can happen if a leader election happens while the state is being
+// re-initialized.
+omRatisServer.stop();
+
+// Clear the OM Double Buffer so that if there are any pending
+// transactions in the buffer, they are discarded.
+omDoubleBuffer.stop();
 
 Review comment:
   `omDoubleBuffer.stop` interrupts the thread but does not call `join`. So the 
doubleBuffer thread may still be running when the stop call returns.
   
   We should probably fix stop to call `join`.
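   
   For illustration, a minimal sketch of a `stop()` that joins (the `isRunning` 
flag and `daemon` thread are assumed field names, not necessarily the actual 
OzoneManagerDoubleBuffer members):
   
   ```java
   // Sketch only: interrupt the flush thread and wait for it to exit, so no
   // flush can race with the DB swap that happens after stop() returns.
   public void stop() {
     if (isRunning.compareAndSet(true, false)) { // AtomicBoolean isRunning
       daemon.interrupt();                       // wake the thread if blocked
       try {
         daemon.join();                          // wait until run() returns
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();     // restore interrupt status
       }
     }
   }
   ```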


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293607863
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Stop the ratis server so that no new transactions are applied. This
+// can happen if a leader election happens while the state is being
+// re-initialized.
+omRatisServer.stop();
+
+// Clear the OM Double Buffer so that if there are any pending
+// transactions in the buffer, they are discarded.
+omDoubleBuffer.stop();
+
+// Take a backup of the current DB
+File dbFile = metadataManager.getStore().getDbLocation();
+String dbBackupFileName = OzoneConsts.OM_DB_BACKUP_PREFIX +
+lastAppliedIndex + "_" + System.currentTimeMillis();
+File dbBackupFile = new File(dbFile.getParentFile(), dbBackupFileName);
+
+try {
+  Files.move(dbFile.toPath(), dbBackupFile.toPath());
+} catch (IOException e) {
+  LOG.error("Failed to create a backup of the current DB. Aborting " +
+  "snapshot installation.", e);
+  return null;
+}
+
+// Move the downloaded DB checkpoint into the om metadata dir
+Path checkpointPath = omDBcheckpoint.getCheckpointLocation();
+try {
+  Files.move(checkpointPath, dbFile.toPath());
+} catch (IOException e) {
+  LOG.error("Failed to move downloaded DB checkpoint {} to metadata " +
+  "directory {}",checkpointPath, dbFile.toPath(), e);
+  return null;
+}
+
+// Reload the OM DB store with the new checkpoint
+try {
+  reloadOMState();
+} catch (IOException e) {
+  LOG.error("Failed to reload OM state with new DB checkpoint.", e);
+  return null;
+}
+
+// TODO: We should only return the snapshotIndex to the leader.
+//  Fixed after RATIS-586
+TermIndex newTermIndex = TermIndex.newTermIndex(0,
+checkpointSnapshotIndex);
+
+return newTermIndex;
+  }
+
+  /**
+   * Re-instantiate MetadataManager with new DB checkpoint.
+   * All the classes which use/ store MetadataManager should also be updated
+   * with the new MetadataManager instance.
+   */
+  private void reloadOMState() throws IOException {
+
+metadataManager = new OmMetadataManagerImpl(configuration);
+
+metadataManager.start(configuration);
+
+// Set metrics and start metrics back ground thread
+metrics.setNumVolumes(metadataManager.countRowsInTable(metadataManager
+.getVolumeTable()));
+metrics.setNumBuckets(metadataManager.countRowsInTable(metadataManager
+.getBucketTable()));
+
+// Delete the omMetrics file if it exists
+Files.deleteIfExists(getMetricsStorageFile().toPath());
+
+// Re-initialize metadataManager dependent implementations
 
 Review comment:
   Can some of this code be shared with startup initialization by moving to a 
common function?


This is an automated 

[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293607289
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Stop the ratis server so that no new transactions are applied. This
+// can happen if a leader election happens while the state is being
+// re-initialized.
+omRatisServer.stop();
+
+// Clear the OM Double Buffer so that if there are any pending
+// transactions in the buffer, they are discarded.
+omDoubleBuffer.stop();
+
+// Take a backup of the current DB
+File dbFile = metadataManager.getStore().getDbLocation();
 
 Review comment:
   This is going to be a directory, correct? Since I assume a DB consists of 
multiple files.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293606799
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
 
 Review comment:
   How do we recover from this situation eventually? Should we retry fetching a 
more recent snapshot?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293607463
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
+} catch (IOException e) {
+  LOG.error("Failed to download checkpoint from OM leader {}", leaderId, 
e);
+  return null;
+}
+
+// Check if current ratis log index is smaller than the downloaded
+// snapshot index. If yes, proceed by stopping the ratis server so that
+// the OM state can be re-initialized. If no, then do not proceed with
+// installSnapshot.
+long lastAppliedIndex = omRatisServer.getStateMachineLastAppliedIndex();
+long checkpointSnapshotIndex = omDBcheckpoint.getRatisSnapshotIndex();
+if (checkpointSnapshotIndex <= lastAppliedIndex) {
+  LOG.error("Failed to install checkpoint from OM leader: {}. The last " +
+  "applied index: {} is greater than or equal to the checkpoint's " +
+  "snapshot index: {}", leaderId, lastAppliedIndex,
+  checkpointSnapshotIndex);
+  return null;
+}
+
+// Stop the ratis server so that no new transactions are applied. This
+// can happen if a leader election happens while the state is being
+// re-initialized.
+omRatisServer.stop();
+
+// Clear the OM Double Buffer so that if there are any pending
+// transactions in the buffer, they are discarded.
+omDoubleBuffer.stop();
+
+// Take a backup of the current DB
+File dbFile = metadataManager.getStore().getDbLocation();
+String dbBackupFileName = OzoneConsts.OM_DB_BACKUP_PREFIX +
+lastAppliedIndex + "_" + System.currentTimeMillis();
+File dbBackupFile = new File(dbFile.getParentFile(), dbBackupFileName);
+
+try {
+  Files.move(dbFile.toPath(), dbBackupFile.toPath());
+} catch (IOException e) {
+  LOG.error("Failed to create a backup of the current DB. Aborting " +
+  "snapshot installation.", e);
+  return null;
+}
+
+// Move the downloaded DB checkpoint into the om metadata dir
+Path checkpointPath = omDBcheckpoint.getCheckpointLocation();
+try {
+  Files.move(checkpointPath, dbFile.toPath());
+} catch (IOException e) {
+  LOG.error("Failed to move downloaded DB checkpoint {} to metadata " +
+  "directory {}",checkpointPath, dbFile.toPath(), e);
+  return null;
+}
+
+// Reload the OM DB store with the new checkpoint
+try {
+  reloadOMState();
+} catch (IOException e) {
+  LOG.error("Failed to reload OM state with new DB checkpoint.", e);
+  return null;
+}
+
+// TODO: We should only return the snapshotIndex to the leader.
 
 Review comment:
   I didn't understand this TODO. Could you clarify a bit more?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #744: HDDS-1400. Convert all OM Key related operations to HA model.

2019-06-13 Thread GitBox
bharatviswa504 commented on issue #744: HDDS-1400. Convert all OM Key related 
operations to HA model.
URL: https://github.com/apache/hadoop/pull/744#issuecomment-501908310
 
 
   Closing this. Code changes according to the new design are handled in 
HDDS-1638.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 closed pull request #744: HDDS-1400. Convert all OM Key related operations to HA model.

2019-06-13 Thread GitBox
bharatviswa504 closed pull request #744: HDDS-1400. Convert all OM Key related 
operations to HA model.
URL: https://github.com/apache/hadoop/pull/744
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru opened a new pull request #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-13 Thread GitBox
hanishakoneru opened a new pull request #965: HDDS-1684. OM should create Ratis 
related dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #964: HDDS-1675. Cleanup Volume Request 2 phase old code.

2019-06-13 Thread GitBox
bharatviswa504 opened a new pull request #964: HDDS-1675. Cleanup Volume 
Request 2 phase old code.
URL: https://github.com/apache/hadoop/pull/964
 
 
   This PR cleans up the old two-phase HA code for Volume requests.
   https://issues.apache.org/jira/browse/HDDS-1379
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r293602190
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 } catch (IOException ioEx) {
 
 Review comment:
   Adding the total count to the response and supporting a "start" query param 
are tracked via https://issues.apache.org/jira/browse/HDDS-1685 and will be 
implemented soon.
   
   This PR only adds "limit" support to the APIs. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mackrorysd commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
mackrorysd commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293599503
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -1134,16 +1147,19 @@ public int run(String[] args, PrintStream out)
   }
   String s3Path = paths.get(0);
   CommandFormat commands = getCommandFormat();
+  URI fsURI = toUri(s3Path);
 
   // check if UNGUARDED_FLAG is passed and use NullMetadataStore in
   // config to avoid side effects like creating the table if not exists
+  Configuration conf0 = getConf();
 
 Review comment:
   Please rename this to unguardedConf or something like that


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293599252
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
+   * If the download checkpoints snapshot index is greater than this OM's
+   * last applied transaction index, then re-initialize the OM state via this
+   * checkpoint. Before re-initializing OM state, the OM Ratis server should
+   * be stopped so that no new transactions can be applied.
+   * @param leaderId peerNodeID of the leader OM
+   * @return If checkpoint is installed, return the corresponding termIndex.
+   * Otherwise, return null.
+   */
+  public TermIndex installSnapshot(String leaderId) {
+if (omSnapshotProvider == null) {
+  LOG.error("OM Snapshot Provider is not configured as there are no peer " 
+
+  "nodes.");
+  return null;
+}
+
+DBCheckpoint omDBcheckpoint;
+try {
+  omDBcheckpoint = omSnapshotProvider.getOzoneManagerDBSnapshot(leaderId);
 
 Review comment:
   One question: currently we are passing `leaderId` = null. So will this call 
to `getOzoneManagerDBSnapshot` fail?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-13 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16863505#comment-16863505
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16350:
--

{code}
hadoop distcp -Dhadoop.security.kms.client.allow.remote.kms=false 
-Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit 
hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt
{code}
[~gss2002], I understand that your patch is working, but it is not a good idea 
to add a new conf. You may already have the feeling that Hadoop has too many confs!

With [my 
suggestion|https://issues.apache.org/jira/browse/HADOOP-16350?focusedCommentId=16862537&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862537],
 the distcp command will look like
- hadoop distcp *-Dhadoop.security.key.provider.path=""* 
-Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=unit 
hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
hdfs://unit/processed/public/opendata/samples/distcp_test/distcp_file2.txt


> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. 
> Many customers were using this as a security feature to prevent 
> TDE/Encryption Zone data from being distcped to remote clusters. But there 
> was still a use case to allow distcp of data residing in folders that are 
> not being encrypted with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contains HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network: data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block guarded by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behaviour of HADOOP-14104, but if set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the way Hadoop worked before that JIRA should 
> at least have had an option to let the Hadoop/KMS code operate as it did 
> before, by not requesting remote KMSServer URIs, which would then attempt to 
> get a delegation token even when not operating on encrypted zones.
> The error below occurs when KMS server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be 
> changed; the request for an exception was denied, so the only solution is a 
> feature that does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO 

[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293598965
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -3122,6 +3136,131 @@ public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
 }
   }
 
+  /**
+   * Download and install latest checkpoint from leader OM.
 
 Review comment:
   Again thanks for the great method javadocs! 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293598569
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -171,6 +168,22 @@ public long takeSnapshot() throws IOException {
 return 0;
   }
 
+  /**
+   * Leader OM has purged entries from its log. To catch up, OM must download
 
 Review comment:
   Thanks for adding descriptive javadocs to methods. It really makes code 
review much easier!





[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293597766
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -171,6 +168,22 @@ public long takeSnapshot() throws IOException {
 return 0;
   }
 
+  /**
+   * Leader OM has purged entries from its log. To catch up, OM must download
+   * the latest checkpoint from the leader OM and install it.
+   * @param firstTermIndexInLog TermIndex of the first append entry available
+   *   in the Leader's log.
+   * @return the last term index included in the installed snapshot.
+   */
+  public CompletableFuture<TermIndex> notifyInstallSnapshotFromLeader(
+  TermIndex firstTermIndexInLog) {
+// TODO: Raft server should send the leaderId
+String leaderId = null;
+CompletableFuture<TermIndex> future = CompletableFuture
+.supplyAsync(() -> ozoneManager.installSnapshot(leaderId));
 
 Review comment:
   We should not execute this in the default ForkJoinPool. That can suffer from 
thread exhaustion/deadlock issues since there are very few threads in the 
default pool.
   
   Instead use the overload of `supplyAsync` that accepts an `Executor`.
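   
   For illustration, a minimal sketch of the suggested pattern (the class, 
   field, and method names here are hypothetical, not from the patch):
   
   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   
   class InstallSnapshotSketch {
     // A dedicated pool keeps a long-running checkpoint download off
     // ForkJoinPool.commonPool(), whose few threads are shared JVM-wide.
     private final ExecutorService installSnapshotExecutor =
         Executors.newSingleThreadExecutor();
   
     CompletableFuture<Long> installSnapshotAsync(Runnable downloadCheckpoint) {
       // The two-argument overload of supplyAsync runs the task on the
       // supplied executor instead of the common ForkJoinPool.
       return CompletableFuture.supplyAsync(() -> {
         downloadCheckpoint.run();
         return 0L; // placeholder for the installed snapshot's term index
       }, installSnapshotExecutor);
     }
   }
   ```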





[GitHub] [hadoop] arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-13 Thread GitBox
arp7 commented on a change in pull request #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#discussion_r293597419
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -171,6 +168,22 @@ public long takeSnapshot() throws IOException {
 return 0;
   }
 
+  /**
+   * Leader OM has purged entries from its log. To catch up, OM must download
+   * the latest checkpoint from the leader OM and install it.
+   * @param firstTermIndexInLog TermIndex of the first append entry available
+   *   in the Leader's log.
+   * @return the last term index included in the installed snapshot.
+   */
+  public CompletableFuture<TermIndex> notifyInstallSnapshotFromLeader(
 
 Review comment:
   Please add the `@Override` annotation.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r293584974
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 } catch (IOException ioEx) {
 
 Review comment:
   If we have support for continuation, the next call would not need to fetch 
limit + 50; it could fetch just the next 50.





[GitHub] [hadoop] bharatviswa504 commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-13 Thread GitBox
bharatviswa504 commented on issue #956: HDDS-1638.  Implement Key Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-501883650
 
 
   Added tests for the classes.





[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-06-13 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863467#comment-16863467
 ] 

Siyao Meng commented on HADOOP-16156:
-

+1 on rev 004. Thanks [~shwetayakkali]!

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch, 
> HADOOP-16156.003.patch, HADOOP-16156.004.patch
>
>







[jira] [Commented] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863456#comment-16863456
 ] 

Jim Brennan commented on HADOOP-16369:
--

I'm +1 on this (non-binding).  Lysdexics untie!


> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief with the zts confusion in the code base






[GitHub] [hadoop] vivekratnavel commented on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
vivekratnavel commented on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-501873419
 
 
   The acceptance and unit test failures are unrelated to this patch.





[GitHub] [hadoop] eyanghwx commented on issue #959: HDDS-1678. Default image name for kubernetes examples should be ozone and not hadoop

2019-06-13 Thread GitBox
eyanghwx commented on issue #959: HDDS-1678. Default image name for kubernetes 
examples should be ozone and not hadoop
URL: https://github.com/apache/hadoop/pull/959#issuecomment-501864211
 
 
   The patch doesn't appear to correct docker image names in k8s yaml files.  
Something wrong in the pull request?





[GitHub] [hadoop] hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-501860256
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 32 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1117 | trunk passed |
   | +1 | compile | 1046 | trunk passed |
   | +1 | checkstyle | 138 | trunk passed |
   | +1 | mvnsite | 125 | trunk passed |
   | +1 | shadedclient | 944 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 97 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 188 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 989 | the patch passed |
   | +1 | javac | 989 | the patch passed |
   | -0 | checkstyle | 143 | root: The patch generated 11 new + 100 unchanged - 
2 fixed = 111 total (was 102) |
   | +1 | mvnsite | 120 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 105 | the patch passed |
   | +1 | findbugs | 189 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 496 | hadoop-common in the patch passed. |
   | +1 | unit | 292 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6959 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 61d3cd8d79bb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bcfd228 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/7/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/7/testReport/ |
   | Max. process+thread count | 1463 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bgaborg commented on issue #952: HADOOP-16729 out of band deletes

2019-06-13 Thread GitBox
bgaborg commented on issue #952: HADOOP-16729 out of band deletes
URL: https://github.com/apache/hadoop/pull/952#issuecomment-501854987
 
 
   My test results:
   
   **local**: error seems unrelated
   ```
   [ERROR] Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
23.849 s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
   [ERROR] 
testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 4.18 s  <<< ERROR!
   org.apache.hadoop.fs.PathIOException: `gabota-versioned-bucket-ireland': 
Cannot delete root path: s3a://gabota-versioned-bucket-ireland/
   at 
org.apache.hadoop.fs.s3a.S3AFileSystem.rejectRootDirectoryDelete(S3AFileSystem.java:2184)
   at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:2109)
   at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:2058)
   at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:116)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
   at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
   at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
   at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
   at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.lang.Thread.run(Thread.java:748)
   ```
   
   **dynamo**: testMRJob failure is known, testDynamoTableTagging failed for me 
the first time. unrelated.
   ```
 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
76.635 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.commit.staging.integration.ITestDirectoryCommitMRJob
   [ERROR] 
testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITestDirectoryCommitMRJob)
  Time elapsed: 45.052 s  <<< ERROR!
   java.io.FileNotFoundException: Path 
s3a://gabota-versioned-bucket-ireland/fork-0003/test/DELAY_LISTING_ME/testMRJob 
is recorded as deleted by S3Guard
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2479)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2450)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:559)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertIsDirectory(AbstractFSContractTestBase.java:327)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitMRJob.testMRJob(AbstractITCommitMRJob.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
 [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 396.307 s <<< FAILURE! - in 

[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8

2019-06-13 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863400#comment-16863400
 ] 

Sahil Takiar commented on HADOOP-16371:
---

I'm thinking we can use most of the changes from HADOOP-16050, but do some 
refactoring so that Wildfly-OpenSSL is an option for ABFS, but not S3A. I think 
it should be okay to disable GCM by default, but there should be an option to 
add it back in (e.g. the S3A default is {{DEFAULT_JSSE_NO_GCM}} and the option 
{{DEFAULT_JSSE}} is just vanilla JSSE with all the default ciphers enabled).

> Option to disable GCM for SSL connections when running on Java 8
> 
>
> Key: HADOOP-16371
> URL: https://issues.apache.org/jira/browse/HADOOP-16371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> This was the original objective of HADOOP-16050. HADOOP-16050 was changed to 
> mimic HADOOP-15669 and added (or attempted to add) support for 
> Wildfly-OpenSSL in S3A.
> Due to the number of issues we have seen with S3A + WildFly OpenSSL (see 
> HADOOP-16346), HADOOP-16050 was reverted.
> As shown in the description of HADOOP-16050, and the analysis done in 
> HADOOP-15669, GCM has major performance issues when running on Java 8. 
> Removing it from the list of available ciphers can drastically improve 
> performance, perhaps not as much as using OpenSSL, but still a considerable 
> amount.
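
A minimal sketch of the cipher-filtering idea described above (an assumption of 
how such a mode could work, not the actual HADOOP-16050 code; only standard 
JSSE APIs are used):

{code:java}
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class NoGcmSslParametersSketch {
  // Returns SSLParameters with every GCM suite removed. On Java 8, GCM is
  // implemented in pure Java and is a known TLS performance bottleneck, so
  // dropping it from the enabled suites can speed up reads considerably.
  public static SSLParameters withoutGcm() throws Exception {
    SSLContext ctx = SSLContext.getDefault();
    SSLParameters params = ctx.getDefaultSSLParameters();
    String[] noGcm = Arrays.stream(params.getCipherSuites())
        .filter(suite -> !suite.contains("GCM"))
        .toArray(String[]::new);
    params.setCipherSuites(noGcm);
    return params;
  }
}
{code}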






[jira] [Created] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8

2019-06-13 Thread Sahil Takiar (JIRA)
Sahil Takiar created HADOOP-16371:
-

 Summary: Option to disable GCM for SSL connections when running on 
Java 8
 Key: HADOOP-16371
 URL: https://issues.apache.org/jira/browse/HADOOP-16371
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Sahil Takiar
Assignee: Sahil Takiar


This was the original objective of HADOOP-16050. HADOOP-16050 was changed to 
mimic HADOOP-15669 and added (or attempted to add) support for Wildfly-OpenSSL 
in S3A.

Due to the number of issues we have seen with S3A + WildFly OpenSSL (see 
HADOOP-16346), HADOOP-16050 was reverted.

As shown in the description of HADOOP-16050, and the analysis done in 
HADOOP-15669, GCM has major performance issues when running on Java 8. Removing 
it from the list of available ciphers can drastically improve performance, 
perhaps not as much as using OpenSSL, but still a considerable amount.






[jira] [Updated] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-16369:
-
Description: A few times in the code base zstd was misspelled as ztsd. zts 
is another library https://github.com/yahoo/athenz/tree/master/clients/java/zts 
and has caused some grief with the zts confusion in the code base  (was: A few 
times in the code base zstd was misspelled as ztsd. zts is another library 
https://github.com/yahoo/athenz/tree/master/clients/java/zts and has caused 
some grief.)

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief with the zts confusion in the code base






[GitHub] [hadoop] sahilTakiar opened a new pull request #963: HDFS-14564: Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-06-13 Thread GitBox
sahilTakiar opened a new pull request #963: HDFS-14564: Add libhdfs APIs for 
readFully; add readFully to ByteBufferPositionedReadable
URL: https://github.com/apache/hadoop/pull/963
 
 
   [HDFS-14564](https://issues.apache.org/jira/browse/HDFS-14564): Add libhdfs 
APIs for readFully; add readFully to ByteBufferPositionedReadable
   
   * Adds `readFully` to `ByteBufferPositionedReadable` and exposes it via 
libhdfs
   * Exposes `PositionedReadable#readFully` via libhdfs
   * Like `hdfsPread` and `hdfsRead`, if the underlying stream supports 
`ByteBuffer` reads, the `ByteBuffer` APIs will be used
   * Added unit tests, and did a bit of javadoc / code cleanup
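   
   As a usage sketch, assuming the new API lands as described (the path is 
   hypothetical, and the underlying stream must support `ByteBuffer` reads):
   
   ```java
   import java.nio.ByteBuffer;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class ReadFullySketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       try (FileSystem fs = FileSystem.get(conf);
            FSDataInputStream in = fs.open(new Path("/tmp/example.dat"))) {
         ByteBuffer buf = ByteBuffer.allocate(4096);
         // Unlike read(position, buffer), readFully blocks until the buffer
         // is filled and fails if EOF is reached first; streams without
         // ByteBuffer support throw UnsupportedOperationException.
         in.readFully(0L, buf);
         buf.flip(); // buf now holds the first 4096 bytes of the file
       }
     }
   }
   ```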





[GitHub] [hadoop] vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
vivekratnavel commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r293530930
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 } catch (IOException ioEx) {
 
 Review comment:
   @bharatviswa504 This PR only supports the limit param and I don't see the need 
for skip param support in the near future. I will explain how the UI will 
consume this API to show containers and keys to the users. The UI will fetch this 
API with an initial limit of 50 (an arbitrary number, or it could be x% of 
totalCount). When the user scrolls to the end of the list, the UI will trigger 
another call to the same API with limit + 50 to get 100 items. In a similar 
fashion, the UI will keep loading results with infinite scroll, similar 
to this demo - https://infinite-scroll.com/demo/full-page/ . Since the 
results are not going to be paginated, there is no need for skip param support 
here in my opinion. 
   
   The only thing missing is the totalCount in the response of these APIs and 
that will be implemented as part of another JIRA if needed in the future. 
Infinite scroll could be implemented without total count but having total count 
in the UI will give better user experience. 
   
   Please let me know if you have any more questions.
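   
   To make the flow concrete, a small client-side sketch of the scroll pattern 
   (Java 11 HttpClient; the host, port, and endpoint base are assumptions for 
   illustration):
   
   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   
   public class InfiniteScrollSketch {
     public static void main(String[] args) throws Exception {
       HttpClient client = HttpClient.newHttpClient();
       // Each "scroll to bottom" re-queries with a larger limit: 50, 100, 150.
       for (int limit = 50; limit <= 150; limit += 50) {
         HttpRequest request = HttpRequest.newBuilder(
             URI.create("http://localhost:9888/api/containers?limit=" + limit))
             .GET()
             .build();
         HttpResponse<String> response =
             client.send(request, HttpResponse.BodyHandlers.ofString());
         System.out.println("limit=" + limit + " -> "
             + response.body().length() + " bytes");
       }
     }
   }
   ```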





[jira] [Commented] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863371#comment-16863371
 ] 

Jonathan Eagles commented on HADOOP-16369:
--

No tests as only comments and pom files were updated

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r293522345
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -92,8 +95,10 @@ public Response getContainers() {
*/
   @GET
   @Path("/{id}")
-  public Response getKeysForContainer(@PathParam("id") Long containerId) {
-Map keyMetadataMap = new HashMap<>();
+  public Response getKeysForContainer(
+  @PathParam("id") Long containerId,
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 
 Review comment:
   Same as above.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
bharatviswa504 commented on a change in pull request #954: HDDS-1670. Add limit 
support to /api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#discussion_r293522160
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
 ##
 @@ -72,10 +74,11 @@
* @return {@link Response}
*/
   @GET
-  public Response getContainers() {
+  public Response getContainers(
+  @DefaultValue("-1") @QueryParam("limit") int limit) {
 Map containersMap;
 try {
-  containersMap = containerDBServiceProvider.getContainers();
+  containersMap = containerDBServiceProvider.getContainers(limit);
 } catch (IOException ioEx) {
 
 Review comment:
   I see the API to limit.
   But how shall we get the list of containers after the last returned result? 
Will there be API support for that?





[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863353#comment-16863353
 ] 

Hadoop QA commented on HADOOP-16366:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16366 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971709/HADOOP-16366-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 03ca4b8babc4 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 940bcf0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16322/testReport/ |
| Max. process+thread count | 317 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16322/console |
| 

[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293514874
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestPartialRenamesDeletes.java
 ##
 @@ -0,0 +1,871 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.nio.file.AccessDeniedException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import com.amazonaws.services.s3.model.MultiObjectDeleteException;
+import com.google.common.base.Charsets;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AUtils;
+import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
+import org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation;
+import org.apache.hadoop.util.BlockingThreadPoolExecutorService;
+import org.apache.hadoop.util.DurationInfo;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.*;
+import static org.apache.hadoop.fs.s3a.Constants.*;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.MetricDiff;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.*;
+import static org.apache.hadoop.fs.s3a.S3AUtils.applyLocatedFiles;
+import static org.apache.hadoop.fs.s3a.Statistic.FILES_DELETE_REJECTED;
+import static org.apache.hadoop.fs.s3a.Statistic.OBJECT_DELETE_REQUESTS;
+import static org.apache.hadoop.fs.s3a.auth.RoleModel.Effects;
+import static org.apache.hadoop.fs.s3a.auth.RoleModel.Statement;
+import static org.apache.hadoop.fs.s3a.auth.RoleModel.directory;
+import static org.apache.hadoop.fs.s3a.auth.RoleModel.statement;
+import static org.apache.hadoop.fs.s3a.auth.RolePolicies.*;
+import static 
org.apache.hadoop.fs.s3a.auth.RoleTestUtils.bindRolePolicyStatements;
+import static org.apache.hadoop.fs.s3a.auth.RoleTestUtils.forbidden;
+import static org.apache.hadoop.fs.s3a.auth.RoleTestUtils.newAssumedRoleConfig;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.submit;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;
+import static 
org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteSupport.extractUndeletedPaths;
+import static 
org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteSupport.removeUndeletedPaths;
+import static org.apache.hadoop.fs.s3a.test.ExtraAssertions.assertFileCount;
+import static org.apache.hadoop.fs.s3a.test.ExtraAssertions.extractCause;
+import static org.apache.hadoop.test.LambdaTestUtils.eval;
+
+/**
+ * Test partial failures of delete and rename operations, especially
+ * that the S3Guard tables are consistent with the state of
+ * the filesystem.
+ *
+ * All these test have a unique path for each run, with a roleFS having
+ * full RW access to part of it, and R/O access to a restricted subdirectory
+ *
+ * 
+ *   
+ * The tests are parameterized to single/multi delete, which control which
+ * of the two delete mechanisms are used.
+ *   
+ *   
+ * In multi delete, in a scale test run, a significantly larger set of 
files
+ * is created and then deleted.
+ *   
+ *   
+ * This isn't done in the single delete as it is much slower and it is not
+ * the situation we are trying to create.
+ *   
+ * 
+ *
+ * This test manages to create lots of load on the s3guard prune command
+ * 

[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293514525
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##
 @@ -699,39 +769,168 @@ DirListingMetadata 
getDirListingMetadataFromDirMetaAndList(Path path,
   }
 
   /**
-   * build the list of all parent entries.
+   * Build the list of all parent entries.
+   * 
+   * Thread safety: none. Callers must synchronize access.
+   * 
+   * Callers are required to synchronize on ancestorState.
* @param pathsToCreate paths to create
+   * @param ancestorState ongoing ancestor state.
* @return the full ancestry paths
*/
-  Collection<DDBPathMetadata> completeAncestry(
-  Collection<DDBPathMetadata> pathsToCreate) {
-// Key on path to allow fast lookup
-Map<Path, DDBPathMetadata> ancestry = new HashMap<>();
-
-for (DDBPathMetadata meta : pathsToCreate) {
+  private Collection<DDBPathMetadata> completeAncestry(
+  final Collection<DDBPathMetadata> pathsToCreate,
+  final AncestorState ancestorState) throws PathIOException {
+List<DDBPathMetadata> ancestorsToAdd = new ArrayList<>(0);
+LOG.debug("Completing ancestry for {} paths", pathsToCreate.size());
+// we sort the inputs to guarantee that the topmost entries come first.
+// that way if the put request contains both parents and children
+// then the existing parents will not be re-created -they will just
+// be added to the ancestor list first.
+List<DDBPathMetadata> sortedPaths = new ArrayList<>(pathsToCreate);
+sortedPaths.sort(PathOrderComparators.TOPMOST_PM_FIRST);
+for (DDBPathMetadata meta : sortedPaths) {
   Preconditions.checkArgument(meta != null);
   Path path = meta.getFileStatus().getPath();
+  LOG.debug("Adding entry {}", path);
   if (path.isRoot()) {
 break;
   }
-  ancestry.put(path, new DDBPathMetadata(meta));
+  // add the new entry
+  DDBPathMetadata entry = new DDBPathMetadata(meta);
+  DDBPathMetadata oldEntry = ancestorState.put(path, entry);
+  if (oldEntry != null) {
+if (!oldEntry.getFileStatus().isDirectory()
+|| !entry.getFileStatus().isDirectory()) {
+  // check for and warn if the existing bulk operation overwrote it.
+  // this should never occur outside tests explicitly creating it
+  LOG.warn("Overwriting a S3Guard entry created in the operation: {}",
+  oldEntry);
+  LOG.warn("With new entry: {}", entry);
+  // restore the old state
+  ancestorState.put(path, oldEntry);
+  // then raise an exception
+  throw new PathIOException(path.toString(), E_INCONSISTENT_UPDATE);
+} else {
+  // directory is already present, so skip adding it and any parents.
+  continue;
+}
+  }
+  ancestorsToAdd.add(entry);
   Path parent = path.getParent();
-  while (!parent.isRoot() && !ancestry.containsKey(parent)) {
+  while (!parent.isRoot()) {
+if (ancestorState.findEntry(parent, true)) {
+  break;
+}
 LOG.debug("auto-create ancestor path {} for child path {}",
 parent, path);
 final S3AFileStatus status = makeDirStatus(parent, username);
-ancestry.put(parent, new DDBPathMetadata(status, Tristate.FALSE,
-false));
+DDBPathMetadata md = new DDBPathMetadata(status, Tristate.FALSE,
+false);
+ancestorState.put(parent, md);
+ancestorsToAdd.add(md);
 parent = parent.getParent();
   }
 }
-return ancestry.values();
+return ancestorsToAdd;
+  }
+
+  /**
+   * {@inheritDoc}
+   * 
+   * if {@code operationState} is not null, when this method returns the
+   * operation state will be updated with all new entries created.
+   * This ensures that subsequent operations with the same store will not
+   * trigger new updates.
+   * The scan on
+   * @param qualifiedPath path to update
+   * @param operationState (nullable) operational state for a bulk update
+   * @throws IOException on failure.
+   */
+  @SuppressWarnings("SynchronizationOnLocalVariableOrMethodParameter")
+  @Override
+  @Retries.RetryTranslated
+  public void addAncestors(
+  final Path qualifiedPath,
+  @Nullable final BulkOperationState operationState) throws IOException {
+
+Collection newDirs = new ArrayList<>();
+final AncestorState ancestorState = extractOrCreate(operationState,
+BulkOperationState.OperationType.Rename);
+Path parent = qualifiedPath.getParent();
+
+// Iterate up the parents.
+// note that only ancestorState get/set operations are synchronized;
+// the DDB read between them is not. As a result, more than one
+// thread may probe the state, find the entry missing, do the database
+// query and add the entry.
+// This is done to avoid making the remote dynamo query part of the

[jira] [Commented] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863338#comment-16863338
 ] 

Hadoop QA commented on HADOOP-16369:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
57m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971705/HADOOP-16369.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  cc  |
| uname | Linux 06672f32f45a 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 940bcf0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16321/testReport/ |
| Max. process+thread count | 1348 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16321/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>

[GitHub] [hadoop] avijayanhwx commented on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-13 Thread GitBox
avijayanhwx commented on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-501803643
 
 
   LGTM +1.





[GitHub] [hadoop] hadoop-yetus commented on issue #955: HDFS-14478: Add libhdfs APIs for openFile

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #955: HDFS-14478: Add libhdfs APIs for openFile
URL: https://github.com/apache/hadoop/pull/955#issuecomment-501800465
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1043 | trunk passed |
   | +1 | compile | 101 | trunk passed |
   | +1 | mvnsite | 25 | trunk passed |
   | +1 | shadedclient | 1832 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 15 | the patch passed |
   | +1 | compile | 96 | the patch passed |
   | +1 | cc | 96 | the patch passed |
   | +1 | javac | 96 | the patch passed |
   | +1 | mvnsite | 18 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 733 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 349 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3224 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/955 |
   | JIRA Issue | HDFS-14478 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit |
   | uname | Linux 38f9cb086df0 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 940bcf0 |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/3/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/3/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863298#comment-16863298
 ] 

Prabhu Joseph commented on HADOOP-16366:


[~eyang] Thanks for the clarification. I don't see any issue with having the 
same name for SPNEGO_FILTER and the authentication filter. I will fix only the 
issue where {{TimelineReaderServer}} ignores 
{{ProxyUserAuthenticationFilterInitializer}}.

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when below settings 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.






[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16366:
---
Attachment: HADOOP-16366-003.patch

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #952: HADOOP-16729 out of band deletes

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #952: HADOOP-16729 out of band deletes
URL: https://github.com/apache/hadoop/pull/952#issuecomment-501793380
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1134 | trunk passed |
   | +1 | compile | 1134 | trunk passed |
   | +1 | checkstyle | 135 | trunk passed |
   | +1 | mvnsite | 123 | trunk passed |
   | +1 | shadedclient | 956 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 89 | trunk passed |
   | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 183 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 1080 | the patch passed |
   | +1 | javac | 1080 | the patch passed |
   | -0 | checkstyle | 152 | root: The patch generated 4 new + 50 unchanged - 1 
fixed = 54 total (was 51) |
   | +1 | mvnsite | 119 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 634 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 29 | hadoop-tools_hadoop-aws generated 3 new + 1 unchanged 
- 0 fixed = 4 total (was 1) |
   | -1 | findbugs | 122 | hadoop-common in the patch failed. |
   | -1 | findbugs | 15 | hadoop-aws in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | hadoop-common in the patch failed. |
   | -1 | unit | 30 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6263 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/952 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 7b5b3201c7bd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 940bcf0 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-952/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293483871
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/ProgressiveRenameTracker.java
 ##
 @@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3ObjectAttributes;
+import org.apache.hadoop.fs.s3a.impl.StoreContext;
+import org.apache.hadoop.util.DurationInfo;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static org.apache.hadoop.fs.s3a.s3guard.S3Guard.addMoveAncestors;
+import static org.apache.hadoop.fs.s3a.s3guard.S3Guard.addMoveDir;
+
+/**
+ * This rename tracker progressively updates the metadata store
+ * as it proceeds, during the parallelized copy operation.
+ * <p>
+ * Algorithm
+ * <ol>
+ *   <li>
+ * As {@code RenameTracker.fileCopied()} callbacks
+ * are raised, the metastore is updated with the new file entry.
+ *   </li>
+ *   <li>
+ * Including parent entries, as appropriate.
+ *   </li>
+ *   <li>
+ * All directories which have been created are tracked locally,
+ * to avoid needing to read the store; this is a thread-safe structure.
+ *   </li>
+ *   <li>
+ *    The actual update is performed out of any synchronized block.
+ *   </li>
+ *   <li>
+ * When deletes are executed, the store is also updated.
+ *   </li>
+ *   <li>
+ * And at the completion of a successful rename, the source directory
+ * is also removed.
+ *   </li>
+ * </ol>
+ */
+public class ProgressiveRenameTracker extends RenameTracker {
+
+  /**
+   * The collection of paths to delete; this is added as individual files
+   * are renamed.
+   * <p>
+   * The metastore is only updated with these entries after the DELETE
+   * call containing these paths succeeds.
+   * <p>
+   * If the DELETE fails, the filesystem will use
+   * {@code MultiObjectDeleteSupport} to remove all successfully deleted
+   * entries from the metastore.
+   */
+  private final Collection<Path> pathsToDelete = new HashSet<>();
+
+  /**
+   * The list of new entries to add.
+   */
+  private final List<PathMetadata> destMetas = new ArrayList<>();
+
+  public ProgressiveRenameTracker(
+  final StoreContext storeContext,
+  final MetadataStore metadataStore,
+  final Path sourceRoot,
+  final Path dest,
+  final BulkOperationState operationState) {
+super("ProgressiveRenameTracker",
+storeContext, metadataStore, sourceRoot, dest, operationState);
+  }
+
+  /**
+   * When a file is copied, any ancestors
+   * are calculated and then the store is updated with
+   * the destination entries.
+   * <p>
+   * The source entries are added to the {@link #pathsToDelete} list.
+   * @param sourcePath path of source
+   * @param sourceAttributes status of source.
+   * @param destAttributes destination attributes
+   * @param destPath destination path.
+   * @param blockSize block size.
+   * @param addAncestors should ancestors be added?
+   * @throws IOException failure
+   */
+  @Override
+  public void fileCopied(
+  final Path sourcePath,
+  final S3ObjectAttributes sourceAttributes,
+  final S3ObjectAttributes destAttributes,
+  final Path destPath,
+  final long blockSize,
+  final boolean addAncestors) throws IOException {
+
+// build the list of entries to add in a synchronized block.
+final List<PathMetadata> entriesToAdd = new ArrayList<>(1);
+LOG.debug("Updating store with copied file {}", sourcePath);
+MetadataStore store = getMetadataStore();
+synchronized (this) {
+  checkArgument(!pathsToDelete.contains(sourcePath),
+  "File being renamed is already processed %s", destPath);
+  // create the file metadata and update the local structures.
+  S3Guard.addMoveFile(
+  store,
+  pathsToDelete,
+  entriesToAdd,
+  sourcePath,
+  destPath,
+  

[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293480453
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathOrderComparators.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import java.io.Serializable;
+import java.util.Comparator;
+
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Comparator of path ordering for sorting collections.
+ *
+ * The definition of "topmost" is:
+ * <ul>
+ *   <li>The depth of a path is the primary comparator.</li>
+ *   <li>Root is topmost, "0".</li>
+ *   <li>If two paths are of equal depth, {@link Path#compareTo(Path)}
+ *   is used. This delegates to URI compareTo.</li>
+ *   <li>Repeated sorts do not change the order.</li>
+ * </ul>
+ */
+final class PathOrderComparators {
+
+  private PathOrderComparators() {
+  }
+
+  /**
+   * The shallowest paths come first.
+   * This is to be used when adding entries.
+   */
+  static final Comparator<Path> TOPMOST_PATH_FIRST
+  = new TopmostFirst();
+
+  /**
+   * The leaves come first.
+   * This is to be used when deleting entries.
+   */
+  static final Comparator<Path> TOPMOST_PATH_LAST
+  = new TopmostLast();
+
+  /**
+   * The shallowest paths come first.
+   * This is to be used when adding entries.
+   */
+  static final Comparator<PathMetadata> TOPMOST_PM_FIRST
+  = new PathMetadataComparator(TOPMOST_PATH_FIRST);
+
+  /**
+   * The leaves come first.
+   * This is to be used when deleting entries.
+   */
+  static final Comparator<PathMetadata> TOPMOST_PM_LAST
+  = new PathMetadataComparator(TOPMOST_PATH_LAST);
+
+  private static class TopmostFirst implements Comparator<Path>, Serializable {
+
+@Override
+public int compare(Path pathL, Path pathR) {
+  // exit fast on equal values.
+  if (pathL.equals(pathR)) {
+return 0;
+  }
+  int depthL = pathL.depth();
+  int depthR = pathR.depth();
+  if (depthL < depthR) {
+// left is higher up than the right.
+return -1;
+  }
+  if (depthR < depthL) {
+// right is higher up than the left
+return 1;
+  }
+  // and if they are of equal depth, use the "classic" comparator
+  // of paths.
+  return pathL.compareTo(pathR);
+}
+  }
+
+  /**
+   * Compare the topmost last.
+   * For some reason the .reverse() option wasn't giving the
+   * correct outcome.
+   */
+  private static final class TopmostLast extends TopmostFirst {
+
+@Override
+public int compare(final Path pathL, final Path pathR) {
+  int compare = super.compare(pathL, pathR);
+  if (compare < 0) {
+return 1;
+  }
+  if (compare > 0) {
+return -1;
+  }
 
 Review comment:
   You'd think so, but when I tried, my sort tests were failing, and even 
stepping through the code with the debugger I couldn't work out why. Doing it 
like this fixed the tests.
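   
   For anyone following along, a minimal sketch of how these comparators sort 
(usable only from within the s3guard package, since the fields are 
package-private; the sample paths are made up):
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   
   import org.apache.hadoop.fs.Path;
   
   // shallowest-first ordering is for adds; leaves-first is for deletes
   List<Path> paths = Arrays.asList(
       new Path("/a/b/c"), new Path("/a"), new Path("/a/b"));
   paths.sort(PathOrderComparators.TOPMOST_PATH_FIRST); // /a, /a/b, /a/b/c
   paths.sort(PathOrderComparators.TOPMOST_PATH_LAST);  // /a/b/c, /a/b, /a
   ```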


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293480057
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
 ##
 @@ -169,15 +186,33 @@ void move(Collection<Path> pathsToDelete,
   @RetryTranslated
   void put(PathMetadata meta) throws IOException;
 
+  /**
+   * Saves metadata for exactly one path, potentially
+   * using any bulk operation state to eliminate duplicate work.
+   *
 
 Review comment:
   None of the metastores do anything with delayed operations; they just 
track the state. Clarified in the javadocs.
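   
   For readers of the diff, the new overload presumably has roughly this 
shape; the parameter name and the @Nullable annotation are assumptions based 
on this fragment, and as noted above the stores merely track the state:
   
   ```java
   /**
    * Save metadata for exactly one path; the bulk operation state, if
    * supplied, is only tracked so duplicate ancestor work can be skipped.
    */
   @RetryTranslated
   void put(PathMetadata meta, @Nullable BulkOperationState operationState)
       throws IOException;
   ```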


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293480163
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathOrderComparators.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import java.io.Serializable;
+import java.util.Comparator;
+
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Comparator of path ordering for sorting collections.
+ *
+ * The definition of "topmost" is:
+ * <ul>
+ *   <li>The depth of a path is the primary comparator.</li>
+ *   <li>Root is topmost, "0".</li>
+ *   <li>If two paths are of equal depth, {@link Path#compareTo(Path)}
+ *   is used. This delegates to URI compareTo.</li>
+ *   <li>Repeated sorts do not change the order.</li>
+ * </ul>
+ */
+final class PathOrderComparators {
+
+  private PathOrderComparators() {
+  }
+
+  /**
+   * The shallowest paths come first.
+   * This is to be used when adding entries.
+   */
+  static final Comparator<Path> TOPMOST_PATH_FIRST
+  = new TopmostFirst();
+
+  /**
+   * The leaves come first.
+   * This is to be used when deleting entries.
+   */
+  static final Comparator<Path> TOPMOST_PATH_LAST
+  = new TopmostLast();
+
+  /**
+   * The shallowest paths come first.
+   * This is to be used when adding entries.
+   */
+  static final Comparator<PathMetadata> TOPMOST_PM_FIRST
+  = new PathMetadataComparator(TOPMOST_PATH_FIRST);
+
+  /**
+   * The leaves come first.
+   * This is to be used when deleting entries.
+   */
+  static final Comparator<PathMetadata> TOPMOST_PM_LAST
+  = new PathMetadataComparator(TOPMOST_PATH_LAST);
+
+  private static class TopmostFirst implements Comparator<Path>, Serializable {
+
+@Override
+public int compare(Path pathL, Path pathR) {
+  // exit fast on equal values.
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-13 Thread GitBox
steveloughran commented on a change in pull request #951: HADOOP-15183. S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#discussion_r293478114
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
 ##
 @@ -1474,6 +1474,18 @@ Caused by: java.lang.NullPointerException
   ... 1 more
 ```
 
+### Error `Attempt to change a resource which is still in use: Table is being 
deleted`
 
 Review comment:
   I do have branches of the unsquashed PRs, so I should be able to do that. 
By popular request I will give it a go, but leave them in here. If people get 
the other PR in first, I'll deal with that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16370) S3AFileSystem copyFile to propagate etag/version from getObjectMetadata to copy request

2019-06-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863273#comment-16863273
 ] 

Steve Loughran commented on HADOOP-16370:
-

+[~ben.roling]

Note that when we do a LIST operation against S3, we can collect the etags, 
but *not* the version info, so a bulk directory rename will not have version 
markers when the COPY is initiated, just etags. We could use the 
getObjectMetadata call to actually go versioned.
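
A hedged sketch with the v1 SDK of what the propagation might look like 
(variable names are illustrative; whether to pin the version, the etag, or 
both is exactly the open question):

{code:java}
CopyObjectRequest copy = new CopyObjectRequest(
    srcBucket, srcKey, destBucket, destKey);
if (sourceVersionId != null) {
  // copy the exact version we saw in getObjectMetadata
  copy.setSourceVersionId(sourceVersionId);
}
if (sourceEtag != null) {
  // make the COPY fail fast if the source changed underneath us
  copy.withMatchingETagConstraint(sourceEtag);
}
s3.copyObject(copy);
{code}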

> S3AFileSystem copyFile to propagate etag/version from getObjectMetadata to 
> copy request
> ---
>
> Key: HADOOP-16370
> URL: https://issues.apache.org/jira/browse/HADOOP-16370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Something to consider if we want: should the etag and version from the 
> initial getObjectMetadata call be propagated to the actual CopyRequest *if 
> they are not already known*
> That way, if we rename() a file and its etag/version is not known, we can fix 
> them for the next stage of the operation. Relevant given we are copying 
> metadata over, and for resilience to changes while the copy is taking place



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16370) S3AFileSystem copyFile to propagate etag/version from getObjectMetadata to copy request

2019-06-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16370:
---

 Summary: S3AFileSystem copyFile to propagate etag/version from 
getObjectMetadata to copy request
 Key: HADOOP-16370
 URL: https://issues.apache.org/jira/browse/HADOOP-16370
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Something to consider if we want: should the etag and version from the initial 
getObjectMetadata call be propagated to the actual CopyRequest *if they are not 
already known*

That way, if we rename() a file and its etag/version is not known, we can fix 
them for the next stage of the operation. Relevant given we are copying 
metadata over, and for resilience to changes while the copy is taking place



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone RPC client to read with topology awareness.

2019-06-13 Thread GitBox
xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone 
RPC client to read with topology awareness.
URL: https://github.com/apache/hadoop/pull/931#discussion_r293472081
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -363,6 +363,10 @@
   "ozone.scm.network.topology.schema.file";
   public static final String OZONE_SCM_NETWORK_TOPOLOGY_SCHEMA_FILE_DEFAULT =
   "network-topology-default.xml";
+  public static final String DFS_NETWORK_TOPOLOGY_AWARE_READ_ENABLED =
+  "dfs.network.topology.aware.read.enable";
+  public static final String DFS_NETWORK_TOPOLOGY_AWARE_READ_ENABLED_DEFAULT =
+  "true";
 
 Review comment:
   Not at this time. Let's keep it as-is and revisit if we need to make this a 
server side option.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone RPC client to read with topology awareness.

2019-06-13 Thread GitBox
xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone 
RPC client to read with topology awareness.
URL: https://github.com/apache/hadoop/pull/931#discussion_r293471766
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
 ##
 @@ -150,11 +201,16 @@ public void releaseClient(XceiverClientSpi client, 
boolean invalidateClient) {
 }
   }
 
-  private XceiverClientSpi getClient(Pipeline pipeline)
+  private XceiverClientSpi getClient(Pipeline pipeline, boolean forRead)
   throws IOException {
 HddsProtos.ReplicationType type = pipeline.getType();
 try {
+  // create a different client for reading from a different pipeline node,
+  // based on network topology
   String key = pipeline.getId().getId().toString() + type;
 
 Review comment:
   Can we wrap this logic in a helper function like getPipelineKey()?
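   
   Perhaps something like the sketch below; how the read key should differ 
(a plain suffix vs. encoding the chosen node) is an open design question, and 
the suffix here is an assumption, not the patch:
   
   ```java
   private static String getPipelineKey(Pipeline pipeline, boolean forRead) {
     String key = pipeline.getId().getId().toString() + pipeline.getType();
     if (forRead) {
       // keep read clients in a separate cache slot so a topology-sorted
       // pipeline doesn't reuse the write channel (assumed behaviour)
       key += "-read";
     }
     return key;
   }
   ```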


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863238#comment-16863238
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] Thank you for the explanation from your point of view. The 
SpnegoFilter code path was a good effort to centralize AuthenticationFilter 
initialization for all web applications, except that other developers have 
added extensions to make the authentication filter independent of 
SpnegoFilter. Both code paths are in use, and both are meant to cover all 
paths globally. It may create more problems if we allow the FilterHolder for 
SpnegoFilter to report something that is not running. SpnegoFilter and the 
authentication filter are attached to different web application contexts, so 
in general they don't overlap. The only case where they would overlap is 
using the embedded web proxy with the resource manager. Resource manager 
servlets are written as web filters and attach to the same web application 
context as the web proxy. In this case, we are using the authentication 
filter because the webproxy keytab and principal were not specified in the 
config. If we report SpnegoFilter with a null path to downstream logic, it 
would be incorrect, because the resource manager has an authentication filter 
for the resource manager web application context.

This is the reason I object to the one-line change. Do you see any problem if 
the one-line fix is not in place?

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below 
> are configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context, causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue: {{TimelineReaderServer}} ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #962: HDDS-1682. TestEventWatcher.testMetrics is flaky

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #962: HDDS-1682. TestEventWatcher.testMetrics 
is flaky
URL: https://github.com/apache/hadoop/pull/962#issuecomment-501773019
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 334 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 526 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 480 | the patch passed |
   | +1 | compile | 297 | the patch passed |
   | +1 | javac | 297 | the patch passed |
   | +1 | checkstyle | 94 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 655 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | the patch passed |
   | +1 | findbugs | 543 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 154 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1361 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6550 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/962 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8f67fc261bf2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 940bcf0 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/1/testReport/ |
   | Max. process+thread count | 5030 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/framework U: hadoop-hdds/framework |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-962/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone RPC client to read with topology awareness.

2019-06-13 Thread GitBox
xiaoyuyao commented on a change in pull request #931: HDDS-1586. Allow Ozone 
RPC client to read with topology awareness.
URL: https://github.com/apache/hadoop/pull/931#discussion_r293460005
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
 ##
 @@ -596,6 +620,81 @@ private OmKeyArgs createKeyArgs(String toKeyName) throws 
IOException {
 return createBuilder().setKeyName(toKeyName).build();
   }
 
+  @Test
+  public void testLookupKeyWithLocation() throws IOException {
+String keyName = RandomStringUtils.randomAlphabetic(5);
+OmKeyArgs keyArgs = createBuilder()
+.setKeyName(keyName)
+.build();
+
+// lookup for a non-existent key
+try {
+  keyManager.lookupKey(keyArgs, null);
+  Assert.fail("Lookup key should fail for non existent key");
+} catch (OMException ex) {
+  if (ex.getResult() != OMException.ResultCodes.KEY_NOT_FOUND) {
+throw ex;
+  }
+}
+
+// create a key
+OpenKeySession keySession = keyManager.createFile(keyArgs, false, false);
+// randomly select 3 datanodes
+List<DatanodeDetails> nodeList = new ArrayList<>();
+nodeList.add((DatanodeDetails)scm.getClusterMap().getNode(
+0, null, null, null, null, 0));
+nodeList.add((DatanodeDetails)scm.getClusterMap().getNode(
+1, null, null, null, null, 0));
+nodeList.add((DatanodeDetails)scm.getClusterMap().getNode(
+2, null, null, null, null, 0));
+Assume.assumeTrue(nodeList.get(0) != nodeList.get(1));
 
 Review comment:
   should we use .equals() here?
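   
   For context on the question: `!=` compares object identity, while 
`.equals()` compares logical value, so the assumption can pass even when two 
distinct objects describe the same datanode. A tiny self-contained 
illustration using String:
   
   ```java
   String x = new String("dn-1");
   String y = new String("dn-1");
   System.out.println(x != y);      // true: two distinct references
   System.out.println(x.equals(y)); // true: same logical value
   // so Assume.assumeTrue(a != b) is a weaker check than
   // Assume.assumeFalse(a.equals(b))
   ```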


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-16369:
-
Status: Patch Available  (was: Open)

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-16369:
-
Attachment: HADOOP-16369.001.patch

> Fix zstandard shortname misspelled as zts
> -
>
> Key: HADOOP-16369
> URL: https://issues.apache.org/jira/browse/HADOOP-16369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Attachments: HADOOP-16369.001.patch
>
>
> A few times in the code base zstd was misspelled as ztsd. zts is another 
> library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
> caused some grief.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16369) Fix zstandard shortname misspelled as zts

2019-06-13 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-16369:


 Summary: Fix zstandard shortname misspelled as zts
 Key: HADOOP-16369
 URL: https://issues.apache.org/jira/browse/HADOOP-16369
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles


A few times in the code base zstd was misspelled as ztsd. zts is another 
library https://github.com/yahoo/athenz/tree/master/clients/java/zts and has 
caused some grief.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-13 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863204#comment-16863204
 ] 

Steve Loughran commented on HADOOP-13980:
-

I'd like the ability to get a dump of the state in a format we could analyse 
for support calls, recreating problems, etc.

Proposed: make this Avro
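
If we go down that route, a first cut of the per-entry record could be built 
with Avro's SchemaBuilder; the field names below are illustrative only, not a 
proposed design:

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

Schema entry = SchemaBuilder.record("MetadataStoreEntry")
    .namespace("org.apache.hadoop.fs.s3a.s3guard")
    .fields()
    .requiredString("path")
    .requiredBoolean("isDir")
    .requiredBoolean("isDeleted")
    .optionalLong("length")
    .optionalLong("modificationTime")
    .optionalString("etag")
    .optionalString("versionId")
    .endRecord();
{code}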

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #952: HADOOP-16729 out of band deletes

2019-06-13 Thread GitBox
steveloughran commented on issue #952: HADOOP-16729 out of band deletes
URL: https://github.com/apache/hadoop/pull/952#issuecomment-501752524
 
 
   I have just pushed up a PR with changes. If I didn't need this in so that I 
could base my own PR atop it, I'd be seriously considering saying "Use Java 8 
time over millis, as it guarantees that there won't be any bits of the code 
which assume it is seconds".
   
   latter is awful about assessing the value of all enumerated moves & 
countermoves.
   
   In this instance, 
   
   ###  S3Guard.addAncestors()
   
   The walk up the tree uses an isDeleted() check. Should that include TTL 
probes?
   
   Note: I'm not going to do that now, because I've pushed that work further 
into DDB itself (needed to deal with scale issues); if changes are needed then 
they should be based off that patch. Please review that code and suggest the 
next action.
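   
   To make the question concrete, a TTL-aware variant of that probe would be 
roughly the following; the method and field names are assumptions, not the 
shipped API:
   
   ```java
   // treat a tombstone as authoritative only while it is inside the TTL
   static boolean isDeletedAndLive(PathMetadata meta, long ttlMs, long nowMs) {
     return meta != null
         && meta.isDeleted()
         && (nowMs - meta.getLastUpdated()) < ttlMs;
   }
   ```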
   
   ### Imports
   
   Keep the ordering of imports as we expect:
   
   ```
   java.*
   ---
   javax.*
   ---
   non-org.apache
   ---
   org.apache
   ---
   
   static. * 
   ```
   and within each group, in alphabetical order. This is critical to help 
avoid merge conflicts. If a class already has inconsistent ordering, don't 
worry, but don't make it worse. Always check the imports in reviews for this 
reason.
   
   ### Tests
   I got a failure in a test run in teardown of `ITestDynamoDBMetadataStore`; 
happens if the FS didn't get created. Root cause shows up on some of the other 
test cases: `java.io.FileNotFoundException: DynamoDB table 
's3guard-stevel-testing' is being deleted in region eu-west-1
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1293)`
   
   ```
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   [ERROR] 
testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
  Time elapsed: 0.206 s  <<< ERROR! java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.strToPath(MetadataStoreTestBase.java:1035)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.tearDown(ITestDynamoDBMetadataStore.java:219)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   ```
   
   It's not caused by this PR and is addressed in my rename PR, which doesn't 
do that cleanup unless fs != null.
   
   Also (I believe) unrelated: the Magic Committer ITest is playing up, and as 
the logs of the AM don't seem to be saved, I can't quite debug it.
   
   ```
   [ERROR] 
testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITestMagicCommitMRJob)  Time 
elapsed: 458.528 s  <<< FAILURE! java.lang.AssertionError: No cleanup: 
unexpectedly found 
s3a://hwdev-steve-ireland-new/fork-0004/test/testMRJob/__magic as  
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0004/test/testMRJob/__magic;
 isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:977)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathDoesNotExist(AbstractFSContractTestBase.java:305)
at 
org.apache.hadoop.fs.s3a.commit.magic.ITestMagicCommitMRJob.customPostExecutionValidation(ITestMagicCommitMRJob.java:96)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitMRJob.testMRJob(AbstractITCommitMRJob.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 

[GitHub] [hadoop] hadoop-yetus commented on issue #955: HDFS-14478: Add libhdfs APIs for openFile

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #955: HDFS-14478: Add libhdfs APIs for openFile
URL: https://github.com/apache/hadoop/pull/955#issuecomment-501748890
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1044 | trunk passed |
   | +1 | compile | 101 | trunk passed |
   | +1 | mvnsite | 17 | trunk passed |
   | +1 | shadedclient | 1775 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 14 | the patch passed |
   | +1 | compile | 95 | the patch passed |
   | -1 | cc | 95 | hadoop-hdfs-project_hadoop-hdfs-native-client generated 7 
new + 2 unchanged - 0 fixed = 9 total (was 2) |
   | +1 | javac | 95 | the patch passed |
   | +1 | mvnsite | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3107 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/955 |
   | JIRA Issue | HDFS-14478 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit |
   | uname | Linux 3d40962c95c1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 940bcf0 |
   | Default Java | 1.8.0_212 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/2/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/2/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek opened a new pull request #962: HDDS-1682. TestEventWatcher.testMetrics is flaky

2019-06-13 Thread GitBox
elek opened a new pull request #962: HDDS-1682. TestEventWatcher.testMetrics is 
flaky
URL: https://github.com/apache/hadoop/pull/962
 
 
   TestEventWatcher is intermittent. (Failed twice out of 44 executions).
   
   Error is:
   
   {code}
   Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.764 s <<< 
FAILURE! - in org.apache.hadoop.hdds.server.events.TestEventWatcher
   testMetrics(org.apache.hadoop.hdds.server.events.TestEventWatcher)  Time 
elapsed: 2.384 s  <<< FAILURE!
   java.lang.AssertionError: expected:<2> but was:<3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdds.server.events.TestEventWatcher.testMetrics(TestEventWatcher.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   {code}
   
   In the test we do the following:
   
1. fire start-event1
2. fire start-event2
3. fire start-event3
4. fire end-event1
5. wait
   
   Usually event2 and event3 time out and event1 completes, but if there is an 
accidental delay between steps 3 and 4 (in fact, between 1 and 4), event1 can 
also time out.
   
   I improved the unit test and fixed the metrics calculation: the completed 
counter should be incremented only if the event has not yet timed out, as 
sketched below.
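   
   A minimal sketch of that guard (the names are assumptions about the 
watcher's internals, not the exact patch):
   
   {code:java}
   // completion handler: only count events that are still being tracked;
   // if the timeout path already removed the event, don't double-count it.
   void handleCompletion(UUID id) {
     TimeoutEvent tracked = trackedEventsByID.remove(id);
     if (tracked != null) {
       metrics.incrCompletedEvents();
     }
   }
   {code}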
   
   See: https://issues.apache.org/jira/browse/HDDS-1682


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #961: HDDS-1680. Create missing parent directories during the creation of HddsVolume dirs

2019-06-13 Thread GitBox
hadoop-yetus commented on issue #961: HDDS-1680. Create missing parent 
directories during the creation of HddsVolume dirs
URL: https://github.com/apache/hadoop/pull/961#issuecomment-501722334
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 284 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 825 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 334 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 467 | the patch passed |
   | +1 | compile | 294 | the patch passed |
   | +1 | javac | 294 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 541 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 172 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1369 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 6432 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-961/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/961 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0f6543094cfe 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 940bcf0 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-961/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-961/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-961/1/testReport/ |
   | Max. process+thread count | 4548 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-961/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
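
For context on HDDS-1680 itself: the PR title describes switching HddsVolume 
directory creation to a call that also creates missing parent directories. A 
minimal sketch of that idea in plain Java follows; the class and method names 
are illustrative assumptions, not the actual patched code:

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch only, not the HDDS-1680 patch: File#mkdirs() creates any missing
// parent directories, whereas File#mkdir() fails if the parent is absent.
public class HddsVolumeDirSketch {

  /** Create the volume directory, including any missing parents. */
  static void ensureVolumeDir(File hddsRootDir) throws IOException {
    if (!hddsRootDir.exists() && !hddsRootDir.mkdirs()) {
      // mkdirs() returns false when the directory could not be created.
      throw new IOException("Cannot create directory " + hddsRootDir);
    }
    if (!hddsRootDir.isDirectory()) {
      throw new IOException(hddsRootDir + " exists but is not a directory");
    }
  }

  public static void main(String[] args) throws IOException {
    // The parent path need not exist beforehand.
    ensureVolumeDir(new File("/tmp/hdds-example/disk1/hdds"));
  }
}
{code}

Note that the -1 votes above are on the hadoop-hdds and hadoop-ozone unit 
test runs, so the listed test failures would still need to be checked against 
any change of this shape.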



[jira] [Updated] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2

2019-06-13 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-16211:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update guava to 27.0-jre in hadoop-project branch-3.2
> -
>
> Key: HADOOP-16211
> URL: https://issues.apache.org/jira/browse/HADOOP-16211
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16211-branch-3.2.001.patch, 
> HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, 
> HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, 
> HADOOP-16211-branch-3.2.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly 
> found CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
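
For reference, the change tracked by HADOOP-16211 amounts to bumping the 
Guava pin in the hadoop-project pom. A sketch of what such a 
dependencyManagement entry typically looks like, with the guava.version 
property name assumed from common Hadoop pom conventions rather than copied 
from the patch:

{code:xml}
<!-- Sketch of a hadoop-project/pom.xml pin; the guava.version property
     name is an assumption, not quoted from HADOOP-16211's patch. -->
<properties>
  <guava.version>27.0-jre</guava.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>${guava.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}

Downstream modules then inherit the managed version, so the CVE-2018-10237 
fix applies across the branch without per-module version overrides.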



[jira] [Commented] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2

2019-06-13 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863098#comment-16863098
 ] 

Sean Mackrory commented on HADOOP-16211:


Thanks for getting to the bottom of that. Will commit, pending verification 
that a local build still passes...

> Update guava to 27.0-jre in hadoop-project branch-3.2
> -
>
> Key: HADOOP-16211
> URL: https://issues.apache.org/jira/browse/HADOOP-16211
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16211-branch-3.2.001.patch, 
> HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, 
> HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, 
> HADOOP-16211-branch-3.2.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly 
> found CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16213) Update guava to 27.0-jre in hadoop-project branch-3.1

2019-06-13 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-16213:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update guava to 27.0-jre in hadoop-project branch-3.1
> -
>
> Key: HADOOP-16213
> URL: https://issues.apache.org/jira/browse/HADOOP-16213
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-16213-branch-3.1.001.patch, 
> HADOOP-16213-branch-3.1.002.patch, HADOOP-16213-branch-3.1.003.patch, 
> HADOOP-16213-branch-3.1.004.patch, HADOOP-16213-branch-3.1.005.patch, 
> HADOOP-16213-branch-3.1.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly 
> found CVE-2018-10237.
> This is a sub-task for branch-3.1 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16213) Update guava to 27.0-jre in hadoop-project branch-3.1

2019-06-13 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863097#comment-16863097
 ] 

Sean Mackrory commented on HADOOP-16213:


Thanks for getting to the bottom of that. Build looks good to me otherwise. 
Committed!

> Update guava to 27.0-jre in hadoop-project branch-3.1
> -
>
> Key: HADOOP-16213
> URL: https://issues.apache.org/jira/browse/HADOOP-16213
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-16213-branch-3.1.001.patch, 
> HADOOP-16213-branch-3.1.002.patch, HADOOP-16213-branch-3.1.003.patch, 
> HADOOP-16213-branch-3.1.004.patch, HADOOP-16213-branch-3.1.005.patch, 
> HADOOP-16213-branch-3.1.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly 
> found CVE-2018-10237.
> This is a sub-task for branch-3.1 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2

2019-06-13 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863090#comment-16863090
 ] 

Peter Bacsko commented on HADOOP-16211:
---

On my machine, every single test case in 
{{TestTimelineReaderWebServicesHBaseStorage}} fails, and the Jenkins build 
result shows the same thing. That's really bad. Created JIRA: YARN-9622

> Update guava to 27.0-jre in hadoop-project branch-3.2
> -
>
> Key: HADOOP-16211
> URL: https://issues.apache.org/jira/browse/HADOOP-16211
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16211-branch-3.2.001.patch, 
> HADOOP-16211-branch-3.2.002.patch, HADOOP-16211-branch-3.2.003.patch, 
> HADOOP-16211-branch-3.2.004.patch, HADOOP-16211-branch-3.2.005.patch, 
> HADOOP-16211-branch-3.2.006.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly 
> found CVE-2018-10237.
> This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
> particular branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


