[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-06 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834401#comment-16834401
 ] 

Akira Ajisaka commented on HADOOP-16299:


001 patch
Created a new profile that is activated only when "-Djavac.version=11" is set. The 
disadvantage is that when building with Java 12 or later and setting "-Djavac.version" 
to 12 or later, this profile does not take effect.

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.






[jira] [Updated] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16299:
---
Assignee: Akira Ajisaka
Target Version/s: 3.3.0
  Status: Patch Available  (was: Open)

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.






[jira] [Updated] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16299:
---
Attachment: HADOOP-16299.001.patch

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.






[GitHub] [hadoop] mukul1987 commented on a change in pull request #782: HDDS-1461. Optimize listStatus api in OzoneFileSystem

2019-05-06 Thread GitBox
mukul1987 commented on a change in pull request #782: HDDS-1461. Optimize 
listStatus api in OzoneFileSystem
URL: https://github.com/apache/hadoop/pull/782#discussion_r281460619
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1546,6 +1552,101 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws IOException {
 ResultCodes.NOT_A_FILE);
   }
 
+  /**
+   * List the status for a file or a directory and its contents.
+   *
+   * @param args   Key args
+   * @param recursive  For a directory if true all the descendants of a
+   *   particular directory are listed
+   * @param startKey   Key from which listing needs to start. If startKey exists
+   *   its status is included in the final list.
+   * @param numEntries Number of entries to list from the start key
+   * @return list of file status
+   */
+  public List<OzoneFileStatus> listStatus(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries) throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+try {
+  metadataManager.getLock().acquireBucketLock(volumeName, bucketName);
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+startKey = OzoneFSUtils.addTrailingSlashIfNeeded(keyName);
+  }
+
+  String seekKeyInDb =
+  metadataManager.getOzoneKey(volumeName, bucketName, startKey);
+  String keyInDb = OzoneFSUtils.addTrailingSlashIfNeeded(
+  metadataManager.getOzoneKey(volumeName, bucketName, keyName));
+  TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+  iterator = metadataManager.getKeyTable().iterator();
+  iterator.seek(seekKeyInDb);
+
+  if (!iterator.hasNext()) {
+return Collections.emptyList();
+  }
+
+  if (iterator.key().equals(keyInDb)) {
+// skip the key which needs to be listed
+iterator.next();
+  }
+
+  while (iterator.hasNext() && numEntries - fileStatusList.size() > 0) {
+String entryInDb = iterator.key();
+OmKeyInfo value = iterator.value().getValue();
+if (entryInDb.startsWith(keyInDb)) {
+  String entryKeyName = value.getKeyName();
+  if (recursive) {
+// for recursive list all the entries
+fileStatusList.add(new OzoneFileStatus(value, scmBlockSize,
+!OzoneFSUtils.isFile(entryKeyName)));
+iterator.next();
+  } else {
+// get the child of the directory to list from the entry. For
+// example if directory to list is /a and entry is /a/b/c where
+// c is a file. The immediate child is b which is a directory. c
+// should not be listed as child of a.
+String immediateChild = OzoneFSUtils
+.getImmediateChild(entryKeyName, keyName);
+boolean isFile = OzoneFSUtils.isFile(immediateChild);
+if (isFile) {
+  fileStatusList
+  .add(new OzoneFileStatus(value, scmBlockSize, !isFile));
+  iterator.next();
+} else {
+  // if entry is a directory
+  fileStatusList.add(new OzoneFileStatus(immediateChild));
+  // skip the other descendants of this child directory.
+  iterator.seek(getNextGreaterString(volumeName, bucketName, immediateChild));
+}
+  }
+} else {
+  break;
+}
+  }
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+}
+return fileStatusList;
+  }
+
+  private String getNextGreaterString(String volumeName, String bucketName,
+  String keyPrefix) {
+// TODO: Use string codec
+// Increment the last character of the string and return the new ozone key.
+String nextPrefix = keyPrefix.substring(0, keyPrefix.length() - 1) +
+String.valueOf((char) (keyPrefix.charAt(keyPrefix.length() - 1) + 1));
 
 Review comment:
   This is a great optimization.
   Should this point to the first character in the ASCII table? Also, let's verify
   that this works for UTF-8 encoding as well.
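   As a concrete illustration of the seek trick, here is a minimal standalone sketch
   (class and method names are illustrative, not part of the patch; the real helper
   also takes the volume/bucket names into account):
   ```java
   public class NextGreaterPrefixSketch {
     // Bump the final character of the prefix so a DB seek lands just past every
     // key sharing that prefix. As noted above, this assumes the last character is
     // not already the maximum value and deserves a check for multi-byte (UTF-8)
     // key names.
     static String nextGreaterPrefix(String keyPrefix) {
       char last = keyPrefix.charAt(keyPrefix.length() - 1);
       return keyPrefix.substring(0, keyPrefix.length() - 1) + (char) (last + 1);
     }

     public static void main(String[] args) {
       // A directory prefix ends with '/', and '/' + 1 == '0', so seeking to
       // "a/b0" skips every key under "a/b/" in lexicographic order.
       System.out.println(nextGreaterPrefix("a/b/")); // prints a/b0
     }
   }
   ```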



[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-06 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834356#comment-16834356
 ] 

Akira Ajisaka commented on HADOOP-16299:


I thought there was no point in compiling Apache Hadoop with Java 11 while setting the 
target version to 1.8. However, I now think it is useful for testing 
Apache Hadoop (compiled with Java 8) on Java 11.

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added {{--add-exports}} option when the java version is 11 but 
> the option is not allowed when the javac target version is 1.8.






[jira] [Created] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-06 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16299:
--

 Summary: [JDK 11] Build fails without specifying -Djavac.version=11
 Key: HADOOP-16299
 URL: https://issues.apache.org/jira/browse/HADOOP-16299
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


{{mvn install -DskipTests}} fails on Java 11 without specifying 
{{-Djavac.version=11}}.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-annotations: Fatal error compiling: error: option --add-exports 
not allowed with target 1.8 -> [Help 1]
{noformat}
HADOOP-15941 added the {{--add-exports}} option when the Java version is 11, but the 
option is not allowed when the javac target version is 1.8.






[jira] [Commented] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-05-06 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834345#comment-16834345
 ] 

Akira Ajisaka commented on HADOOP-16115:


I verified that the test passed with
* AdoptOpenJDK (HotSpot) 11.0.3+7 + macOS 10.14.4
* OpenJDK 11.0.3+7-LTS + CentOS 7.6

However, when setting the javac target version to 11, the test failed in both 
environments.

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}






[GitHub] [hadoop] hadoop-yetus commented on issue #795: HDDS-1491. Ozone KeyInputStream seek() should not read the chunk file.

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #795: HDDS-1491. Ozone KeyInputStream seek() 
should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-489861096
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 399 | trunk passed |
   | +1 | compile | 199 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 835 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 412 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 412 | the patch passed |
   | +1 | compile | 290 | the patch passed |
   | +1 | javac | 290 | the patch passed |
   | +1 | checkstyle | 146 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 1162 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 325 | the patch passed |
   | +1 | findbugs | 668 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 162 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1302 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6795 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/795 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux db6e5c7123f2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 597fa47 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/2/testReport/ |
   | Max. process+thread count | 3555 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client U: hadoop-hdds/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #797: HDDS-1489. Unnecessary log messages on 
console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#issuecomment-489831647
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for branch |
   | +1 | mvninstall | 433 | trunk passed |
   | +1 | compile | 211 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 740 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 131 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 426 | the patch passed |
   | +1 | compile | 208 | the patch passed |
   | +1 | javac | 208 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 141 | hadoop-hdds in the patch failed. |
   | -1 | unit | 952 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 4322 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs |
   | uname | Linux 949ab61dfba0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 93f2283 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/1/testReport/ |
   | Max. process+thread count | 4370 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist hadoop-ozone/ozonefs U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-797/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config 
should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-489830654
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1045 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 488 | trunk passed |
   | +1 | compile | 233 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 905 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 526 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 487 | the patch passed |
   | +1 | compile | 216 | the patch passed |
   | +1 | javac | 216 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | the patch passed |
   | +1 | findbugs | 450 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 153 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1243 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7204 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs yamllint |
   | uname | Linux 964887b87752 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 597fa47 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/5/testReport/ |
   | Max. process+thread count | 4195 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/5/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] swagle commented on issue #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
swagle commented on issue #797: HDDS-1489. Unnecessary log messages on console 
with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#issuecomment-489819230
 
 
   @arp7 Changing the log level of the message would need a Ratis JIRA, since it is
   org.apache.ratis.grpc.client.GrpcClientProtocolClient.AsyncStreamObservers that
   logs this message.





[jira] [Updated] (HADOOP-16298) Manage/Renew delegation tokens for externally scheduled jobs

2019-05-06 Thread Pankaj (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj updated HADOOP-16298:

Attachment: Proposal for changes to UGI for managing_renewing externally 
managed delegation tokens.pdf

> Manage/Renew delegation tokens for externally scheduled jobs
> 
>
> Key: HADOOP-16298
> URL: https://issues.apache.org/jira/browse/HADOOP-16298
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.7.3, 2.9.0, 3.2.0, 3.3.0
>Reporter: Pankaj
>Priority: Major
> Fix For: 2.7.3, 2.9.0, 3.2.0, 3.3.0
>
> Attachments: Proposal for changes to UGI for managing_renewing 
> externally managed delegation tokens.pdf
>
>
> * Presently when jobs are run in the Hadoop ecosystem, the implicit 
> assumption is that YARN will be used as a scheduling agent with access to 
> appropriate keytabs for renewal of kerberos tickets and delegation tokens. 
>  * Jobs that interact with kerberized hadoop services such as hbase/hive/hdfs 
> and use an external scheduler such as Kubernetes, typically do not have 
> access to keytabs. In such cases, delegation tokens are a logical choice for 
> interacting with a kerberized cluster. These tokens are issued based on some 
> external auth mechanism (such as Kube LDAP authentication).






[jira] [Commented] (HADOOP-16289) Allow extra jsvc startup option in hadoop_start_secure_daemon in hadoop-functions.sh

2019-05-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834260#comment-16834260
 ] 

Hudson commented on HADOOP-16289:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16510 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16510/])
HADOOP-16289. Allow extra jsvc startup option in (weichiu: rev 
93f2283a69ea4e07a998f2a4065f238f9574921b)
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> Allow extra jsvc startup option in hadoop_start_secure_daemon in 
> hadoop-functions.sh
> 
>
> Key: HADOOP-16289
> URL: https://issues.apache.org/jira/browse/HADOOP-16289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16289.001.patch, HADOOP-16289.002.patch
>
>
> Due to different opinions in HADOOP-16276 and we might want to pull in more 
> people for discussion, I want to speed this up by making a simple change to 
> the script in this jira (which would have been included in HADOOP-16276), 
> that is, to add HADOOP_DAEMON_JSVC_EXTRA_OPTS to jsvc startup command which 
> allows users to specify their extra options for jsvc.
> CC [~tlipcon] [~hgadre] [~jojochuang]






[GitHub] [hadoop] hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config 
should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-489820176
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 509 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 422 | trunk passed |
   | +1 | compile | 210 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 771 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 238 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 426 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 406 | the patch passed |
   | +1 | compile | 219 | the patch passed |
   | +1 | javac | 219 | the patch passed |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 132 | the patch passed |
   | +1 | findbugs | 437 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 130 | hadoop-hdds in the patch failed. |
   | -1 | unit | 928 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5879 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs yamllint |
   | uname | Linux 618db41f1fc3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 597fa47 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/testReport/ |
   | Max. process+thread count | 4471 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/4/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] shwetayakkali commented on a change in pull request #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
shwetayakkali commented on a change in pull request #797: HDDS-1489. 
Unnecessary log messages on console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#discussion_r281399103
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonefs/docker-config
 ##
 @@ -35,3 +35,4 @@ LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
 LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
 LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
+LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
 
 Review comment:
   checkstyle warning to add a new line at the end of file.
   





[GitHub] [hadoop] shwetayakkali commented on a change in pull request #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
shwetayakkali commented on a change in pull request #797: HDDS-1489. 
Unnecessary log messages on console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#discussion_r281399158
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
 ##
 @@ -36,3 +36,4 @@ LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH
 LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
 LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
 LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
+LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
 
 Review comment:
   checkstyle warning here to add a new line at the end of file.





[jira] [Updated] (HADOOP-16298) Manage/Renew delegation tokens for externally scheduled jobs

2019-05-06 Thread Pankaj (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj updated HADOOP-16298:

Attachment: (was: Proposal for changes to UGI for managing_renewing 
externally managed delegation tokens.pdf)

> Manage/Renew delegation tokens for externally scheduled jobs
> 
>
> Key: HADOOP-16298
> URL: https://issues.apache.org/jira/browse/HADOOP-16298
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.7.3, 2.9.0, 3.2.0, 3.3.0
>Reporter: Pankaj
>Priority: Major
> Fix For: 2.7.3, 2.9.0, 3.2.0, 3.3.0
>
>
> * Presently when jobs are run in the Hadoop ecosystem, the implicit 
> assumption is that YARN will be used as a scheduling agent with access to 
> appropriate keytabs for renewal of kerberos tickets and delegation tokens. 
>  * Jobs that interact with kerberized hadoop services such as hbase/hive/hdfs 
> and use an external scheduler such as Kubernetes, typically do not have 
> access to keytabs. In such cases, delegation tokens are a logical choice for 
> interacting with a kerberized cluster. These tokens are issued based on some 
> external auth mechanism (such as Kube LDAP authentication).






[GitHub] [hadoop] swagle opened a new pull request #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
swagle opened a new pull request #797: HDDS-1489. Unnecessary log messages on 
console with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797
 
 
   cc: @arp7 





[jira] [Created] (HADOOP-16298) Manage/Renew delegation tokens for externally scheduled jobs

2019-05-06 Thread Pankaj (JIRA)
Pankaj created HADOOP-16298:
---

 Summary: Manage/Renew delegation tokens for externally scheduled 
jobs
 Key: HADOOP-16298
 URL: https://issues.apache.org/jira/browse/HADOOP-16298
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.2.0, 2.9.0, 2.7.3, 3.3.0
Reporter: Pankaj
 Fix For: 3.3.0, 3.2.0, 2.9.0, 2.7.3
 Attachments: Proposal for changes to UGI for managing_renewing 
externally managed delegation tokens.pdf

* Presently, when jobs are run in the Hadoop ecosystem, the implicit assumption 
is that YARN will be used as the scheduling agent, with access to appropriate 
keytabs for renewal of Kerberos tickets and delegation tokens. 
 * Jobs that interact with kerberized Hadoop services such as HBase/Hive/HDFS 
and use an external scheduler such as Kubernetes typically do not have access 
to keytabs. In such cases, delegation tokens are a logical choice for 
interacting with a kerberized cluster. These tokens are issued based on some 
external auth mechanism (such as Kube LDAP authentication).
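
As a rough sketch of the flow this targets (assuming tokens are pre-fetched by an 
external authentication step and written to a token file; the file path and class 
name below are illustrative, not part of this proposal):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class ExternalTokenBootstrap {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Token file produced by the external auth flow and mounted into the
    // container (illustrative path).
    Path tokenFile = new Path("/var/run/secrets/hadoop/delegation.tokens");
    Credentials creds = Credentials.readTokenStorageFile(tokenFile, conf);
    // Attach the delegation tokens to the current UGI so HDFS/Hive/HBase
    // clients in this JVM can authenticate without a keytab.
    UserGroupInformation.getCurrentUser().addCredentials(creds);
    // Renewing these tokens before they expire is the gap this JIRA addresses.
  }
}
{code}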






[jira] [Commented] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-05-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834255#comment-16834255
 ] 

Siyao Meng commented on HADOOP-16115:
-

I just ran TestHttpServer tests, including testJersey, with OpenJDK 11.0.3u9 on 
Ubuntu 19.04 (w/ branch-3.1.2 + HADOOP-12760 + HADOOP-15783 + HDFS-13932 + 
HADOOP-15775 + HADOOP-16016) and found this test passed. Probably already fixed 
in OpenJDK 11.0.3?
{code}
$JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64 mvn test -fn 
-Dsurefire.printSummary -Dtest=TestHttpServer
..
[INFO] Running org.apache.hadoop.http.TestHttpServer
[INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.854 s 
- in org.apache.hadoop.http.TestHttpServer
...
{code}

It would help if somebody would also verify this on their setup.

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {noformat}






[GitHub] [hadoop] arp7 commented on issue #797: HDDS-1489. Unnecessary log messages on console with Ozone shell.

2019-05-06 Thread GitBox
arp7 commented on issue #797: HDDS-1489. Unnecessary log messages on console 
with Ozone shell.
URL: https://github.com/apache/hadoop/pull/797#issuecomment-489817144
 
 
   Instead of setting the log threshold to WARN, would it be better to move the
   offending log messages to DEBUG level?





[jira] [Comment Edited] (HADOOP-16115) [JDK 11] TestHttpServer#testJersey fails

2019-05-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834255#comment-16834255
 ] 

Siyao Meng edited comment on HADOOP-16115 at 5/6/19 10:52 PM:
--

I just ran TestHttpServer tests, including testJersey, with OpenJDK 11.0.3u7 
(build 11.0.3+7-Ubuntu-1ubuntu1) on Ubuntu 19.04 (w/ branch-3.1.2 + 
HADOOP-12760 + HADOOP-15783 + HDFS-13932 + HADOOP-15775 + HADOOP-16016) and 
found this test passed. Probably already fixed in OpenJDK 11.0.3?
{code}
$JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64 mvn test -fn 
-Dsurefire.printSummary -Dtest=TestHttpServer
..
[INFO] Running org.apache.hadoop.http.TestHttpServer
[INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.854 s 
- in org.apache.hadoop.http.TestHttpServer
...
{code}

It would help if somebody would also verify this on their setup.


was (Author: smeng):
I just ran TestHttpServer tests, including testJersey, with OpenJDK 11.0.3u9 on 
Ubuntu 19.04 (w/ branch-3.1.2 + HADOOP-12760 + HADOOP-15783 + HDFS-13932 + 
HADOOP-15775 + HADOOP-16016) and found this test passed. Probably already fixed 
in OpenJDK 11.0.3?
{code}
$JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64 mvn test -fn 
-Dsurefire.printSummary -Dtest=TestHttpServer
..
[INFO] Running org.apache.hadoop.http.TestHttpServer
[INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.854 s 
- in org.apache.hadoop.http.TestHttpServer
...
{code}

It would help if somebody would also verify this on their setup.

> [JDK 11] TestHttpServer#testJersey fails
> 
>
> Key: HADOOP-16115
> URL: https://issues.apache.org/jira/browse/HADOOP-16115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] Running org.apache.hadoop.http.TestHttpServer
> [ERROR] Tests run: 26, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 5.954 s <<< FAILURE! - in org.apache.hadoop.http.TestHttpServer
> [ERROR] testJersey(org.apache.hadoop.http.TestHttpServer)  Time elapsed: 
> 0.128 s  <<< ERROR!
> java.io.IOException: Server returned HTTP response code: 500 for URL: 
> http://localhost:40339/jersey/foo?op=bar
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1913)
>   at 
> java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1509)
>   at 
> org.apache.hadoop.http.HttpServerFunctionalTest.readOutput(HttpServerFunctionalTest.java:260)
>   at 
> org.apache.hadoop.http.TestHttpServer.testJersey(TestHttpServer.java:526)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> 

[jira] [Updated] (HADOOP-16289) Allow extra jsvc startup option in hadoop_start_secure_daemon in hadoop-functions.sh

2019-05-06 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16289:
-
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch and review, y'all!

> Allow extra jsvc startup option in hadoop_start_secure_daemon in 
> hadoop-functions.sh
> 
>
> Key: HADOOP-16289
> URL: https://issues.apache.org/jira/browse/HADOOP-16289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16289.001.patch, HADOOP-16289.002.patch
>
>
> Due to different opinions in HADOOP-16276 and we might want to pull in more 
> people for discussion, I want to speed this up by making a simple change to 
> the script in this jira (which would have been included in HADOOP-16276), 
> that is, to add HADOOP_DAEMON_JSVC_EXTRA_OPTS to jsvc startup command which 
> allows users to specify their extra options for jsvc.
> CC [~tlipcon] [~hgadre] [~jojochuang]






[jira] [Commented] (HADOOP-16289) Allow extra jsvc startup option in hadoop_start_secure_daemon in hadoop-functions.sh

2019-05-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834248#comment-16834248
 ] 

Wei-Chiu Chuang commented on HADOOP-16289:
--

Committing based on Todd's +1

> Allow extra jsvc startup option in hadoop_start_secure_daemon in 
> hadoop-functions.sh
> 
>
> Key: HADOOP-16289
> URL: https://issues.apache.org/jira/browse/HADOOP-16289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16289.001.patch, HADOOP-16289.002.patch
>
>
> Due to different opinions in HADOOP-16276 and we might want to pull in more 
> people for discussion, I want to speed this up by making a simple change to 
> the script in this jira (which would have been included in HADOOP-16276), 
> that is, to add HADOOP_DAEMON_JSVC_EXTRA_OPTS to jsvc startup command which 
> allows users to specify their extra options for jsvc.
> CC [~tlipcon] [~hgadre] [~jojochuang]






[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-489761040
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1155 | trunk passed |
   | +1 | compile | 1069 | trunk passed |
   | +1 | checkstyle | 152 | trunk passed |
   | +1 | mvnsite | 121 | trunk passed |
   | +1 | shadedclient | 1061 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 94 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 185 | trunk passed |
   | -0 | patch | 98 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 966 | the patch passed |
   | +1 | javac | 966 | the patch passed |
   | -0 | checkstyle | 148 | root: The patch generated 54 new + 54 unchanged - 
1 fixed = 108 total (was 55) |
   | +1 | mvnsite | 118 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 734 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 31 | hadoop-tools_hadoop-aws generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) |
   | -1 | findbugs | 73 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 513 | hadoop-common in the patch passed. |
   | +1 | unit | 277 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7531 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostFirst 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
52-72] |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostLast 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
76-87] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux f565774759ab 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fb7c1ca |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/testReport/ |
   | Max. process+thread count | 1430 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/21/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries 

[GitHub] [hadoop] xiaoyuyao commented on issue #726: HDDS-1424. Support multi-container robot test execution

2019-05-06 Thread GitBox
xiaoyuyao commented on issue #726: HDDS-1424. Support multi-container robot 
test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-489751889
 
 
   +1 from me too. This will allow more acceptance tests to be added easily. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-06 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-16294:
--
Status: Patch Available  (was: In Progress)

> Enable access to input options by DistCp subclasses
> ---
>
> Key: HADOOP-16294
> URL: https://issues.apache.org/jira/browse/HADOOP-16294
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Trivial
>
> In the DistCp class, the DistCpOptions are private with no getter method 
> allowing retrieval by subclasses. So a subclass would need to save its own 
> copy of the inputOptions supplied to its constructor, if it wishes to 
> override the createInputFileListing method with logic similar to the original 
> implementation, i.e. calling CopyListing#buildListing with a path and input 
> options.
> I propose adding to DistCp this method,
> {noformat}
>   protected DistCpOptions getInputOptions() {
> return inputOptions;
>   }
> {noformat}
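
As a usage sketch of the proposal (illustration only): a hypothetical subclass 
that relies on the proposed getter, assuming getInputOptions() is added to 
DistCp as above. The class name and the custom logic inside it are made up and 
are not part of any patch.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

// Hypothetical subclass: with the proposed getter there is no need to keep a
// private copy of the options that were passed to the constructor.
public class AuditingDistCp extends DistCp {

  public AuditingDistCp(Configuration configuration, DistCpOptions inputOptions)
      throws Exception {
    super(configuration, inputOptions);
  }

  @Override
  protected Path createInputFileListing(Job job) throws IOException {
    DistCpOptions options = getInputOptions();
    for (Path source : options.getSourcePaths()) {
      // ... subclass-specific auditing or filtering of each source path ...
    }
    // Fall back to the standard listing behaviour.
    return super.createInputFileListing(job);
  }
}
{code}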



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-06 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16294 started by Andrew Olson.
-
> Enable access to input options by DistCp subclasses
> ---
>
> Key: HADOOP-16294
> URL: https://issues.apache.org/jira/browse/HADOOP-16294
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Trivial
>
> In the DistCp class, the DistCpOptions are private with no getter method 
> allowing retrieval by subclasses. So a subclass would need to save its own 
> copy of the inputOptions supplied to its constructor, if it wishes to 
> override the createInputFileListing method with logic similar to the original 
> implementation, i.e. calling CopyListing#buildListing with a path and input 
> options.
> I propose adding to DistCp this method,
> {noformat}
>   protected DistCpOptions getInputOptions() {
> return inputOptions;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-05-06 Thread GitBox
arp7 commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281307008
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonefs/test.sh
 ##
 @@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm ozonefs/ozonefs.robot
+
+
+## TODO: As of the hhe o3fs tests are unstable.
 
 Review comment:
   Minor: typo.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #726: HDDS-1424. Support multi-container robot test execution

2019-05-06 Thread GitBox
arp7 commented on issue #726: HDDS-1424. Support multi-container robot test 
execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-489743634
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-05-06 Thread GitBox
arp7 commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r281307184
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozones3/test.sh
 ##
 @@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+start_docker_env
+
+execute_robot_test scm basic/basic.robot
 
 Review comment:
   Is it deliberate to rerun the basic test within each sub-test?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16289) Allow extra jsvc startup option in hadoop_start_secure_daemon in hadoop-functions.sh

2019-05-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834126#comment-16834126
 ] 

Siyao Meng commented on HADOOP-16289:
-

Thanks [~tlipcon] and [~hgadre] for reviewing.

> Allow extra jsvc startup option in hadoop_start_secure_daemon in 
> hadoop-functions.sh
> 
>
> Key: HADOOP-16289
> URL: https://issues.apache.org/jira/browse/HADOOP-16289
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16289.001.patch, HADOOP-16289.002.patch
>
>
> Due to the differing opinions in HADOOP-16276, and since we might want to pull 
> in more people for the discussion, I want to speed this up by making a simple 
> change to the script in this jira (which would have been included in 
> HADOOP-16276): add HADOOP_DAEMON_JSVC_EXTRA_OPTS to the jsvc startup command 
> so that users can specify their extra options for jsvc.
> CC [~tlipcon] [~hgadre] [~jojochuang]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #745: HDDS-1441. Remove usage of getRetryFailureException. (swagle)

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #745: HDDS-1441. Remove usage of 
getRetryFailureException. (swagle)
URL: https://github.com/apache/hadoop/pull/745#issuecomment-489724608
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 405 | trunk passed |
   | +1 | compile | 203 | trunk passed |
   | +1 | checkstyle | 47 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 796 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 417 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 15 | hadoop-ozone in the patch failed. |
   | -1 | compile | 18 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 18 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 17 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 14 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 628 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 17 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3067 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/745 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 007e5e55bb1d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 12b7059 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-745/out/maven-patch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-745/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/7/artifact/out/patch-findbugs-hadoop-ozone.txt
 

[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834009#comment-16834009
 ] 

Eric Yang commented on HADOOP-16091:


{quote}I wouldn't like to create an additional thread here, but I think the 
release:release goal is not part of the fundamental design of maven. This is 
just a maven plugin which can be replaced by a better release plugin or other 
processes. (I would say that life-cycle/goal bindings or profiles are part of 
the design)

BTW I think the "release:release" plugin has some design problems, but it's 
another story (But I prefer not to use it, for example because the created tags 
are pushed too early).{quote}

The Maven release plugin is part of the Apache Maven site, and its source code 
is also part of Maven.  The release plugin is the reference implementation for 
performing "maven deploy", the last part of the Maven life cycle.  Without doing 
deploy, the project is only playing in a sandbox Maven environment without a 
full-fledged continuous integration and delivery system.  We can remain in 
disagreement on the proper usage of Maven, because Hadoop's Maven build is 1/3 
broken (nothing for the deploy phase) and 1/3 hackish (not building tarball 
artifacts that are Maven-aware).  There are many hackish CI developers who put 
up [articles|https://axelfontaine.com/blog/dead-burried.html] to show alternate 
methods of setting the version number.  That can work in a limited scope, but it 
is easy to crash Maven and Nexus when some validations are taken away, such as 
the parent version number and the dependency version numbers.  It is harder to 
get the release plugin right because two Nexus repositories (SNAPSHOT and 
release) need to be set up to have the end-to-end flow working.  By promoting 
from the SNAPSHOT repository to the release repository, the voting procedure for 
a release would have been easier for Hadoop.  I don't expect everyone to follow 
the procedure to the letter, but it sometimes saves time to learn from someone 
with more experience.

{quote}Do you have any other problem with the k8s-dev approach?{quote}

I don't like the pod design in Kubernetes.  It is one way to solve multi-process 
grouping, from a narrow point of view.  If you buy into a 
[ksync|https://vapor-ware.github.io/ksync/]-like design where binaries are 
shared outside of the docker image, it only complicates binary delivery in a 
distributed environment.  The development mental model can work at small scale, 
but it has problems scaling to multiple nodes.  I recommend avoiding this design 
for Hadoop-related projects to reduce the messy difference between the 
development environment and the production environment.  It only makes the 
binary distribution problem harder for bloated installer projects like Ambari 
and Bigtop.  I have done my preaching to the choir in this area, and it is up to 
you.

{quote}I think the containerized world is different. For multiple versions we 
need to use different containers, therefore we don't need to add the version 
inside the containers any more.{quote}

What makes containers a special case when there is an option to allow a 
container to appear the same as the host system?  What if we would like to 
symlink /opt/apache/ozone/logs to /var/log/ozone?  There are many usages of 
symlinks in the containers other than the versioned binary package directory.

{quote}Usually I don't think the examples are good arguments (I think it's more 
important to find the right solution instead of following existing practices) 
but I checked spark images (which can be created by bin/docker-image-tool.sh 
from the spark distribution) and they also use /opt/spark. (But I am fine to use 
/opt/apache/ozone if you prefer it. I like the apache subdir.){quote}

I am OK with using /opt/apache/ozone, but I still prefer using a symlink so that 
people can find out the version number without looking into the contents of the 
subdirectories.

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process 

[jira] [Updated] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted

2019-05-06 Thread eBugs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eBugs updated HADOOP-16295:
---
Description: 
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
 

An {{IOException}} can mean many different errors, while the error message 
indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
problem. For example, the callers trying to handle other {{IOException}} may 
accidentally (and incorrectly) handle the interrupt. An 
{{InterruptedIOException}} may be better here.

  was:
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
 

An {{IOException}} can mean many different errors, while the error message 
indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
problem. For example, the callers trying to handle other {{IOException}} may 
accidentally (and incorrectly) handle the interrupt. An 
{{InterruptedIOException}} may be better here.


> FileUtil.replaceFile() throws an IOException when it is interrupted
> ---
>
> Key: HADOOP-16295
> URL: https://issues.apache.org/jira/browse/HADOOP-16295
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: eBugs
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message indicate different error conditions.
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
> Line: 1387
> {code:java}
> throw new IOException("replaceFile interrupted.");{code}
>  
> An {{IOException}} can mean many different errors, while the error message 
> indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
> problem. For example, the callers trying to handle other {{IOException}} may 
> accidentally (and incorrectly) handle the interrupt. An 
> {{InterruptedIOException}} may be better here.
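
For illustration only, a minimal standalone sketch of the alternative being 
suggested, loosely modelled on the retry loop around that line; the helper below 
is made up and is not the actual FileUtil code.

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch: surface the interrupt as an InterruptedIOException (a subclass of
// IOException) so callers can tell "interrupted" apart from other I/O errors.
final class ReplaceFileSketch {

  static void replaceFile(File src, File target) throws IOException {
    int retries = 5;
    while (!src.renameTo(target) && retries-- > 0) {
      try {
        Thread.sleep(1000);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve the interrupt status
        throw new InterruptedIOException("replaceFile interrupted.");
      }
    }
    if (retries < 0) {
      throw new IOException("Unable to replace " + target + " with " + src);
    }
  }

  private ReplaceFileSketch() {
  }
}
{code}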



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16297) MiniKdc.main() throws a RuntimeException when the work directory does not exist

2019-05-06 Thread eBugs (JIRA)
eBugs created HADOOP-16297:
--

 Summary: MiniKdc.main() throws a RuntimeException when the work 
directory does not exist
 Key: HADOOP-16297
 URL: https://issues.apache.org/jira/browse/HADOOP-16297
 Project: Hadoop Common
  Issue Type: Bug
Reporter: eBugs


Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java

Line: 86
{code:java}
if (!workDir.exists()) {
  throw new RuntimeException("Specified work directory does not exists: "
  + workDir.getAbsolutePath());
}{code}
 

{{RuntimeException}} is usually used to represent errors in the program logic 
(think of one of its subclasses, {{NullPointerException}}), while the error 
message indicates that the {{workDir}} does not exist. This mismatch could be a 
problem. For example, the callers may miss the "directory does not exist" 
scenario because an inaccurate exception is thrown. Or, the callers trying to 
handle other {{RuntimeException}} may accidentally (and incorrectly) handle the 
"directory does not exist" scenario. Throwing a {{FileNotFoundException}} may 
be more accurate here.
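
For illustration, a minimal standalone sketch of the suggested alternative; the 
helper below is made up and is not the actual MiniKdc code.

{code:java}
import java.io.File;
import java.io.FileNotFoundException;

// Sketch: report a missing work directory with a FileNotFoundException (an
// IOException) rather than a generic RuntimeException.
final class WorkDirCheckSketch {

  static void checkWorkDir(File workDir) throws FileNotFoundException {
    if (!workDir.exists()) {
      throw new FileNotFoundException("Specified work directory does not exist: "
          + workDir.getAbsolutePath());
    }
  }

  private WorkDirCheckSketch() {
  }
}
{code}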



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ commented on issue #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-05-06 Thread GitBox
DadanielZ commented on issue #768: HADOOP-16269. ABFS: add listFileStatus with 
StartFrom.
URL: https://github.com/apache/hadoop/pull/768#issuecomment-489676777
 
 
   Can anyone help to review or commit this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16279 started by Gabor Bota.
---
> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16296) MiniKdc() throws a RuntimeException when it fails to create its workDir

2019-05-06 Thread eBugs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eBugs updated HADOOP-16296:
---
Description: 
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java

Line: 223
{code:java}
throw new RuntimeException("Cannot create directory " + this.workDir);{code}
 

{{RuntimeException}} is usually used to represent errors in the program logic 
(think of one of its subclasses, {{NullPointerException}}), while the error 
message indicates that {{MiniKdc()}} can't create the directory {{workDir}}. 
This mismatch could be a problem. For example, the callers may miss the case 
where {{MiniKdc()}} fails to create the directory. Or, the callers trying to 
handle other {{RuntimeException}} may accidentally (and incorrectly) handle the 
directory creation failure. Maybe throwing an {{IOException}} is better here.

  was:
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java

Line: 223
{code:java}
throw new RuntimeException("Allocation file " + url
+ " found on the classpath is not on the local filesystem.");{code}
 

{{RuntimeException}} is usually used to represent errors in the program logic 
(think of one of its subclasses, {{NullPointerException}}), while the error 
message indicates that {{MiniKdc()}} can't create the directory {{workDir}}. 
This mismatch could be a problem. For example, the callers may miss the case 
where {{MiniKdc()}} fails to create the directory. Or, the callers trying to 
handle other {{RuntimeException}} may accidentally (and incorrectly) handle the 
directory creation failure. Maybe throwing an {{IOException}} is better here.


> MiniKdc() throws a RuntimeException when it fails to create its workDir
> ---
>
> Key: HADOOP-16296
> URL: https://issues.apache.org/jira/browse/HADOOP-16296
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: eBugs
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message indicate different error conditions.
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
> Line: 223
> {code:java}
> throw new RuntimeException("Cannot create directory " + this.workDir);{code}
>  
> {{RuntimeException}} is usually used to represent errors in the program logic 
> (think of one of its subclasses, {{NullPointerException}}), while the error 
> message indicates that {{MiniKdc()}} can't create the directory {{workDir}}. 
> This mismatch could be a problem. For example, the callers may miss the case 
> where {{MiniKdc()}} fails to create the directory. Or, the callers trying to 
> handle other {{RuntimeException}} may accidentally (and incorrectly) handle 
> the directory creation failure. Maybe throwing an {{IOException}} is better 
> here.
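
For illustration, a minimal standalone sketch of the suggested alternative; the 
helper below is made up and is not the actual MiniKdc code.

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch: surface a directory-creation failure as an IOException so callers can
// treat it as an environment/IO problem rather than a programming error.
final class WorkDirCreateSketch {

  static void createWorkDir(File workDir) throws IOException {
    if (!workDir.exists() && !workDir.mkdirs()) {
      throw new IOException("Cannot create directory " + workDir);
    }
  }

  private WorkDirCreateSketch() {
  }
}
{code}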



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq removed a comment on issue #766: YARN-9509: Added a configuration for admins to be able to capped per-container cpu usage based on a multiplier

2019-05-06 Thread GitBox
jiwq removed a comment on issue #766: YARN-9509: Added a configuration for 
admins to be able to capped per-container cpu usage based on a multiplier
URL: https://github.com/apache/hadoop/pull/766#issuecomment-489671681
 
 
   +1 (non-binding)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16296) MiniKdc() throws a RuntimeException when it fails to create its workDir

2019-05-06 Thread eBugs (JIRA)
eBugs created HADOOP-16296:
--

 Summary: MiniKdc() throws a RuntimeException when it fails to 
create its workDir
 Key: HADOOP-16296
 URL: https://issues.apache.org/jira/browse/HADOOP-16296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: eBugs


Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java

Line: 223
{code:java}
throw new RuntimeException("Allocation file " + url
+ " found on the classpath is not on the local filesystem.");{code}
 

{{RuntimeException}} is usually used to represent errors in the program logic 
(think of one of its subclasses, {{NullPointerException}}), while the error 
message indicates that {{MiniKdc()}} can't create the directory {{workDir}}. 
This mismatch could be a problem. For example, the callers may miss the case 
where {{MiniKdc()}} fails to create the directory. Or, the callers trying to 
handle other {{RuntimeException}} may accidentally (and incorrectly) handle the 
directory creation failure. Maybe throwing an {{IOException}} is better here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on issue #766: YARN-9509: Added a configuration for admins to be able to capped per-container cpu usage based on a multiplier

2019-05-06 Thread GitBox
jiwq commented on issue #766: YARN-9509: Added a configuration for admins to be 
able to capped per-container cpu usage based on a multiplier
URL: https://github.com/apache/hadoop/pull/766#issuecomment-489671681
 
 
   +1 (non-binding)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted

2019-05-06 Thread eBugs in Cloud Systems (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eBugs in Cloud Systems updated HADOOP-16295:

Description: 
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions.

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
 

An {{IOException}} can mean many different errors, while the error message 
indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
problem. For example, the callers trying to handle other {{IOException}} may 
accidentally (and incorrectly) handle the interrupt. An 
{{InterruptedIOException}} may be better here.

  was:
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions. 
Since we are not very familiar with Hadoop's internal work flow, could you 
please help us verify if this is a bug, i.e., will the callers have trouble 
handling the exception, and will the users/admins have trouble diagnosing the 
failure?

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
Reason: An {{IOException}} can mean many different errors, while the error 
message indicates that {{replaceFile()}} is interrupted. Will this mismatch be 
a problem? For example, will the callers trying to handle other {{IOException}} 
accidentally (and incorrectly) handle the interrupt? Is an 
{{InterruptedIOException}} a better exception here?


> FileUtil.replaceFile() throws an IOException when it is interrupted
> ---
>
> Key: HADOOP-16295
> URL: https://issues.apache.org/jira/browse/HADOOP-16295
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: eBugs in Cloud Systems
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message seem to indicate different error 
> conditions.
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
> Line: 1387
> {code:java}
> throw new IOException("replaceFile interrupted.");{code}
>  
> An {{IOException}} can mean many different errors, while the error message 
> indicates that {{replaceFile()}} is interrupted. This mismatch could be a 
> problem. For example, the callers trying to handle other {{IOException}} may 
> accidentally (and incorrectly) handle the interrupt. An 
> {{InterruptedIOException}} may be better here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833781#comment-16833781
 ] 

Gabor Bota edited comment on HADOOP-16279 at 5/6/19 3:07 PM:
-

I started to work on this issue with the following principles:
 * Use the same ttl for entries and authoritative directory listings
 * Entries which are not directories can be expired. Then the returned metadata 
from the MS will be null.
 * Add two new methods, {{pruneExpiredTtl()}} and {{pruneExpiredTtl(String 
keyPrefix)}}, to the {{MetadataStore}} interface (a rough sketch follows below). 
These methods will delete all expired metadata from the ms.
 * I will use the {{last_updated}} field in the ms for both file metadata and 
authoritative directory expiry.
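
A rough sketch of what those two additions could look like (the signatures and 
javadoc here are assumptions, not the final design, and the real 
{{MetadataStore}} interface has many more methods):

{code:java}
import java.io.IOException;

// Illustrative stand-in for the real MetadataStore interface, showing only the
// two proposed prune methods from the list above.
interface MetadataStoreTtlSketch {

  /**
   * Delete every metadata entry (including tombstones) whose last_updated
   * timestamp is older than the configured TTL.
   */
  void pruneExpiredTtl() throws IOException;

  /**
   * Same as {@link #pruneExpiredTtl()}, but restricted to entries whose key
   * starts with the given prefix.
   */
  void pruneExpiredTtl(String keyPrefix) throws IOException;
}
{code}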


was (Author: gabor.bota):
I started to work on this issue with the following principles:
 * Use the same ttl for entries and authoritative directory listing
 * Entries which are not directories can be expired. Then the returned metadata 
from the MS will be null.
 * Add two new methods {{pruneExpiredTtl()}} and {{pruneExpiredTtl(String 
keyPrefix)}} to {{MetadataStore}} interface. These methods will delete all 
expired metadata from the ms.

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16279:

Comment: was deleted

(was: I will use {{last_updated}} field in ms for both file metadata and 
authoritative directory expiry.

 )

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833897#comment-16833897
 ] 

Gabor Bota commented on HADOOP-16279:
-

I will use {{last_updated}} field in ms for both file metadata and 
authoritative directory expiry.

 

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16050) s3a SSL connections should use OpenSSL

2019-05-06 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833878#comment-16833878
 ] 

Sahil Takiar commented on HADOOP-16050:
---

[~ste...@apache.org] address the comments on the PR. Could you take another 
look? https://github.com/apache/hadoop/pull/784

> s3a SSL connections should use OpenSSL
> --
>
> Key: HADOOP-16050
> URL: https://issues.apache.org/jira/browse/HADOOP-16050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.1
>Reporter: Justin Uang
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Screen Shot 2019-01-17 at 2.57.06 PM.png
>
>
> We have found that when running the S3AFileSystem, it picks GCM as the ssl 
> cipher suite. Unfortunately this is well known to be slow on java 8: 
> [https://stackoverflow.com/questions/25992131/slow-aes-gcm-encryption-and-decryption-with-java-8u20.]
>  
> In practice we have seen that it can take well over 50% of our CPU time in 
> spark workflows. We should add an option to set the list of cipher suites we 
> would like to use. !Screen Shot 2019-01-17 at 2.57.06 PM.png!
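
As a side note on the "option to set the list of cipher suites" mentioned above: 
a minimal, standalone Java sketch of filtering the GCM suites out of the JVM 
defaults. This only illustrates the general idea and says nothing about how S3A 
or the AWS SDK actually wires up its SSL configuration.

{code:java}
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Standalone illustration: build a cipher-suite list that excludes the AES-GCM
// suites that are slow on Java 8. Plugging such a list into the S3A connection
// factory is out of scope here.
public final class CipherSuiteFilterSketch {

  public static void main(String[] args) throws Exception {
    SSLContext context = SSLContext.getDefault();
    SSLParameters defaults = context.getDefaultSSLParameters();

    List<String> withoutGcm = new ArrayList<>();
    for (String suite : defaults.getCipherSuites()) {
      if (!suite.contains("_GCM_")) {  // e.g. drops TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        withoutGcm.add(suite);
      }
    }
    System.out.println("Cipher suites without GCM: " + withoutGcm);
  }
}
{code}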



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted

2019-05-06 Thread eBugs in Cloud Systems (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eBugs in Cloud Systems updated HADOOP-16295:

Description: 
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions. 
Since we are not very familiar with Hadoop's internal work flow, could you 
please help us verify if this is a bug, i.e., will the callers have trouble 
handling the exception, and will the users/admins have trouble diagnosing the 
failure?

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
Reason: An {{IOException}} can mean many different errors, while the error 
message indicates that {{replaceFile()}} is interrupted. Will this mismatch be 
a problem? For example, will the callers trying to handle other {{IOException}} 
accidentally (and incorrectly) handle the interrupt? Is an 
{{InterruptedIOException}} a better exception here?

  was:
Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions. 
Since we are not very familiar with Hadoop's internal work flow, could you 
please help us verify if this is a bug, i.e., will the callers have trouble 
handling the exception, and will the users/admins have trouble diagnosing the 
failure?

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
Reason: An {{IOException}} can mean many different errors, while the error 
message indicates that {{replaceFile()}} is interrupted. Will this mismatch be 
a problem? For example, will the callers try to handle other {{IOException}} 
accidentally (and incorrectly) handle the interrupt? Is an 
{{InterruptedIOException}} a better exception here?


> FileUtil.replaceFile() throws an IOException when it is interrupted
> ---
>
> Key: HADOOP-16295
> URL: https://issues.apache.org/jira/browse/HADOOP-16295
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: eBugs in Cloud Systems
>Priority: Minor
>
> Dear Hadoop developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted the following {{throw}} statement 
> whose exception class and error message seem to indicate different error 
> conditions. Since we are not very familiar with Hadoop's internal work flow, 
> could you please help us verify if this is a bug, i.e., will the callers have 
> trouble handling the exception, and will the users/admins have trouble 
> diagnosing the failure?
>  
> Version: Hadoop-3.1.2
> File: 
> HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
> Line: 1387
> {code:java}
> throw new IOException("replaceFile interrupted.");{code}
> Reason: An {{IOException}} can mean many different errors, while the error 
> message indicates that {{replaceFile()}} is interrupted. Will this mismatch 
> be a problem? For example, will the callers trying to handle other 
> {{IOException}} accidentally (and incorrectly) handle the interrupt? Is an 
> {{InterruptedIOException}} a better exception here?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-489644000
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 58 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1302 | trunk passed |
   | +1 | compile | 28 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 768 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | trunk passed |
   | 0 | spotbugs | 44 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 42 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 19 | hadoop-distcp in the patch failed. |
   | -1 | compile | 19 | hadoop-distcp in the patch failed. |
   | -1 | javac | 19 | hadoop-distcp in the patch failed. |
   | -0 | checkstyle | 14 | hadoop-tools/hadoop-distcp: The patch generated 1 
new + 16 unchanged - 0 fixed = 17 total (was 16) |
   | -1 | mvnsite | 19 | hadoop-distcp in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 821 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | the patch passed |
   | -1 | findbugs | 21 | hadoop-distcp in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 21 | hadoop-distcp in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3348 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/796 |
   | JIRA Issue | HADOOP-16294 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c058a8d2f85c 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 12b7059 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-distcp.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-compile-hadoop-tools_hadoop-distcp.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-compile-hadoop-tools_hadoop-distcp.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-distcp.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-distcp.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833873#comment-16833873
 ] 

Hadoop QA commented on HADOOP-16294:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 16 unchanged - 0 fixed = 17 total (was 16) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-796/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/796 |
| JIRA Issue | HADOOP-16294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux c058a8d2f85c 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 12b7059 |
| Default Java | 1.8.0_191 |
| mvninstall | 

[jira] [Created] (HADOOP-16295) FileUtil.replaceFile() throws an IOException when it is interrupted

2019-05-06 Thread eBugs in Cloud Systems (JIRA)
eBugs in Cloud Systems created HADOOP-16295:
---

 Summary: FileUtil.replaceFile() throws an IOException when it is 
interrupted
 Key: HADOOP-16295
 URL: https://issues.apache.org/jira/browse/HADOOP-16295
 Project: Hadoop Common
  Issue Type: Bug
Reporter: eBugs in Cloud Systems


Dear Hadoop developers, we are developing a tool to detect exception-related 
bugs in Java. Our prototype has spotted the following {{throw}} statement whose 
exception class and error message seem to indicate different error conditions. 
Since we are not very familiar with Hadoop's internal work flow, could you 
please help us verify if this is a bug, i.e., will the callers have trouble 
handling the exception, and will the users/admins have trouble diagnosing the 
failure?

 

Version: Hadoop-3.1.2

File: 
HADOOP-ROOT/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

Line: 1387
{code:java}
throw new IOException("replaceFile interrupted.");{code}
Reason: An {{IOException}} can mean many different errors, while the error 
message indicates that {{replaceFile()}} is interrupted. Will this mismatch be 
a problem? For example, will the callers trying to handle other {{IOException}} 
accidentally (and incorrectly) handle the interrupt? Is an 
{{InterruptedIOException}} a better exception here?
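
For illustration only, here is a minimal sketch (not the actual {{FileUtil}} code) of how the throw could be reworked so that the interrupt is distinguishable from other I/O failures while the thread's interrupt status is preserved. The surrounding retry loop is elided and the class/method names are made up:
{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;

public final class ReplaceFileSketch {

  private ReplaceFileSketch() {
  }

  /** Hypothetical rework of the throw site discussed above. */
  public static void waitBetweenRetries() throws IOException {
    try {
      // Stand-in for the real wait between rename attempts in replaceFile().
      Thread.sleep(1000);
    } catch (InterruptedException e) {
      // Keep the interrupt visible to callers further up the stack.
      Thread.currentThread().interrupt();
      InterruptedIOException iioe =
          new InterruptedIOException("replaceFile interrupted.");
      iioe.initCause(e);
      // Still an IOException, but one callers can single out.
      throw iioe;
    }
  }
}
{code}
Because {{InterruptedIOException}} extends {{IOException}}, existing callers that catch {{IOException}} keep working, while callers that care about interruption can catch the more specific type.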



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-06 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833834#comment-16833834
 ] 

Andrew Olson commented on HADOOP-16294:
---

Opened a pull request for this,
https://github.com/apache/hadoop/pull/796

> Enable access to input options by DistCp subclasses
> ---
>
> Key: HADOOP-16294
> URL: https://issues.apache.org/jira/browse/HADOOP-16294
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Trivial
>
> In the DistCp class, the DistCpOptions are private with no getter method 
> allowing retrieval by subclasses. So a subclass would need to save its own 
> copy of the inputOptions supplied to its constructor, if it wishes to 
> override the createInputFileListing method with logic similar to the original 
> implementation, i.e. calling CopyListing#buildListing with a path and input 
> options.
> I propose adding to DistCp this method,
> {noformat}
>   protected DistCpOptions getInputOptions() {
>     return inputOptions;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] noslowerdna opened a new pull request #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-06 Thread GitBox
noslowerdna opened a new pull request #796: HADOOP-16294: Enable access to 
input options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796
 
 
   Adding a protected-scope getter for the DistCpOptions, so that a subclass 
does not need to save its own copy of the inputOptions supplied to its 
constructor, if it wishes to override the createInputFileListing method with 
logic similar to the original implementation, i.e. calling 
CopyListing#buildListing with a path and input options.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16294) Enable access to input options by DistCp subclasses

2019-05-06 Thread Andrew Olson (JIRA)
Andrew Olson created HADOOP-16294:
-

 Summary: Enable access to input options by DistCp subclasses
 Key: HADOOP-16294
 URL: https://issues.apache.org/jira/browse/HADOOP-16294
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Reporter: Andrew Olson
Assignee: Andrew Olson


In the DistCp class, the DistCpOptions are private with no getter method 
allowing retrieval by subclasses. So a subclass would need to save its own copy 
of the inputOptions supplied to its constructor, if it wishes to override the 
createInputFileListing method with logic similar to the original 
implementation, i.e. calling CopyListing#buildListing with a path and input 
options.

I propose adding to DistCp this method,

{noformat}
  protected DistCpOptions getInputOptions() {
    return inputOptions;
  }
{noformat}
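
For context, a hypothetical subclass using the proposed getter might look like the sketch below. The override mirrors the description above (calling CopyListing#buildListing with a path and input options); exact signatures vary between Hadoop versions, so treat this as an illustration rather than the committed API:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.CopyListing;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class CustomDistCp extends DistCp {

  public CustomDistCp(Configuration conf, DistCpOptions options) throws Exception {
    super(conf, options);
  }

  @Override
  protected Path createInputFileListing(Job job) throws IOException {
    // No private copy of the options is needed once the getter exists.
    Path listingPath = getFileListingPath();
    CopyListing listing = CopyListing.getCopyListing(
        job.getConfiguration(), job.getCredentials(), getInputOptions());
    listing.buildListing(listingPath, getInputOptions());
    return listingPath;
  }
}
{code}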



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-05-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833810#comment-16833810
 ] 

Steve Loughran commented on HADOOP-13656:
-

h2. Production code

LGTM

h2. docs

bq. If the `-fs` option is passed, all files in the given filepath will be 
expunged and checkpoint is created.

I think I'd prefer just to say that this changes the filesystem:

{code}

If the `-fs` option is passed, the supplied filesystem will be expunged, rather 
than the default filesystem.

For example

```
hadoop fs -expunge --immediate -fs s3a://landsat-pds/
```

{code}

h2. TestTrash

> If the `-fs` option is passed, all files in the given filepath will be 
> expunged
>   and checkpoint is created.



h3. trashShell

Have this throw `Exception` and eliminate the logic related to swallowing 
exceptions.

If you do want to log things, use an SLF4J logger. This is probably a good time 
to move the existing logging in the test to SLF4J.

Wherever you assert on a value being equal to a constant, use assertEquals and 
add a message (L176).

And put the expected constant first (L193).
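
A hypothetical snippet along these lines (test and constant names are made up):
{code:java}
import static org.junit.Assert.assertEquals;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TrashShellStyleExample {

  private static final Logger LOG =
      LoggerFactory.getLogger(TrashShellStyleExample.class);

  private static final int EXPECTED_EXIT_CODE = 0;

  void verifyExitCode(int actual) {
    // SLF4J logging rather than System.out/println.
    LOG.info("expunge returned {}", actual);
    // Message first, then the expected constant, then the actual value.
    assertEquals("expunge should exit cleanly", EXPECTED_EXIT_CODE, actual);
  }
}
{code}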

Take a look at S3GuardToolTestHelper and its exec() method, which is designed 
to execute the command, show meaningful messages on a failure/invalid result, 
and save the output to a byte array for later assertions.

I know you've just extended the existing code in there, but looking at that 
code: it's not great, and even the most recent patch HADOOP-16410 could have 
been more rigorous. Even if you leave that existing code alone, let's do better 
now.

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, 
> HADOOP-13656.003.patch, HADOOP-13656.004.patch, HADOOP-13656.005.patch
>
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have to 
> change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16221) S3Guard: fail write that doesn't update metadata store

2019-05-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833804#comment-16833804
 ] 

Steve Loughran commented on HADOOP-16221:
-

bq. The retries on a MetadataStore are pretty robust where failures should be 
pretty rare.

They are, now that we're handling throttling. I'd find it unlikely that you'd be 
in a situation where you have write access to S3 and yet DDB writes fail, unless 
permissions or client config are broken.

> S3Guard: fail write that doesn't update metadata store
> --
>
> Key: HADOOP-16221
> URL: https://issues.apache.org/jira/browse/HADOOP-16221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
>
> Right now, a failure to write to the S3Guard metadata store (e.g. DynamoDB) 
> is [merely 
> logged|https://github.com/apache/hadoop/blob/rel/release-3.1.2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2708-L2712].
>  It does not fail the S3AFileSystem write operation itself. As such, the 
> writer has no idea that anything went wrong. The implication of this is that 
> S3Guard doesn't always provide the consistency it advertises.
> For example [this 
> article|https://blog.cloudera.com/blog/2017/08/introducing-s3guard-s3-consistency-for-apache-hadoop/]
>  states:
> {quote}If a Hadoop S3A client creates or moves a file, and then a client 
> lists its directory, that file is now guaranteed to be included in the 
> listing.
> {quote}
> Unfortunately, this is sort of untrue and could result in exactly the sort of 
> problem S3Guard is supposed to avoid:
> {quote}Missing data that is silently dropped. Multi-step Hadoop jobs that 
> depend on output of previous jobs may silently omit some data. This omission 
> happens when a job chooses which files to consume based on a directory 
> listing, which may not include recently-written items.
> {quote}
> Imagine the typical multi-job Hadoop processing pipeline. Job 1 runs and 
> succeeds, but one (or more) S3Guard metadata write failed under the covers. 
> Job 2 picks up the output directory from Job 1 and runs its processing, 
> potentially seeing an inconsistent listing, silently missing some of the Job 
> 1 output files.
> S3Guard should at least provide a configuration option to fail if the 
> metadata write fails. It seems even ideally this should be the default?
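
As an illustration of the kind of switch being asked for, a client could opt in 
with something like the snippet below; the property name is an assumption made 
for illustration, not necessarily the key this JIRA introduces:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3GuardFailFastExample {

  public static Configuration failFastConfiguration() {
    Configuration conf = new Configuration();
    // Hypothetical key: fail the S3A write when the metadata store update fails.
    conf.setBoolean("fs.s3a.metadatastore.fail.on.write.error", true);
    return conf;
  }
}
{code}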



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833781#comment-16833781
 ] 

Gabor Bota commented on HADOOP-16279:
-

I started to work on this issue with the following principles:
 * Use the same TTL for entries and for authoritative directory listings.
 * Entries which are not directories can expire; the metadata returned from the 
MetadataStore will then be null.
 * Add two new methods {{pruneExpiredTtl()}} and {{pruneExpiredTtl(String 
keyPrefix)}} to the {{MetadataStore}} interface. These methods will delete all 
expired metadata from the store (a rough sketch follows below).
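
The sketch referenced above, purely as an assumption of how the two new methods 
could be declared (the committed interface may end up different):
{code:java}
import java.io.Closeable;
import java.io.IOException;

/** Fragment showing only the proposed additions, not the full MetadataStore. */
public interface MetadataStoreSketch extends Closeable {

  /** Delete every expired entry (and tombstone) from the store. */
  void pruneExpiredTtl() throws IOException;

  /** Delete expired entries whose key starts with the given prefix. */
  void pruneExpiredTtl(String keyPrefix) throws IOException;
}
{code}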

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-06 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r281163359
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/test/ExtraAssertions.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import org.junit.Assert;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.util.DurationInfo;
+
+import static org.apache.hadoop.fs.s3a.S3AUtils.applyLocatedFiles;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Some extra assertions for tests.
+ */
+@InterfaceAudience.Private
+public class ExtraAssertions {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+  ExtraAssertions.class);
+
+  /**
+   * Assert that the number of files in a destination matches that expected.
+   * @param text text to use in the message
+   * @param fs filesystem
+   * @param path path to list (recursively)
+   * @param expected expected count
+   * @throws IOException IO problem
+   */
+  public static void assertFileCount(String text, FileSystem fs,
+  Path path, long expected)
+  throws IOException {
+List<String> files = new ArrayList<>();
+try (DurationInfo ignored = new DurationInfo(LOG, false,
+"Counting files in %s", path)) {
+  applyLocatedFiles(fs.listFiles(path, true),
+  (status) -> files.add(status.getPath().toString()));
+}
+long actual = files.size();
+if (actual != expected) {
+  String ls = files.stream().collect(Collectors.joining("\n"));
+  Assert.fail(text + ": expected " + expected + " files in " + path
+  + " but got " + actual + "\n" + ls);
+}
+  }
+
+  /**
+   * Assert that a string contains a piece of text.
+   * @param text text to scan.
+   * @param contained text to look for.
+   */
+  public static void assertTextContains(String text, String contained) {
+assertTrue("string \"" + contained + "\" not found in \"" + text + "\"",
+text != null && text.contains(contained));
+  }
+
+  /**
+   * If the condition is met, throw an AssertionError with the message
+   * and any nested exception.
+   * @param condition condition
+   * @param message text to use in the exception
+   * @param cause a (possibly null) throwable to init the cause with
+   * @throws AssertionError with the text and throwable if condition == true.
+   */
+  public static void failIf(boolean condition,
+  String message,
+  Throwable cause) {
+if (condition) {
+  ContractTestUtils.fail(message, cause);
+}
+  }
+
+  /**
+   * If the condition is met, throw an AssertionError with the message
+   * and any nested exception.
+   * @param condition condition
+   * @param message text to use in the exception
+   * @param cause a (possibly null) throwable to init the cause with
+   * @throws AssertionError with the text and throwable if condition == true.
+   */
+  public static void failUnless(boolean condition,
+  String message,
+  Throwable cause) {
+failIf(!condition, message, cause);
+  }
+
+  /**
+   * Extract the inner cause of an exception.
+   * @param expected  expected class of the cuse
 
 Review comment:
   fixed.
   Interesting that the IntelliJ spell checker didn't flag this up, even though 
I haven't accidentally added "cuse" to the dictionary


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HADOOP-16238) Add the possbility to set SO_REUSEADDR in IPC Server Listener

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833705#comment-16833705
 ] 

Hadoop QA commented on HADOOP-16238:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967911/HADOOP-16238-005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 689f9f1add3c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d70c8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16225/testReport/ |
| Max. process+thread count | 1452 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833683#comment-16833683
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
19s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-14951 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967897/HADOOP-14951-13.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b794ebd00616 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 

[GitHub] [hadoop] hadoop-yetus commented on issue #775: HADOOP-16184. S3Guard: Handle OOB deletions and creation of a file wh…

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #775: HADOOP-16184. S3Guard: Handle OOB 
deletions and creation of a file wh…
URL: https://github.com/apache/hadoop/pull/775#issuecomment-489574522
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1044 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 721 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | the patch passed |
   | +1 | findbugs | 61 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 278 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3548 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-775/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/775 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e056a11c7442 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d70c8c |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-775/3/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-775/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833668#comment-16833668
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
52s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} root: The patch generated 0 new + 111 unchanged - 1 
fixed = 111 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/6/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/664 |
| JIRA Issue | HADOOP-14951 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4c1c9ad59391 

[GitHub] [hadoop] hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs implementation customizable, with an …

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #664: [HADOOP-14951] Make the KMSACLs 
implementation customizable, with an …
URL: https://github.com/apache/hadoop/pull/664#issuecomment-489569589
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1026 | trunk passed |
   | +1 | compile | 1066 | trunk passed |
   | +1 | checkstyle | 142 | trunk passed |
   | +1 | mvnsite | 116 | trunk passed |
   | +1 | shadedclient | 989 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 100 | trunk passed |
   | 0 | spotbugs | 172 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 218 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 1009 | the patch passed |
   | +1 | javac | 1009 | the patch passed |
   | +1 | checkstyle | 134 | root: The patch generated 0 new + 111 unchanged - 
1 fixed = 111 total (was 112) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 94 | the patch passed |
   | +1 | findbugs | 234 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 206 | hadoop-kms in the patch passed. |
   | -1 | unit | 4685 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 11134 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/664 |
   | JIRA Issue | HADOOP-14951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4c1c9ad59391 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d70c8c |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/6/testReport/ |
   | Max. process+thread count | 5007 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-664/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-06 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833654#comment-16833654
 ] 

Elek, Marton edited comment on HADOOP-16091 at 5/6/19 9:41 AM:
---

bq. This is how maven is designed to allow each sub-module to build 
independently. This allows reducing iteration time on each component instead of 
doing the full build each time. The k8s-dev solution has a conflict of 
interests in maven design. Part of Maven design is to release one binary per 
project using maven release:release plugin.

I wouldn't like to create an additional thread here, but I think the 
release:release goal is not part of the fundamental design of maven. This is 
just a maven plugin which can be replaced with a better release plugin or other 
processes. (I would say that life-cycle/goal bindings or profiles are part of 
the design.)

BTW I think the "release:release" plugin has some design problems, but that's 
another story (I prefer not to use it, for example, because the created tags 
are pushed too early).

bq. By using the tar layout stitching temp space, it saves space during the 
build. However, it creates a inseparable process for building tarball and 
docker in maven because the temp directory is not in maven cache. This means 
the tarball and docker image must be built together, and only one of them can 
be deposited to maven repository. Hence, it takes more time to reiterate just 
the docker part. It is not good for developer that only work on docker and not 
the tarball. 

I am not sure if I understood your concern, but I think tar file creation and 
docker file creation can be easily separated by moving the tar file creation to 
the dist profile and keeping the k8s-dev profile as is. I am +1 for this suggestion.

Do you have any other problem with the k8s-dev approach?

bq. Symlink can be used to make /opt/ozone > /opt/ozone-${project.version}. 
This is the practice that Hadoop use to avoid versioned directory while 
maintain ability to swap binaries. We should keep symlink practice for config 
files to reference version neutral location. I think we have agreement on the 
base image. This also allow us to use RUN directive to make any post tarball 
process required in docker build.

Let me be more precise: Hadoop doesn't use symlinks AFAIK. Ambari, bigtop and 
Hortonworks/Cloudera distributions use symlinks to manage multiple versions of 
hadoop.

Sorry if this seems pedantic. I learned from [Eugenia 
Cheng|http://eugeniacheng.com/math/books/] that the difference between pedantry 
and precision is illumination. I wrote it just because I think it's very 
important that symlinks were introduced to manage versions in *on-prem* 
clusters.

I think the containerized world is different. For multiple versions we need to 
use different containers, therefore we don't need to add the version *inside* 
the containers any more.

Usually I don't think examples are good arguments (I think it's more 
important to find the right solution instead of following existing practices), 
but I checked the spark images (which can be created by bin/docker-image-tool.sh 
from the spark distribution) and they also use /opt/spark. (But I am fine with 
using /opt/apache/ozone if you prefer it. I like the apache subdir.)


was (Author: elek):

bq. This is how maven is designed to allow each sub-module to build 
independently. This allows reducing iteration time on each component instead of 
doing the full build each time. The k8s-dev solution has a conflict of 
interests in maven design. Part of Maven design is to release one binary per 
project using maven release:release plugin.

I wouldn't like to create an additional thread here, but I think 
release:release goal is not part of the fundamental design of maven. This is 
just a maven plugin which can be replaced better release plugin or other 
processes. (I would say that life-cycle/goal bindings or profiles are part of 
the design)

BTW I think the "release:release" plugin has some design problems, but it's an 
other story (But I prefer to not use it, for example because the created tags 
are pushed too early).

bq. By using the tar layout stitching temp space, it saves space during the 
build. However, it creates a inseparable process for building tarball and 
docker in maven because the temp directory is not in maven cache. This means 
the tarball and docker image must be built together, and only one of them can 
be deposited to maven repository. Hence, it takes more time to reiterate just 
the docker part. It is not good for developer that only work on docker and not 
the tarball. 

I am not sure if I understood your concern but I think tar file creation and 
docker file creation can be easily separated by moving the tar file creation to 
the dist profile and keep k8s-dev profile as is. I am +1 for this suggestion.

Do you have any other problem with the k8s-dev approach?

bq. Symlink 

[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-05-06 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833654#comment-16833654
 ] 

Elek, Marton commented on HADOOP-16091:
---


bq. This is how maven is designed to allow each sub-module to build 
independently. This allows reducing iteration time on each component instead of 
doing the full build each time. The k8s-dev solution has a conflict of 
interests in maven design. Part of Maven design is to release one binary per 
project using maven release:release plugin.

I wouldn't like to create an additional thread here, but I think the 
release:release goal is not part of the fundamental design of maven. This is 
just a maven plugin which can be replaced with a better release plugin or other 
processes. (I would say that life-cycle/goal bindings or profiles are part of 
the design.)

BTW I think the "release:release" plugin has some design problems, but that's 
another story (I prefer not to use it, for example, because the created tags 
are pushed too early).

bq. By using the tar layout stitching temp space, it saves space during the 
build. However, it creates an inseparable process for building the tarball and 
docker image in Maven because the temp directory is not in the Maven cache. This 
means the tarball and docker image must be built together, and only one of them 
can be deposited to the Maven repository. Hence, it takes more time to reiterate 
just the docker part. It is not good for developers who only work on Docker and 
not the tarball. 

I am not sure if I understood your concern, but I think tar file creation and 
docker file creation can be easily separated by moving the tar file creation to 
the dist profile and keeping the k8s-dev profile as is. I am +1 for this suggestion.

Do you have any other problem with the k8s-dev approach?

bq. A symlink can be used to make /opt/ozone -> /opt/ozone-${project.version}. 
This is the practice that Hadoop uses to avoid versioned directories while 
maintaining the ability to swap binaries. We should keep the symlink practice 
so that config files can reference a version-neutral location. I think we have 
agreement on the base image. This also allows us to use the RUN directive for 
any post-tarball processing required in the docker build.

Let me be more precise: Hadoop doesn't use symlinks AFAIK. Ambari, Bigtop and 
the Hortonworks/Cloudera distributions use symlinks to manage multiple versions 
of Hadoop.

Sorry if this seems pedantic. I learned from Eugenia Cheng that the 
difference between pedantry and precision is illumination. I wrote it only 
because I think it's very important that symlinks were introduced to manage 
versions in *on-prem* clusters.

I think the containerized world is different. For multiple versions we use 
different containers, therefore we don't need to add the version *inside* the 
containers any more.

Usually I don't think examples are good arguments (I think it's more 
important to find the right solution than to follow existing practices), 
but I checked the Spark images (which can be created by bin/docker-image-tool.sh 
from the Spark distribution) and they also use /opt/spark. (But I am fine to use 
/opt/apache/ozone if you prefer it. I like the apache subdir.)

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker Hub 
> under the Apache organization. Browsing the Apache GitHub mirror, there are only 7 
> projects using a separate repository for the docker image build. Popular projects' 
> official images are not from the Apache organization, such as zookeeper, tomcat, 
> and httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like the inline build process is widely employed by the majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, the Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from the source tree naming, if 
> Ozone is intended to be a subproject of Hadoop for a long period of time. This 
> enables the Hadoop community to host docker images for various subprojects without 
> having to check out several source trees to trigger a grand build. However, the 
> inline build process seems more popular than a separated process. Hence, I 
> highly recommend making 

[jira] [Updated] (HADOOP-16238) Add the possibility to set SO_REUSEADDR in IPC Server Listener

2019-05-06 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated HADOOP-16238:
--
Attachment: HADOOP-16238-005.patch

> Add the possibility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch, HADOOP-16238-005.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances this would be desirable; see the explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> Rarely, it also causes problems in the test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16238) Add the possibility to set SO_REUSEADDR in IPC Server Listener

2019-05-06 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833641#comment-16833641
 ] 

Peter Bacsko commented on HADOOP-16238:
---

I uploaded patch v5 where the default is "true".
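
A minimal sketch of how such a property could be wired into the listener
socket is below; the {{ipc.server.reuseaddr}} key comes from the description,
while the class, method and default value are illustrative assumptions, not
the contents of the actual patch:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only; names and the default value are assumptions,
// not taken from HADOOP-16238-005.patch.
public class ReuseAddrListenerSketch {
  static final String IPC_SERVER_REUSEADDR_KEY = "ipc.server.reuseaddr";
  static final boolean IPC_SERVER_REUSEADDR_DEFAULT = true;

  static ServerSocketChannel bind(Configuration conf, InetSocketAddress addr)
      throws IOException {
    ServerSocketChannel channel = ServerSocketChannel.open();
    // SO_REUSEADDR has to be set before bind(), otherwise it has no effect.
    channel.socket().setReuseAddress(
        conf.getBoolean(IPC_SERVER_REUSEADDR_KEY, IPC_SERVER_REUSEADDR_DEFAULT));
    channel.socket().bind(addr, 128);
    return channel;
  }
}
{code}

With a default of "true", a restarted server (as in the MiniMRClientCluster
test) can rebind to an address whose previous sockets are still in TIME_WAIT.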

> Add the possibility to set SO_REUSEADDR in IPC Server Listener
> -
>
> Key: HADOOP-16238
> URL: https://issues.apache.org/jira/browse/HADOOP-16238
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Minor
> Attachments: HADOOP-16238-001.patch, HADOOP-16238-002.patch, 
> HADOOP-16238-003.patch, HADOOP-16238-004.patch, HADOOP-16238-005.patch
>
>
> Currently we can't enable SO_REUSEADDR in the IPC Server. In some 
> circumstances this would be desirable; see the explanation here:
> [https://developer.ibm.com/tutorials/l-sockpit/#pitfall-3-address-in-use-error-eaddrinuse-]
> Rarely, it also causes problems in the test case 
> {{TestMiniMRClientCluster.testRestart}}:
> {noformat}
> 2019-04-04 11:21:31,896 INFO [main] service.AbstractService 
> (AbstractService.java:noteFailure(273)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state 
> STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [test-host:35491] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException
>  at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:138)
>  at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
>  at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:178)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1244)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:355)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:127)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:493)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>  at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:312)
>  at 
> org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster.serviceStart(MiniMRYarnCluster.java:210)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>  at 
> org.apache.hadoop.mapred.MiniMRYarnClusterAdapter.restart(MiniMRYarnClusterAdapter.java:73)
>  at 
> org.apache.hadoop.mapred.TestMiniMRClientCluster.testRestart(TestMiniMRClientCluster.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat}
>  
> At least for testing, having this socket option enabled is beneficial. We 
> could enable this with a new property like {{ipc.server.reuseaddr}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16279:
---

Assignee: Gabor Bota

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * This is not the same as, and does not use, the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behaviour than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.
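
A minimal sketch of the expiry check this could boil down to, assuming a
single TTL is shared by entries and tombstones (the config key and method
below are illustrative assumptions, not the actual implementation):

{code:java}
// Illustrative sketch only; the key name and the expiry policy are assumptions.
public final class S3GuardTtlSketch {
  /** Hypothetical single TTL key covering both metadata entries and tombstones. */
  static final String METADATA_TTL_KEY = "fs.s3a.metadatastore.metadata.ttl";
  static final long METADATA_TTL_DEFAULT_MS = 15 * 60 * 1000L;

  /**
   * True if an entry (or tombstone) last written at lastUpdatedMs is past its TTL.
   * Entries with no recorded update time are treated as expired so they are
   * refreshed from S3 on the next lookup (one possible policy, not the only one).
   */
  static boolean isExpired(long lastUpdatedMs, long ttlMs, long nowMs) {
    return lastUpdatedMs <= 0 || nowMs - lastUpdatedMs > ttlMs;
  }

  private S3GuardTtlSketch() {
  }
}
{code}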



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #766: YARN-9509: Added a configuration for admins to be able to cap per-container cpu usage based on a multiplier

2019-05-06 Thread GitBox
hadoop-yetus commented on issue #766: YARN-9509: Added a configuration for 
admins to be able to cap per-container cpu usage based on a multiplier
URL: https://github.com/apache/hadoop/pull/766#issuecomment-489555655
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 461 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1150 | trunk passed |
   | +1 | compile | 544 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 141 | trunk passed |
   | +1 | shadedclient | 962 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 116 | trunk passed |
   | 0 | spotbugs | 93 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 309 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 127 | the patch passed |
   | +1 | compile | 581 | the patch passed |
   | +1 | javac | 581 | the patch passed |
   | -0 | checkstyle | 77 | hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 219 unchanged - 0 fixed = 224 total (was 219) |
   | +1 | mvnsite | 135 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 792 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 114 | the patch passed |
   | +1 | findbugs | 320 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 51 | hadoop-yarn-api in the patch passed. |
   | +1 | unit | 222 | hadoop-yarn-common in the patch passed. |
   | +1 | unit | 1259 | hadoop-yarn-server-nodemanager in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7528 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/766 |
   | JIRA Issue | YARN-9509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux d780d5be3ec7 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1d70c8c |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/3/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/3/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14951) KMSACL implementation is not configurable

2019-05-06 Thread Zsombor Gegesy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsombor Gegesy updated HADOOP-14951:

Attachment: HADOOP-14951-13.patch

> KMSACL implementation is not configurable
> -
>
> Key: HADOOP-14951
> URL: https://issues.apache.org/jira/browse/HADOOP-14951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Zsombor Gegesy
>Assignee: Zsombor Gegesy
>Priority: Major
>  Labels: key-management, kms
> Attachments: HADOOP-14951-10.patch, HADOOP-14951-11.patch, 
> HADOOP-14951-12.patch, HADOOP-14951-13.patch, HADOOP-14951-9.patch
>
>
> Currently, it is not possible to customize KMS's key management if the KMSACLs 
> behaviour is not enough. An external key management solution would need a 
> higher-level API where it can decide whether a given operation is allowed 
> or not.
>  To achieve this, one solution would be to introduce a new interface which 
> could be implemented by KMSACLs - and also by other key managers - and to add 
> a new configuration point where the actual interface implementation could be 
> specified.
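
A minimal sketch of what such a pluggable authorization point could look like
(the interface, factory and config key names are illustrative assumptions, not
the actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative sketch only; names are hypothetical, not the HADOOP-14951 API.
interface KeyOperationAuthorizer {
  /** Decide whether the caller may perform the named operation on the given key. */
  boolean isAllowed(UserGroupInformation caller, String operation, String keyName);
}

final class KeyOperationAuthorizerFactory {
  /** Hypothetical configuration point selecting the implementation class. */
  static final String AUTHORIZER_IMPL_KEY = "hadoop.kms.acl.authorizer.impl";

  static KeyOperationAuthorizer create(Configuration conf,
      Class<? extends KeyOperationAuthorizer> defaultImpl) {
    Class<? extends KeyOperationAuthorizer> clazz =
        conf.getClass(AUTHORIZER_IMPL_KEY, defaultImpl, KeyOperationAuthorizer.class);
    // ReflectionUtils hands the Configuration to Configurable implementations.
    return ReflectionUtils.newInstance(clazz, conf);
  }

  private KeyOperationAuthorizerFactory() {
  }
}
{code}

The default ACL behaviour would then remain one implementation of the
interface, while an external key management system could plug in its own.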



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org