[GitHub] [hadoop] umamaheswararao commented on pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.

2020-06-25 Thread GitBox


umamaheswararao commented on pull request #2092:
URL: https://github.com/apache/hadoop/pull/2092#issuecomment-650009704


   For some reason it's unable to post test results on GitHub.
   No related test failures: https://builds.apache.org/job/hadoop-multibranch/job/PR-2092/3/testReport/

   No checkstyle errors:

   20:51:12   checkstyle: patch
   20:51:28  cd /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2092/src
   20:51:28  /usr/bin/mvn --batch-mode checkstyle:checkstyle -Dcheckstyle.consoleOutput=true -Ptest-patch -DskipTests -Ptest-patch > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2092/out/buildtool-patch-checkstyle-root.txt 2>&1
   20:54:04  root: The patch generated 2 new + 90 unchanged - 1 fixed = 92 total (was 91)
   
   Hey @ayushtkn, do you have some idea what's going on here?
   It's actually running all tests and checks, but somehow it's failing to post
   the results.
   
   Could not update commit status, please check if your scan credentials belong 
to a member of the organization or a collaborator of the repository and 
repo:status scope is selected
   
   ```
   GitHub has been notified of this commit’s build result
   
   Timeout has been exceeded
   java.lang.InterruptedException
       at java.lang.Object.wait(Native Method)
       at hudson.remoting.Request.call(Request.java:177)
       at hudson.remoting.Channel.call(Channel.java:956)
   ```






[GitHub] [hadoop] hadoop-yetus commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-649994808


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 28s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 11s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate ASF License warnings.  |
   |  |   |  65m  8s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2076 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 68b7bc6f2e06 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a8fd73b273 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/3/testReport/ |
   | Max. process+thread count | 344 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] ishaniahuja commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


ishaniahuja commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-649990701


   The javadoc failure is happening in trunk (causing the Yetus -1). JIRA:
   https://issues.apache.org/jira/browse/HADOOP-16862






[GitHub] [hadoop] mehakmeet commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-06-25 Thread GitBox


mehakmeet commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-649965672


   I have changed the positioning of bytesReadFromBuffer due to a bug we found:
   this counter was actually incremented for the ReadAhead buffer rather than
   the local buffer (ReadAhead counters are coming in a separate patch). The
   null-statistics test works thanks to @mukund-thakur's help. A sketch of the
   idea follows.






[GitHub] [hadoop] ayushtkn commented on a change in pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.

2020-06-25 Thread GitBox


ayushtkn commented on a change in pull request #2092:
URL: https://github.com/apache/hadoop/pull/2092#discussion_r445953460



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##########
@@ -606,4 +602,166 @@ public void testLSOnLinkParentWhereMountLinkMatchesWithAFileUnderFallback()
   }
 }
   }
+
+  /**
+   * Tests that directory making should be successful when the parent directory
+   * is same as the existent fallback directory. The new dir should be created
+   * in fallback instead failing.
+   */
+  @Test
+  public void testMkdirsOfLinkParentWithFallbackLinkWithSameMountDirectoryTree()
+      throws Exception {
+    Configuration conf = new Configuration();
+    conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
+    ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+        new Path(targetTestRoot.toString()).toUri());
+    Path dir1 = new Path(targetTestRoot,
+        "fallbackDir/user1/hive/warehouse/partition-0");
+    fsTarget.mkdirs(dir1);
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path p = new Path("/user1/hive/warehouse/test");
+      Path test = Path.mergePaths(fallbackTarget, p);
+      assertFalse(fsTarget.exists(test));
+      assertTrue(vfs.mkdirs(p));
+      assertTrue(fsTarget.exists(test));
+    }
+  }
+
+  /**
+   * Tests that directory making should be successful when attempting to create
+   * the root directory as it's already exist.
+   */
+  @Test
+  public void testMkdirsOfRootWithFallbackLinkAndMountWithSameDirTree()
+      throws Exception {
+    Configuration conf = new Configuration();
+    conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
+    ConfigUtil
+        .addLink(conf, "/user1", new Path(targetTestRoot.toString()).toUri());
+    Path dir1 = new Path(targetTestRoot, "fallbackDir/user1");
+    fsTarget.mkdirs(dir1);
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path p = new Path("/");
+      Path test = Path.mergePaths(fallbackTarget, p);
+      assertTrue(fsTarget.exists(test));
+      assertTrue(vfs.mkdirs(p));
+      assertTrue(fsTarget.exists(test));
+    }
+  }
+
+  /**
+   * Tests the making of a new directory which is not matching to any of
+   * internal directory under the root.
+   */
+  @Test
+  public void testMkdirsOfNewDirWithOutMatchingToMountOrFallbackDirTree()
+      throws Exception {
+    Configuration conf = new Configuration();
+    conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
+    ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+        new Path(targetTestRoot.toString()).toUri());
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    fsTarget.mkdirs(fallbackTarget);
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      // user2 does not exist in fallback
+      Path p = new Path("/user2");
+      Path test = Path.mergePaths(fallbackTarget, p);
+      assertFalse(fsTarget.exists(test));
+      assertTrue(vfs.mkdirs(p));
+      assertTrue(fsTarget.exists(test));
+    }
+  }
+
+  /**
+   * Tests that when the parent dirs does not exist in fallback but the parent
+   * dir is same as mount internal directory, then we create parent structure
+   * (mount internal directory tree structure) in fallback.
+   */
+  @Test
+  public void testMkdirsWithFallbackLinkWithMountPathMatchingDirExist()
+      throws Exception {
+    Configuration conf = new Configuration();
+    conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
+    ConfigUtil.addLink(conf, "/user1/hive",
+        new Path(targetTestRoot.toString()).toUri());
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    fsTarget.mkdirs(fallbackTarget);
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      // user1 does not exist in fallback
+      Path immediateLevelToInternalDir = new Path("/user1/test");
+      Path test = Path.mergePaths(fallbackTarget, immediateLevelToInternalDir);
+      assertFalse(fsTarget.exists(test));
+      assertTrue(vfs.mkdirs(immediateLevelToInternalDir));
+      assertTrue(fsTarget.exists(test));
+    }
+  }
+
+  /**
+   * Tests that when the parent dirs does not exist in fallback but the
+   * immediate parent dir is not same as mount internal directory, then we
+   * create parent structure (mount internal directory tree structure) in
+   * fallback.
+   */
+  @Test
+  public void testMkdirsOfDeepTreeWithFallbackLinkAndMountPathMatchingDirExist()

[GitHub] [hadoop] virajith commented on pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


virajith commented on pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#issuecomment-649934530


   Thanks for the review, @umamaheswararao. I fixed the checkstyle and test
   failures in the last commit. Will wait for Yetus to come back.






[GitHub] [hadoop] virajith commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


virajith commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445946962



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##########
@@ -236,6 +238,61 @@ public void testListStatusOnNonMountedPath() throws Exception {
     }
   }
 
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * Check that "viewfs:/" paths without authority can work when the
+   * default mount table name is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    addMountLinks(defaultFSURI.getAuthority(),
+        new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+        new String[] {hdfsTargetPath.toUri().toString(),
+            localTargetDir.toURI().toString() },
+        conf);
+
+    // /HDFSUser/test
+    Path hdfsDir = new Path(HDFS_USER_FOLDER, "test");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER, "test");
+    FileStatus[] expectedStatus;
+
+    try (FileSystem fs = FileSystem.get(conf)) {
+      fs.mkdirs(hdfsDir); // /HDFSUser/testfile

Review comment:
   Fixed this

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
##########
@@ -41,12 +41,17 @@
    * then the hadoop default value (/user) is used.
    */
   public static final String CONFIG_VIEWFS_HOMEDIR = "homedir";
-  
+
+  /**
+   * Config key to specify the name of the default mount table.
+   */
+  public static final String CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY =
+      "fs.viewfs.mounttable.default.name.key";

Review comment:
   Thanks for checking - removed this.
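
   For context, a minimal usage sketch of the key under discussion (the key
   string comes from the diff above; the table name "cluster1" and the mount
   target are illustrative placeholders, not part of this PR):

   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;

   public class ViewFsDefaultMountTableSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Key string taken from the diff above; "cluster1" is an illustrative name.
       conf.set("fs.viewfs.mounttable.default.name.key", "cluster1");
       // Links for the named table are then read from fs.viewfs.mounttable.cluster1.*
       conf.set("fs.viewfs.mounttable.cluster1.link./data",
           "hdfs://localhost:8020/data"); // hypothetical mount target
       // A "viewfs:/" URI with no authority should now resolve via "cluster1".
       FileSystem viewFs = FileSystem.get(URI.create("viewfs:/"), conf);
       System.out.println(viewFs.getUri());
     }
   }
   ```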

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
##########
@@ -150,6 +150,39 @@ DFSAdmin commands with View File System Overload Scheme
 
 Please refer to the [HDFSCommands Guide](./HDFSCommands.html#dfsadmin_with_ViewFsOverloadScheme)
 
+Accessing paths without authority
+---

Review comment:
   removed this.








[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


umamaheswararao commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445927183



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
##########
@@ -150,6 +150,39 @@ DFSAdmin commands with View File System Overload Scheme
 
 Please refer to the [HDFSCommands Guide](./HDFSCommands.html#dfsadmin_with_ViewFsOverloadScheme)
 
+Accessing paths without authority
+---

Review comment:
   Probably you can remove the additional '_' chars after the heading above.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##########
@@ -236,6 +238,61 @@ public void testListStatusOnNonMountedPath() throws Exception {
     }
   }
 
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * Check that "viewfs:/" paths without authority can work when the
+   * default mount table name is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    addMountLinks(defaultFSURI.getAuthority(),
+        new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+        new String[] {hdfsTargetPath.toUri().toString(),
+            localTargetDir.toURI().toString() },
+        conf);
+
+    // /HDFSUser/test
+    Path hdfsDir = new Path(HDFS_USER_FOLDER, "test");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER, "test");
+    FileStatus[] expectedStatus;
+
+    try (FileSystem fs = FileSystem.get(conf)) {
+      fs.mkdirs(hdfsDir); // /HDFSUser/testfile

Review comment:
   Tiny nit: in the comment: // /HDFSUser/testfile --> // /HDFSUser/test ?

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
##########
@@ -41,12 +41,17 @@
    * then the hadoop default value (/user) is used.
    */
   public static final String CONFIG_VIEWFS_HOMEDIR = "homedir";
-  
+
+  /**
+   * Config key to specify the name of the default mount table.
+   */
+  public static final String CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY =
+      "fs.viewfs.mounttable.default.name.key";

Review comment:
   Thank you for adding the doc. It looks good to me.
   You may want to remove "public static final" from the constant above, as
   those modifiers are implicitly present in an interface; checkstyle would be
   happy with that. :-) See the small illustration below.
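
   A generic illustration of that nit (plain Java, not the patch itself):

   ```java
   // Fields declared in a Java interface are implicitly public static final,
   // so the modifiers are redundant there and checkstyle flags them.
   public interface ConstantsExample {
     // Redundant modifiers:
     public static final String KEY_VERBOSE = "some.config.key";
     // Equivalent and preferred:
     String KEY_CONCISE = "another.config.key";
   }
   ```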








[jira] [Commented] (HADOOP-17089) WASB: Update azure-storage-java SDK

2020-06-25 Thread Thomas Marqardt (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145971#comment-17145971 ]

Thomas Marqardt commented on HADOOP-17089:
--

branch-2.10:

commit 0d4f9c778967ce0f83663c63389987335d47c3ea
Author: Thomas Marquardt 
Date: Wed Jun 24 18:37:25 2020 +

> WASB: Update azure-storage-java SDK
> ---
>
> Key: HADOOP-17089
> URL: https://issues.apache.org/jira/browse/HADOOP-17089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Critical
> Fix For: 2.10.1, 3.3.1
>
>
> WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in 
> the Azure Storage Java SDK that can cause the results of a list blobs 
> operation to appear empty.  This causes the Filesystem listStatus and similar 
> APIs to return empty results.  This has been seen in Spark work loads when 
> jobs use more than one executor core. 
> See [https://github.com/Azure/azure-storage-java/pull/546] for details on the 
> bug in the Azure Storage SDK.
> This issue can cause data loss.
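
To illustrate the data-loss claim, a generic sketch (not WASB code; the caller pattern here is hypothetical): any job that treats "no children" as "safe to act" can destroy data when a listing is spuriously empty.

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyListingHazard {
  // If listStatus spuriously returns an empty array (the SDK bug above),
  // this "cleanup" deletes a directory that still holds live data.
  static void removeIfEmpty(FileSystem fs, Path dir) throws IOException {
    FileStatus[] children = fs.listStatus(dir);
    if (children.length == 0) {
      fs.delete(dir, true);
    }
  }
}
```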






[jira] [Updated] (HADOOP-17089) WASB: Update azure-storage-java SDK

2020-06-25 Thread Thomas Marqardt (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt updated HADOOP-17089:
-
Fix Version/s: 2.10.1

> WASB: Update azure-storage-java SDK
> ---
>
> Key: HADOOP-17089
> URL: https://issues.apache.org/jira/browse/HADOOP-17089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Critical
> Fix For: 2.10.1, 3.3.1
>
>
> WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in 
> the Azure Storage Java SDK that can cause the results of a list blobs 
> operation to appear empty.  This causes the Filesystem listStatus and similar 
> APIs to return empty results.  This has been seen in Spark work loads when 
> jobs use more than one executor core. 
> See [https://github.com/Azure/azure-storage-java/pull/546] for details on the 
> bug in the Azure Storage SDK.
> This issue can cause data loss.






[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-25 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145965#comment-17145965 ]

Hadoop QA commented on HADOOP-17083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 46s{color} | {color:green} branch-2.10 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  1s{color} | {color:red} root in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 12s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m  7s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 29s{color} | {color:green} branch-2.10 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-project in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 21s{color} | {color:red} hadoop-yarn in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 21s{color} | {color:red} hadoop-common in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 21s{color} | {color:red} hadoop-hdfs in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-hdfs-client in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-hdfs-rbf in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-mapreduce-client-core in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 19s{color} | {color:red} hadoop-yarn-common in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  3s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 18s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 20s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  8m  6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn in branch-2.10 has 6 extant findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 48s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.10 has 14 extant findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 18s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 extant findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 has 1 extant findbugs warnings. {color} |

[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.

2020-06-25 Thread GitBox


umamaheswararao commented on a change in pull request #2092:
URL: https://github.com/apache/hadoop/pull/2092#discussion_r445925144



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
##########
@@ -1139,6 +1139,27 @@ public void mkdir(final Path dir, final FsPermission permission,
       if (theInternalDir.isRoot() && dir == null) {
         throw new FileAlreadyExistsException("/ already exits");
       }
+
+      if (this.fsState.getRootFallbackLink() != null) {
+        AbstractFileSystem linkedFallbackFs =
+            this.fsState.getRootFallbackLink().getTargetFileSystem();
+        Path p = Path.getPathWithoutSchemeAndAuthority(
+            new Path(theInternalDir.fullPath));
+        String child = (InodeTree.SlashPath.equals(dir)) ?
+            InodeTree.SlashPath.toString() : dir.getName();
+        Path dirToCreate = new Path(p, child);
+        try {
+          linkedFallbackFs.mkdir(dirToCreate, permission, createParent);
+        } catch (IOException e) {
+          if (LOG.isDebugEnabled()) {
+            StringBuilder msg = new StringBuilder("Failed to create {}")
+                .append(" at fallback fs : {}");
+            LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+          }
+        }
+      }
+
Review comment:
   Thanks a lot @ayushtkn for the review!
   1) Yes, I agree we have somehow kept ViewFs.java in the shadows, probably due
   to its low usage :-) I have added tests.
   2) a) This is a good point. I just checked that ViewFs#mkdir already throws
   IOE, so I think we can just throw the IOE out, so that users would know what
   happened.
      b) Another miss was that, even in the positive case, I was not returning
   before. Added a return after mkdir success.
      c) We always pass createParent as true here. We don't actually need to
   worry whether the parent exists or not, because the parent exists in the
   mount internal dir; that's why it got resolved to InternalDirViewFS#mkdir.
   So we pass createParent as true always in the fallback case.
   3) Added tests to cover the cases of ViewFs#mkdir with createParent true or
   false, with one level and multiple levels (to check recursive creation; both
   seem to be working).
   Let's discuss if this behavior makes sense to you. A rough sketch of the
   revised flow is below.
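
   (Editorial sketch of points 2a-2c, paraphrased from this thread and assuming
   the same surrounding fields as the diff above; not the final committed code.)

   ```java
   // Hedged sketch of the revised fallback branch discussed above.
   if (this.fsState.getRootFallbackLink() != null) {
     AbstractFileSystem linkedFallbackFs =
         this.fsState.getRootFallbackLink().getTargetFileSystem();
     Path p = Path.getPathWithoutSchemeAndAuthority(
         new Path(theInternalDir.fullPath));
     Path dirToCreate = new Path(p, InodeTree.SlashPath.equals(dir)
         ? InodeTree.SlashPath.toString() : dir.getName());
     // (2c) The parent already resolved to an internal dir, so it exists in
     // the mount tree; always create intermediate dirs in the fallback fs.
     // (2a) No try/catch: an IOException now propagates to the caller.
     linkedFallbackFs.mkdir(dirToCreate, permission, true);
     // (2b) Return on success instead of falling through to
     // readOnlyMountTable("mkdir", dir).
     return;
   }
   ```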






[GitHub] [hadoop] mehakmeet commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-06-25 Thread GitBox


mehakmeet commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-649901048


   Thanks, @mukund-thakur. Brilliant debugging :)






[GitHub] [hadoop] hadoop-yetus commented on pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#issuecomment-649893051


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  6s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  20m 14s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 24s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 48s |  hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   4m  4s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 33s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 36s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  25m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  1s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  22m  1s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 23s |  root: The patch generated 6 new + 142 unchanged - 2 fixed = 148 total (was 144)  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 31s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 47s |  hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 51s |  hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   6m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 39s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  94m 48s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  The patch does not generate ASF License warnings.  |
   |  |   | 292m  2s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
   |   | hadoop.fs.viewfs.TestViewFsLocalFs |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.viewfs.TestViewFsHdfs |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2100/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2100 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 518e2a54329e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revi

[GitHub] [hadoop] hadoop-yetus commented on pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#issuecomment-649891148


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 29s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  22m 17s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 34s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 57s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 31s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 45s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  25m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  1s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  23m  1s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 29s |  root: The patch generated 4 new + 142 unchanged - 2 fixed = 146 total (was 144)  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 39s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   7m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  11m 20s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  78m 16s |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   1m  7s |  The patch generated 16 ASF License warnings.  |
   |  |   | 294m 54s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsLocalFs |
   |   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
   |   | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
   |   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
   |   | hadoop.fs.viewfs.TestViewFsHdfs |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/j

[jira] [Updated] (HADOOP-17089) WASB: Update azure-storage-java SDK

2020-06-25 Thread Thomas Marqardt (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt updated HADOOP-17089:
-
    Priority: Critical  (was: Major)
 Description: 
WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty.  This causes the Filesystem listStatus and similar APIs to return empty results.  This has been seen in Spark work loads when jobs use more than one executor core. 

See [https://github.com/Azure/azure-storage-java/pull/546] for details on the bug in the Azure Storage SDK.

This issue can cause data loss.

  was:
WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty.  This causes the Filesystem listStatus and similar APIs to return empty results.  This has been seen in Spark work loads when jobs use more than one executor core. 

See [https://github.com/Azure/azure-storage-java/pull/546] for details on the bug in the Azure Storage SDK.

> WASB: Update azure-storage-java SDK
> ---
>
> Key: HADOOP-17089
> URL: https://issues.apache.org/jira/browse/HADOOP-17089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Critical
> Fix For: 3.3.1
>
>
> WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in 
> the Azure Storage Java SDK that can cause the results of a list blobs 
> operation to appear empty.  This causes the Filesystem listStatus and similar 
> APIs to return empty results.  This has been seen in Spark work loads when 
> jobs use more than one executor core. 
> See [https://github.com/Azure/azure-storage-java/pull/546] for details on the 
> bug in the Azure Storage SDK.
> This issue can cause data loss.






[GitHub] [hadoop] NickyYe commented on a change in pull request #2095: HDFS-15312. Apply umask when creating directory by WebHDFS

2020-06-25 Thread GitBox


NickyYe commented on a change in pull request #2095:
URL: https://github.com/apache/hadoop/pull/2095#discussion_r445903899



##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
##########
@@ -361,6 +378,23 @@ public RouterRpcServer(Configuration configuration, Router router,
     this.nnProto = new RouterNamenodeProtocol(this);
     this.clientProto = new RouterClientProtocol(conf, this);
     this.routerProto = new RouterUserProtocol(this);
+
+    long dnCacheExpire = conf.getTimeDuration(

Review comment:
   This PR is closed; it was opened by mistake.
   I have updated the link to https://github.com/apache/hadoop/pull/2096.
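
   For reference, a hedged sketch of the Configuration#getTimeDuration call
   shape that the hunk above truncates (the key and default here are
   placeholders, not the PR's actual values):

   ```java
   import java.util.concurrent.TimeUnit;
   import org.apache.hadoop.conf.Configuration;

   public class TimeDurationSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // getTimeDuration parses values such as "10s" or "500ms"; the default
       // and the return value are both expressed in the TimeUnit argument.
       long dnCacheExpireMs = conf.getTimeDuration(
           "dfs.federation.router.dn-report.cache-expire", // hypothetical key
           10_000L,                                         // hypothetical default
           TimeUnit.MILLISECONDS);
       System.out.println(dnCacheExpireMs);
     }
   }
   ```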








[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-25 Thread Xiaoyu Yao (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HADOOP-17079:

Attachment: HADOOP-17079.003.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch
>
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the
> List->Set->List conversion. However, the returned list is not optimized for
> contains() lookups, especially when the user's group membership list is huge
> (thousands+). This ticket is opened to add a UGI#getGroupsSet and use
> Set#contains() instead of List#contains() to speed up large group lookups
> while minimizing List->Set conversions in Groups#getGroups() calls.
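
A generic illustration of the proposed optimization (plain Java, not the patch): membership checks on a HashSet are expected O(1) versus O(n) on a List, which matters once a user's group list reaches thousands of entries.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GroupLookupSketch {
  public static void main(String[] args) {
    // Imagine thousands of entries here, as the ticket describes.
    List<String> groupList = Arrays.asList("dev", "ops", "admins");
    // One-time List->Set conversion, as the ticket proposes.
    Set<String> groupSet = new HashSet<>(groupList);

    System.out.println(groupList.contains("admins")); // O(n) scan per query
    System.out.println(groupSet.contains("admins"));  // O(1) expected per query
  }
}
```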






[GitHub] [hadoop] ayushtkn commented on a change in pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.

2020-06-25 Thread GitBox


ayushtkn commented on a change in pull request #2092:
URL: https://github.com/apache/hadoop/pull/2092#discussion_r445862114



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
##########
@@ -1139,6 +1139,27 @@ public void mkdir(final Path dir, final FsPermission permission,
       if (theInternalDir.isRoot() && dir == null) {
         throw new FileAlreadyExistsException("/ already exits");
       }
+
+      if (this.fsState.getRootFallbackLink() != null) {
+        AbstractFileSystem linkedFallbackFs =
+            this.fsState.getRootFallbackLink().getTargetFileSystem();
+        Path p = Path.getPathWithoutSchemeAndAuthority(
+            new Path(theInternalDir.fullPath));
+        String child = (InodeTree.SlashPath.equals(dir)) ?
+            InodeTree.SlashPath.toString() : dir.getName();
+        Path dirToCreate = new Path(p, child);
+        try {
+          linkedFallbackFs.mkdir(dirToCreate, permission, createParent);
+        } catch (IOException e) {
+          if (LOG.isDebugEnabled()) {
+            StringBuilder msg = new StringBuilder("Failed to create {}")
+                .append(" at fallback fs : {}");
+            LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+          }
+        }
+      }
+
+

Review comment:
   Thanx @umamaheswararao for the work here,
   * Are the changes here in ViewFs.java covered anywhere in tests? If not, it
   will be good to cover this change as well.
   * Regarding the IOE: we are logging the IOE and then throwing
   `readOnlyMountTable("mkdir", dir);` instead of the actual exception. Is that
   intended? It suppresses the actual reason. There is a difference in behavior
   between the implementation here and the one in ViewFileSystem: in case of an
   exception there, we handle it and don't throw `readOnlyMountTable("mkdir",
   dir);` but rather return a response. Though mkdirs and mkdir behave
   differently and returning false there might still work, here a person
   cannot fix the issue based on the end exception he receives.

   * I think the `createParent` needs to be tackled: when the parent is there
   in the mount table, we have to explicitly change it to true. I tried
   tweaking one of your tests, give it a check:
   ```
   @Test
   public void testMkdirsOfDeepTreeWithFallbackLinkAndMountPathMatchingDirExist()
       throws Exception {
     Configuration conf = new Configuration();
     conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
     ConfigUtil.addLink(conf, "/user1/hive",
         new Path(targetTestRoot.toString() + "/").toUri());
     Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
     fsTarget.mkdirs(fallbackTarget);
     ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
     AbstractFileSystem vfs = AbstractFileSystem.get(viewFsDefaultClusterUri,
         conf);
     // user1 does not exist in fallback
     Path multipleLevelToInternalDir = new Path("/user1/test");
     Path test = Path.mergePaths(fallbackTarget, multipleLevelToInternalDir);
     assertFalse(fsTarget.exists(test));
     // Creating /user1/test
     // Parent /user1 exists.
     assertNotNull(vfs.getFileStatus(new Path("/user1")));

     // Creating /user1/test should be a success, with createParent false, as
     // parent /user1 exists.
     vfs.mkdir(multipleLevelToInternalDir,
         FsPermission.getDirDefault(), false); // This throws an exception...
     assertTrue(fsTarget.exists(test));
   }
   ```








[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-649806589


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
11 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 56s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-azure: The 
patch generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bbdb53e74935 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a8fd73b273 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/testReport/ |
   | Max. process+thread count | 422 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/12/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apach

[jira] [Commented] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145808#comment-17145808
 ] 

Ahmed Hussein commented on HADOOP-17094:


[~weichiu] I opted to stay on 2.9.10 to avoid breaking downstream. Also, 2.10 
would require upgrading the `maven-shade-plugin`.
Can you please take a look at the change, and merge it to branch-2.10?

> vulnerabilities reported in jackson and jackson-databind in branch-2.10
> ---
>
> Key: HADOOP-17094
> URL: https://issues.apache.org/jira/browse/HADOOP-17094
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17094-branch-2.10.001.patch
>
>
> There are known vulnerabilities in the 
> com.fasterxml.jackson.core:jackson-databind package [,2.9.10.5).
> [List of 
> vulnerabilities|https://snyk.io/vuln/maven:com.fasterxml.jackson.core%3Ajackson-databind].
> Upgrading jackson and jackson-databind to 2.10 should get rid of those 
> vulnerabilities.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajith commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


virajith commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445808907



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * and check that "viewfs:/" paths work without specifying authority when 
the default mount table name

Review comment:
   fixed.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * and check that "viewfs:/" paths work without specifying authority when 
the default mount table name
+   * is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+
+// /HDFSUser/test
+Path hdfsDir = new Path(HDFS_USER_FOLDER + "/test");
+// /local/test
+Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+try (ViewFileSystemOverloadScheme fs = (ViewFileSystemOverloadScheme) 
FileSystem.get(conf)) {
+  fs.mkdirs(hdfsDir); // /HDFSUser/testfile
+  fs.mkdirs(localDir); // /local/test
+}
+
+FileStatus[] expectedStatus;
+try (FileSystem fs = FileSystem.get(conf)) {
+  expectedStatus = fs.listStatus(new Path("/"));
+}
+
+// check for viewfs path without authority
+Path viewFsRootPath = new Path("viewfs:/");
+try {
+  viewFsRootPath.getFileSystem(conf);
+  Assert.fail("Mount table with authority default should not be 
initialized");
+} catch (IOException e) {
+  assertTrue(e.getMessage().contains("Empty Mount table in config for 
viewfs://default/"));
+}
+
+// set the name of the default mount table here and subsequent calls 
should succeed.
+conf.set(Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY, 
defaultFSURI.getAuthority());
+
+try (FileSystem fs = viewFsRootPath.getFileSystem(conf)) {
+  FileStatus[] status = fs.listStatus(viewFsRootPath);
+  // compare only the final components of the paths as full paths have 
different schemes (hdfs:/ vs. viewfs:/).
+  List<String> expectedPaths = Arrays.stream(expectedStatus).map(s -> 
s.getPath().getName()).collect(Collectors.toList());

Review comment:
   fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajith commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


virajith commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445808825



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * and check that "viewfs:/" paths work without specifying authority when 
the default mount table name
+   * is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+
+// /HDFSUser/test
+Path hdfsDir = new Path(HDFS_USER_FOLDER + "/test");

Review comment:
   fixed this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajith commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


virajith commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445808525



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
##
@@ -41,12 +41,17 @@
* then the hadoop default value (/user) is used.
*/
   public static final String CONFIG_VIEWFS_HOMEDIR = "homedir";
-  
+
+  /**
+   * Config key to specify the name of the default mount table.
+   */
+  public static final String CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY = 
"fs.viewfs.mounttable.default.name.key";

Review comment:
   Added details in ViewFSOverloadScheme.md.
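   
   For readers following along, a minimal sketch of how a client would use this 
key (the mount table name `cluster1` is a made-up example; the constant is the 
one from the diff above):
   ```java
   Configuration conf = new Configuration();
   // "cluster1" is a hypothetical mount table name, for illustration only.
   conf.set(Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY, "cluster1");
   // "viewfs:/" paths now resolve against the mount table named "cluster1"
   // without an explicit authority.
   try (FileSystem fs = new Path("viewfs:/").getFileSystem(conf)) {
     FileStatus[] status = fs.listStatus(new Path("/"));
   }
   ```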





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2095: HDFS-15312. Apply umask when creating directory by WebHDFS

2020-06-25 Thread GitBox


goiri commented on a change in pull request #2095:
URL: https://github.com/apache/hadoop/pull/2095#discussion_r445804683



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
##
@@ -361,6 +378,23 @@ public RouterRpcServer(Configuration configuration, Router 
router,
 this.nnProto = new RouterNamenodeProtocol(this);
 this.clientProto = new RouterClientProtocol(conf, this);
 this.routerProto = new RouterUserProtocol(this);
+
+long dnCacheExpire = conf.getTimeDuration(

Review comment:
   This does not really match the description, right? I think you just took 
it from the other JIRA?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17087) Add EC flag to stat commands

2020-06-25 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-17087:
--
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Thanx [~wanghongbing] for the agreement. 
Resolving this.

> Add EC flag to stat commands
> 
>
> Key: HADOOP-17087
> URL: https://issues.apache.org/jira/browse/HADOOP-17087
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-17087.001.patch
>
>
> We currently do not have a concise way to identify an EC file.  {{hdfs fsck}} 
> can, but it shows too much information. Neither {{du}} nor {{ls}} can 
> accurately identify an EC file. 
> So I added an EC flag to the stat CLI.
> old result: 
> {code:java}
> $ hadoop fs -stat "%F" /user/ec/ec.txt
> regular file
> $ hadoop fs -stat "%F" /user/rep/rep.txt 
> regular file
> {code}
> new result:
> {code:java}
> $ hadoop fs -stat "%F" /user/ec/ec.txt 
> erasure coding file
> $ hadoop fs -stat "%F" /user/rep/rep.txt 
> replica file
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


ishaniahuja commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-649778447


   namespace, rest version:= 2018-11-09
   
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   --
   
   
   non namespace, old rest version:= 2018-11-09
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 245
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   
   
   ---
   
   
   namespace, rest version(2019-12-12), fs.azure.test.appendblob.enabled=true
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   
   namespace, rest version(2019-12-12),
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145787#comment-17145787
 ] 

Hadoop QA commented on HADOOP-17094:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.10 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-project in branch-2.10 failed with JDK Oracle 
Corporation-1.7.0_95-b00. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/17003/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006457/HADOOP-17094-branch-2.10.001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux 001435fc82a0 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-2.10 / e81002b |
| Default Java | Private Build-1.8.0_252-8u252-b09

[jira] [Created] (HADOOP-17095) findbugs warnings building branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17095:
--

 Summary: findbugs warnings building branch-2.10
 Key: HADOOP-17095
 URL: https://issues.apache.org/jira/browse/HADOOP-17095
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.10.0, 2.10.1
Reporter: Ahmed Hussein


The precommit build for branch-2.10 generates findbugs warnings in several 
components.
 This is an umbrella to analyze those warnings and fix or ignore them as necessary.

 
|{color:#FF}-1{color}|{color:#FF}findbugs{color}|{color:#FF}2m 
1s{color}|{color:#FF}hadoop-common-project/hadoop-common in branch-2.10 has 
14 extant findbugs warnings.{color}|
|{color:#FF}-1{color}|{color:#FF}findbugs{color}|{color:#FF}2m 
54s{color}|{color:#FF}hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 
extant findbugs warnings.{color}|
|{color:#FF}-1{color}|{color:#FF}findbugs{color}|{color:#FF}2m 
12s{color}|{color:#FF}hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 
has 1 extant findbugs warnings.{color}|
|{color:#FF}-1{color}|{color:#FF}findbugs{color}|{color:#FF}1m 
35s{color}|{color:#FF}hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
 in branch-2.10 has 3 extant findbugs warnings.{color}|
|{color:#FF}-1{color}|{color:#FF}findbugs{color}|{color:#FF}1m 
50s{color}|{color:#FF}hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
branch-2.10 has 1 extant findbugs warnings.{color}|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17083:
---
Attachment: HADOOP-17083-branch-2.10.005.patch

> Update guava to 27.0-jre in hadoop branch-2.10
> --
>
> Key: HADOOP-17083
> URL: https://issues.apache.org/jira/browse/HADOOP-17083
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.10.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17083-branch-2.10.001.patch, 
> HADOOP-17083-branch-2.10.002.patch, HADOOP-17083-branch-2.10.003.patch, 
> HADOOP-17083-branch-2.10.004.patch, HADOOP-17083-branch-2.10.005.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVEs found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
>  
> The upgrade should not affect the version of Java used; branch-2.10 still 
> sticks to JDK7.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17094:
---
Attachment: HADOOP-17094-branch-2.10.001.patch
Status: Patch Available  (was: In Progress)

> vulnerabilities reported in jackson and jackson-databind in branch-2.10
> ---
>
> Key: HADOOP-17094
> URL: https://issues.apache.org/jira/browse/HADOOP-17094
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17094-branch-2.10.001.patch
>
>
> There are known vulnerabilities in the 
> com.fasterxml.jackson.core:jackson-databind package [,2.9.10.5).
> [List of 
> vulnerabilities|https://snyk.io/vuln/maven:com.fasterxml.jackson.core%3Ajackson-databind].
> Upgrading jackson and jackson-databind to 2.10 should get rid of those 
> vulnerabilities.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r445745291



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -144,6 +145,10 @@
   private final IdentityTransformerInterface identityTransformer;
   private final AbfsPerfTracker abfsPerfTracker;
 
+  /**
+   * The set of directories where we should store files as append blobs.
+   */
+  private Set<String> appendBlobDirSet;

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17094 started by Ahmed Hussein.
--
> vulnerabilities reported in jackson and jackson-databind in branch-2.10
> ---
>
> Key: HADOOP-17094
> URL: https://issues.apache.org/jira/browse/HADOOP-17094
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> There are known vulnerabilities in the 
> com.fasterxml.jackson.core:jackson-databind package [,2.9.10.5).
> [List of 
> vulnerabilities|https://snyk.io/vuln/maven:com.fasterxml.jackson.core%3Ajackson-databind].
> Upgrading jackson and jackson-databind to 2.10 should get rid of those 
> vulnerabilities.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145688#comment-17145688
 ] 

Wei-Chiu Chuang commented on HADOOP-17094:
--

+1 to update to jackson-databind 2.10. 

2.9.10 and 2.10 are almost API compatible. But 2.10 leaks a new dependency so 
it might break HBase.

> vulnerabilities reported in jackson and jackson-databind in branch-2.10
> ---
>
> Key: HADOOP-17094
> URL: https://issues.apache.org/jira/browse/HADOOP-17094
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> There are known vulnerabilities in the 
> com.fasterxml.jackson.core:jackson-databind package [,2.9.10.5).
> [List of 
> vulnerabilities|https://snyk.io/vuln/maven:com.fasterxml.jackson.core%3Ajackson-databind].
> Upgrading jackson and jackson-databind to 2.10 should get rid of those 
> vulnerabilities.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17094) vulnerabilities reported in jackson and jackson-databind in branch-2.10

2020-06-25 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17094:
--

 Summary: vulnerabilities reported in jackson and jackson-databind 
in branch-2.10
 Key: HADOOP-17094
 URL: https://issues.apache.org/jira/browse/HADOOP-17094
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.10.0, 2.10.1
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


There are known vulnerabilities in the 
com.fasterxml.jackson.core:jackson-databind package [,2.9.10.5).

[List of 
vulnerabilities|https://snyk.io/vuln/maven:com.fasterxml.jackson.core%3Ajackson-databind].

Upgrading jackson and jackson-databind to 2.10 should get rid of those 
vulnerabilities.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


umamaheswararao commented on pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#issuecomment-649710533


   Seems like there are some other tests to handle these cases from Yetus? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


umamaheswararao commented on a change in pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#discussion_r445698036



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * and check that "viewfs:/" paths work without specifying authority when 
the default mount table name
+   * is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+
+// /HDFSUser/test
+Path hdfsDir = new Path(HDFS_USER_FOLDER + "/test");
+// /local/test
+Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+try (ViewFileSystemOverloadScheme fs = (ViewFileSystemOverloadScheme) 
FileSystem.get(conf)) {
+  fs.mkdirs(hdfsDir); // /HDFSUser/testfile
+  fs.mkdirs(localDir); // /local/test
+}
+
+FileStatus[] expectedStatus;
+try (FileSystem fs = FileSystem.get(conf)) {
+  expectedStatus = fs.listStatus(new Path("/"));
+}
+
+// check for viewfs path without authority
+Path viewFsRootPath = new Path("viewfs:/");
+try {
+  viewFsRootPath.getFileSystem(conf);
+  Assert.fail("Mount table with authority default should not be 
initialized");
+} catch (IOException e) {
+  assertTrue(e.getMessage().contains("Empty Mount table in config for 
viewfs://default/"));
+}
+
+// set the name of the default mount table here and subsequent calls 
should succeed.
+conf.set(Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY, 
defaultFSURI.getAuthority());
+
+try (FileSystem fs = viewFsRootPath.getFileSystem(conf)) {
+  FileStatus[] status = fs.listStatus(viewFsRootPath);
+  // compare only the final components of the paths as full paths have 
different schemes (hdfs:/ vs. viewfs:/).
+  List<String> expectedPaths = Arrays.stream(expectedStatus).map(s -> 
s.getPath().getName()).collect(Collectors.toList());

Review comment:
   Nit: Need formatting of these lines

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
##
@@ -41,12 +41,17 @@
* then the hadoop default value (/user) is used.
*/
   public static final String CONFIG_VIEWFS_HOMEDIR = "homedir";
-  
+
+  /**
+   * Config key to specify the name of the default mount table.
+   */
+  public static final String CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY = 
"fs.viewfs.mounttable.default.name.key";

Review comment:
   You may want to add some documentation in ViewFSOverloadScheme.md docs?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * and check that "viewfs:/" paths work without specifying authority when 
the default mount table name
+   * is set correctly.
+   */
+  @Test
+  public void testAccessViewFsPathWithoutAuthority() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+
+// /HDFSUser/test
+Path hdfsDir = new Path(HDFS_USER_FOLDER + "/test");

Review comment:
   Nit: you may create the path object as new Path(parent, child), so we can 
avoid string concatenation. I know a lot of existing tests have the same 
concat; I recently started using it this way :-)
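   
   A quick sketch of the suggested form, using the names from the test above:
   ```java
   // Instead of string concatenation:
   //   Path hdfsDir = new Path(HDFS_USER_FOLDER + "/test");
   // build the child path from a parent Path:
   Path hdfsDir = new Path(new Path(HDFS_USER_FOLDER), "test");
   ```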

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
##
@@ -235,6 +237,57 @@ public void testListStatusOnNonMountedPath() throws 
Exception {
   Assert.fail("It should fail as no mount link with /nonMount");
 }
   }
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   *

[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-649606358


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
23 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 57s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 42s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 20s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 26s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 50s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  19m 50s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 1964 unchanged - 1 
fixed = 1965 total (was 1965)  |
   | +1 :green_heart: |  compile  |  17m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  17m 24s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1858 unchanged - 1 
fixed = 1859 total (was 1859)  |
   | -0 :warning: |  checkstyle  |   2m 50s |  root: The patch generated 34 new 
+ 160 unchanged - 22 fixed = 194 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 21 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 20s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 169m 16s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.statistics.TestDynamicIOStatistics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2069 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux d8ec44c64db6 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | 

[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection

2020-06-25 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144978#comment-17144978
 ] 

zhengchenyu commented on HADOOP-16254:
--

May I take over this issue, or shall we fix it together? I think 
HADOOP-16254.004.patch is not proper. I have revised this patch and tested it 
in our test cluster, where it works well.

> Add proxy address in IPC connection
> ---
>
> Key: HADOOP-16254
> URL: https://issues.apache.org/jira/browse/HADOOP-16254
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-16254.001.patch, HADOOP-16254.002.patch, 
> HADOOP-16254.004.patch
>
>
> In order to support data locality of RBF, we need to add new field about 
> client hostname in the RPC headers of Router protocol calls.
>  clientHostname represents hostname of client and forward by Router to 
> Namenode to support data locality friendly. See more [RBF Data Locality 
> Design|https://issues.apache.org/jira/secure/attachment/12965092/RBF%20Data%20Locality%20Design.pdf]
>  in HDFS-13248 and [maillist 
> vote|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201904.mbox/%3CCAF3Ajax7hGxvowg4K_HVTZeDqC5H=3bfb7mv5sz5mgvadhv...@mail.gmail.com%3E].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-06-25 Thread GitBox


mukund-thakur commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-649564035


   The test fails only when parallel tests are enabled.
   ```
   mvn -T 1C clean verify -Dtest=none \
       -Dit.test=ITestAbfsInputStreamStatistics#testWithNullStreamStatistics
   ```
   -- Succeeds 
   ```
   mvn -T 1C -Dparallel-tests=abfs clean verify -Dtest=none \
       -Dit.test=ITestAbfsInputStreamStatistics#testWithNullStreamStatistics
   ```
   -- Fails.
   
   The reason is that during parallel tests, paths are created under fork 
directories, but the absolute path is used while getting the path status.
   The fix is to change:
   ```java
   AbfsRestOperation abfsRestOperation = fs.getAbfsClient().getPathStatus(
       "/test/" + getMethodName(), false);
   ```
   to:
   ```java
   AbfsRestOperation abfsRestOperation = fs.getAbfsClient().getPathStatus(
       nullStatFilePath.toUri().getPath(), false);
   ```
   
   Interesting debugging indeed. 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-06-25 Thread GitBox


mukund-thakur commented on a change in pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#discussion_r445414974



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##
@@ -4181,79 +4181,114 @@ public LocatedFileStatus next() throws IOException {
 Path path = qualify(f);
 LOG.debug("listFiles({}, {})", path, recursive);
 try {
-  // if a status was given, that is used, otherwise
-  // call getFileStatus, which triggers an existence check
-  final S3AFileStatus fileStatus = status != null
-  ? status
-  : (S3AFileStatus) getFileStatus(path);
-  if (fileStatus.isFile()) {
+  // if a status was given and it is a file.
+  if (status != null && status.isFile()) {
 // simple case: File
 LOG.debug("Path is a file");
 return new Listing.SingleStatusRemoteIterator(
-toLocatedFileStatus(fileStatus));
-  } else {
-// directory: do a bulk operation
-String key = maybeAddTrailingSlash(pathToKey(path));
-String delimiter = recursive ? null : "/";
-LOG.debug("Requesting all entries under {} with delimiter '{}'",
-key, delimiter);
-final RemoteIterator<S3AFileStatus> cachedFilesIterator;
-final Set<Path> tombstones;
-boolean allowAuthoritative = allowAuthoritative(f);
-if (recursive) {
-  final PathMetadata pm = metadataStore.get(path, true);
-  // shouldn't need to check pm.isDeleted() because that will have
-  // been caught by getFileStatus above.
-  MetadataStoreListFilesIterator metadataStoreListFilesIterator =
-  new MetadataStoreListFilesIterator(metadataStore, pm,
-  allowAuthoritative);
-  tombstones = metadataStoreListFilesIterator.listTombstones();
-  // if all of the below is true
-  //  - authoritative access is allowed for this metadatastore for 
this directory,
-  //  - all the directory listings are authoritative on the client
-  //  - the caller does not force non-authoritative access
-  // return the listing without any further s3 access
-  if (!forceNonAuthoritativeMS &&
-  allowAuthoritative &&
-  metadataStoreListFilesIterator.isRecursivelyAuthoritative()) {
-S3AFileStatus[] statuses = S3Guard.iteratorToStatuses(
-metadataStoreListFilesIterator, tombstones);
-cachedFilesIterator = listing.createProvidedFileStatusIterator(
-statuses, ACCEPT_ALL, acceptor);
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
-  cachedFilesIterator = metadataStoreListFilesIterator;
-} else {
-  DirListingMetadata meta =
-  S3Guard.listChildrenWithTtl(metadataStore, path, ttlTimeProvider,
-  allowAuthoritative);
-  if (meta != null) {
-tombstones = meta.listTombstones();
-  } else {
-tombstones = null;
-  }
-  cachedFilesIterator = listing.createProvidedFileStatusIterator(
-  S3Guard.dirMetaToStatuses(meta), ACCEPT_ALL, acceptor);
-  if (allowAuthoritative && meta != null && meta.isAuthoritative()) {
-// metadata listing is authoritative, so return it directly
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
+toLocatedFileStatus(status));
+  }
+  // Assuming the path to be a directory
+  // do a bulk operation.
+  RemoteIterator<S3ALocatedFileStatus> listFilesAssumingDir =
+  getListFilesAssumingDir(path,
+  recursive,
+  acceptor,
+  collectTombstones,
+  forceNonAuthoritativeMS);
+  // If there are no list entries present, we
+  // fallback to file existence check as the path
+  // can be a file or empty directory.
+  if (!listFilesAssumingDir.hasNext()) {
+final S3AFileStatus fileStatus = (S3AFileStatus) getFileStatus(path);

Review comment:
   We can't do this, as we will still need a listing for a directory which is 
empty; otherwise we will get an FNFE.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable

2020-06-25 Thread GitBox


hadoop-yetus commented on pull request #2100:
URL: https://github.com/apache/hadoop/pull/2100#issuecomment-649370125


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  21m 57s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  7s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 25s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 58s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 48s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 17s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 36s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 53s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 53s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 40s |  root: The patch generated 14 new 
+ 142 unchanged - 2 fixed = 156 total (was 144)  |
   | +1 :green_heart: |  mvnsite  |   2m 54s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 49s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   5m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 22s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  96m  0s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 290m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsLocalFs |
   |   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
   |   | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.viewfs.TestViewFsHdfs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2100/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2100 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0b4c9f09ac49 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 84110d850e2 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubunt

[jira] [Updated] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-06-25 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17086:
--
Fix Version/s: 3.4.0

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0
>
>
> I am seeing errors while running the ABFS Driver against the stg75 build in 
> canary. This is related to parsing errors, as we receive creationTime in the 
> ListPath API. Here are the errors:
> RestVersion: 2020-02-10
>  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify 
> -Dit.test=ITestAzureBlobFileSystemRenameUnicode
> [ERROR] 
> testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)
>   Time elapsed: 852.083 s  <<< ERROR!
> Status code: -1 error code: null error message: 
> InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException:
>  Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; 
> line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]
> ->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
>     at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
> Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializatio
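
For readers hitting the same failure: Jackson 1.x (the org.codehaus.jackson 
classes in the trace above) throws UnrecognizedPropertyException whenever the 
JSON payload carries a field the target class does not declare. Below is a 
minimal sketch, assuming nothing about the actual HADOOP-17086 patch, of the 
two standard Jackson 1.x ways to tolerate a newly added field such as 
creationTime (the class and field names here are illustrative, not the 
driver's real schema):

```java
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.ObjectMapper;

public class CreationTimeParseSketch {

  // Option 1: mark this class so unknown JSON fields are skipped instead of
  // raising UnrecognizedPropertyException during binding.
  @JsonIgnoreProperties(ignoreUnknown = true)
  static class EntrySchema {
    public String name;  // a field the old schema already declares
  }

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // Option 2: the global alternative, applied to every type bound through
    // this mapper rather than per annotated class.
    mapper.configure(
        DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    // "creationTime" is not declared on EntrySchema, yet the parse succeeds
    // because unknown fields are now ignored.
    String json = "{\"name\":\"a.txt\","
        + "\"creationTime\":\"Thu, 25 Jun 2020 00:00:00 GMT\"}";
    EntrySchema entry = mapper.readValue(json, EntrySchema.class);
    System.out.println(entry.name);  // prints: a.txt
  }
}
```

A third route, declaring creationTime on the schema class itself, would also 
stop the exception; which of these the actual fix uses is not shown in this 
thread.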

[jira] [Updated] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-06-25 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17086:
--
Component/s: fs/azure

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0

[jira] [Updated] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-06-25 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17086:
--
Affects Version/s: (was: 3.4.0)
   3.3.0

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0

[jira] [Updated] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-06-25 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17086:
--
Affects Version/s: 3.4.0

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.4.0

[jira] [Updated] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-06-25 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17086:
--
Summary: ABFS: Fix the parsing errors in ABFS Driver with creation Time 
(being returned in ListPath)  (was: Parsing errors in ABFS Driver with creation 
Time (being returned in ListPath))

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major

[GitHub] [hadoop] bilaharith commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


bilaharith commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r445367412



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -389,6 +423,12 @@ private synchronized void 
flushWrittenBytesToServiceAsync() throws IOException {
 
   private synchronized void flushWrittenBytesToServiceInternal(final long 
offset,
   final boolean retainUncommitedData, final boolean isClose) throws 
IOException {
+
+// flush is not called for appendblob as it is not needed
+if (this.isAppendBlob) {
+  return;

Review comment:
   Resolved
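
   As background for that early return, here is a minimal runnable sketch of 
the guard, with the rationale stated as our reading rather than the patch 
author's words (the class and method names below are invented for 
illustration):

```java
import java.io.IOException;

public class AppendBlobFlushSketch {
  private final boolean isAppendBlob;

  AppendBlobFlushSketch(boolean isAppendBlob) {
    this.isAppendBlob = isAppendBlob;
  }

  // Assumption: the service commits each append-blob append as soon as the
  // append call succeeds, so there are no uncommitted client-side bytes for
  // a flush to push, and the method can return immediately.
  synchronized void flushWrittenBytes(long offset, boolean isClose)
      throws IOException {
    if (isAppendBlob) {
      return;  // appends are already durable server-side; flush is a no-op
    }
    // ... block-blob path: issue the flush/commit call for `offset` ...
  }
}
```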





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


bilaharith commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r445364600



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -144,6 +145,10 @@
   private final IdentityTransformerInterface identityTransformer;
   private final AbfsPerfTracker abfsPerfTracker;
 
+  /**
+   * The set of directories where we should store files as append blobs.
+   */
+  private Set appendBlobDirSet;

Review comment:
   It would be better to add a new line here, since the next line is a 
constructor.
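
   On what that set holds: a hypothetical sketch of populating a directory 
set like appendBlobDirSet from a comma-separated configuration value (the 
key name "fs.azure.appendblob.directories" is invented here, not the PR's 
actual configuration key):

```java
import java.util.HashSet;
import java.util.Set;

public class AppendBlobDirConfigSketch {

  // Parse a comma-separated list of directory paths into a lookup set.
  static Set<String> parseAppendBlobDirs(String configValue) {
    Set<String> dirs = new HashSet<>();
    if (configValue == null || configValue.trim().isEmpty()) {
      return dirs;  // no configured directories: feature effectively off
    }
    for (String dir : configValue.split(",")) {
      dirs.add(dir.trim());  // normalize whitespace around each entry
    }
    return dirs;
  }

  public static void main(String[] args) {
    Set<String> dirs = parseAppendBlobDirs("/logs, /streaming/checkpoints");
    System.out.println(dirs.contains("/logs"));  // true
  }
}
```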





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-25 Thread GitBox


snvijaya commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r445355225



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -378,6 +428,7 @@ private synchronized void 
flushWrittenBytesToService(boolean isClose) throws IOE
 flushWrittenBytesToServiceInternal(position, false, isClose);
   }
 
+

Review comment:
   unnecessary additional new line.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
##
@@ -47,6 +47,7 @@
 
   // Default upload and download buffer size
   public static final int DEFAULT_WRITE_BUFFER_SIZE = 8 * ONE_MB;  // 8 MB
+  public static final int APPENDBLOB_MAX_WRITE_BUFFER_SIZE = 4 * ONE_MB;  // 8 MB

Review comment:
   Minor. Fix the comment: the value is 4 MB, but the trailing comment still 
says 8 MB.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -337,7 +344,6 @@ public void processResponse(final byte[] buffer, final int 
offset, final int len
 if (this.isTraceEnabled) {
   startTime = System.nanoTime();
 }
-

Review comment:
   Undo. newline needed after a block.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -185,13 +189,18 @@ void execute() throws AzureBlobFileSystemException {
   try {
 LOG.debug("Retrying REST operation {}. RetryCount = {}",
 operationType, retryCount);
+

Review comment:
   remove newline

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -185,13 +189,18 @@ void execute() throws AzureBlobFileSystemException {
   try {
 LOG.debug("Retrying REST operation {}. RetryCount = {}",
 operationType, retryCount);
+
 Thread.sleep(client.getRetryPolicy().getRetryInterval(retryCount));
   } catch (InterruptedException ex) {
 Thread.currentThread().interrupt();
   }
 }
 
 if (result.getStatusCode() >= HttpURLConnection.HTTP_BAD_REQUEST) {
+  if (this.isAppendBlobAppend && retryCount > 0 && result.getStorageErrorCode().equals("InvalidQueryParameterValue")) {

Review comment:
   Why is it that, for HTTP status code 400 and above, the exception is 
suppressed? Can you please add code comments explaining the reason.
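
   One plausible reading of that guard, offered purely as an assumption on 
our part and not as anything stated in the PR: on a retry (retryCount > 0) 
the first attempt may already have committed on the service, and the 
replayed append can then fail with a client error such as 
InvalidQueryParameterValue even though the data is durable. A self-contained 
sketch of that suppression shape:

```java
import java.io.IOException;
import java.net.HttpURLConnection;

public class AppendBlobRetrySketch {

  static void checkResult(int statusCode, String storageErrorCode,
      boolean isAppendBlobAppend, int retryCount) throws IOException {
    if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
      // Assumption: only a retried append-blob append failing with this
      // specific error code is treated as benign (the data already landed).
      boolean benignReplayFailure = isAppendBlobAppend
          && retryCount > 0
          && "InvalidQueryParameterValue".equals(storageErrorCode);
      if (!benignReplayFailure) {
        throw new IOException("HTTP " + statusCode + ": " + storageErrorCode);
      }
      // else: swallow the failure so the operation reports success
    }
  }
}
```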

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -430,10 +487,15 @@ private synchronized void shrinkWriteOperationQueue() 
throws IOException {
   }
 
   private void waitForTaskToComplete() throws IOException {
+

Review comment:
   remove newline

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -367,7 +418,6 @@ private synchronized void 
flushWrittenBytesToService(boolean isClose) throws IOE
 throw new FileNotFoundException(ex.getMessage());
   }
 }
-

Review comment:
   Undo. new line needed.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -389,6 +440,12 @@ private synchronized void 
flushWrittenBytesToServiceAsync() throws IOException {
 
   private synchronized void flushWrittenBytesToServiceInternal(final long 
offset,
   final boolean retainUncommitedData, final boolean isClose) throws 
IOException {
+

Review comment:
   remove newline

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
##
@@ -272,7 +272,8 @@ public AbfsRestOperation deleteFilesystem() throws 
AzureBlobFileSystemException
   }
 
   public AbfsRestOperation createPath(final String path, final boolean isFile, 
final boolean overwrite,
-  final String permission, final String 
umask) throws AzureBlobFileSystemException {
+  final String permission, final String 
umask,
+  final boolean appendBlob) throws 
AzureBlobFileSystemException {

Review comment:
   Minor. The boolean flag would be better named isAppendBlob.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org