[GitHub] [hadoop] ayushtkn commented on a change in pull request #2107: HDFS-15430. create should work when parent dir is internalDir and fallback configured.

2020-07-03 Thread GitBox


ayushtkn commented on a change in pull request #2107:
URL: https://github.com/apache/hadoop/pull/2107#discussion_r449744920



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -765,4 +766,151 @@ public void testMkdirsShouldReturnFalseWhenFallbackFSNotAvailable()
       assertTrue(fsTarget.exists(test));
     }
   }
+
+  /**
+   * Tests that creating a file succeeds when its parent directory is the
+   * same as an existing fallback directory. The new file should be created
+   * in the fallback.
+   */
+  @Test
+  public void testCreateFileOnInternalMountDirWithSameDirTreeExistInFallback()
+      throws Exception {
+    Configuration conf = new Configuration();
+    ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+        new Path(targetTestRoot.toString()).toUri());
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    Path dir1 = new Path(fallbackTarget, "user1/hive/warehouse/partition-0");
+    fsTarget.mkdirs(dir1);
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path vfsTestFile = new Path("/user1/hive/warehouse/test.file");
+      Path testFileInFallback = Path.mergePaths(fallbackTarget, vfsTestFile);
+      assertFalse(fsTarget.exists(testFileInFallback));
+      assertTrue(fsTarget.exists(testFileInFallback.getParent()));
+      vfs.create(vfsTestFile).close();
+      assertTrue(fsTarget.exists(testFileInFallback));
+    }
+  }
+
+  /**
+   * Tests creating a new file whose path does not match any internal
+   * directory; it should be created in the fallback.
+   */
+  @Test
+  public void testCreateNewFileWithOutMatchingToMountDirOrFallbackDirPath()
+      throws Exception {
+    Configuration conf = new Configuration();
+    ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+        new Path(targetTestRoot.toString()).toUri());
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    fsTarget.mkdirs(fallbackTarget);
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path vfsTestFile = new Path("/user2/test.file");
+      Path testFileInFallback = Path.mergePaths(fallbackTarget, vfsTestFile);
+      assertFalse(fsTarget.exists(testFileInFallback));
+      // user2 does not exist in fallback
+      assertFalse(fsTarget.exists(testFileInFallback.getParent()));
+      vfs.create(vfsTestFile).close();
+      // /user2/test.file should be created in fallback
+      assertTrue(fsTarget.exists(testFileInFallback));
+    }
+  }
+
+  /**
+   * Tests creating a new file on root whose name does not match any of the
+   * fallback files on root.
+   */
+  @Test
+  public void testCreateFileOnRootWithFallbackEnabled() throws Exception {
+    Configuration conf = new Configuration();
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    fsTarget.mkdirs(fallbackTarget);
+
+    ConfigUtil.addLink(conf, "/user1/hive/",
+        new Path(targetTestRoot.toString()).toUri());
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path vfsTestFile = new Path("/test.file");
+      Path testFileInFallback = Path.mergePaths(fallbackTarget, vfsTestFile);
+      assertFalse(fsTarget.exists(testFileInFallback));
+      vfs.create(vfsTestFile).close();
+      // /test.file should be created in fallback
+      assertTrue(fsTarget.exists(testFileInFallback));
+    }
+  }
+
+  /**
+   * Tests creating a file on root whose path matches an existing file on
+   * the fallback's root.
+   */
+  @Test (expected = FileAlreadyExistsException.class)
+  public void testCreateFileOnRootWithFallbackWithFileAlreadyExist()
+      throws Exception {
+    Configuration conf = new Configuration();
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    Path testFile = new Path(fallbackTarget, "test.file");
+    // pre-creating test file in fallback.
+    fsTarget.create(testFile).close();
+
+    ConfigUtil.addLink(conf, "/user1/hive/",
+        new Path(targetTestRoot.toString()).toUri());
+    ConfigUtil.addLinkFallback(conf, fallbackTarget.toUri());
+
+    try (FileSystem vfs = FileSystem.get(viewFsDefaultClusterUri, conf)) {
+      Path vfsTestFile = new Path("/test.file");
+      assertTrue(fsTarget.exists(testFile));
+      vfs.create(vfsTestFile, false).close();
+    }
+  }
+
+  /**
+   * Tests creating a file whose path is the same as a mount link path.
+   */
+  @Test(expected = FileAlreadyExistsException.class)
+  public void testCreateFileWhereThePathIsSameAsItsMountLinkPath()
+      throws Exception {
+    Configuration conf = new Configuration();
+    Path fallbackTarget = new Path(targetTestRoot, "fallbackDir");
+    fsTarget.mkdirs(fallbackTarget);

[GitHub] [hadoop] umamaheswararao opened a new pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.

2020-07-03 Thread GitBox


umamaheswararao opened a new pull request #2121:
URL: https://github.com/apache/hadoop/pull/2121


   https://issues.apache.org/jira/browse/HDFS-15449






[GitHub] [hadoop] hadoop-yetus commented on pull request #2119: [HDFS-15451] Do not discard non-initial block report for provided storage

2020-07-03 Thread GitBox


hadoop-yetus commented on pull request #2119:
URL: https://github.com/apache/hadoop/pull/2119#issuecomment-653717340


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  21m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 28s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 54s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 21s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  93m 12s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate ASF License warnings.  |
   |  |   | 183m 35s |   |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2119/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2119 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 91406d543538 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e0cededfbd2 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2119/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2119/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2119/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2119/2/testReport/ |
   | Max. process+thread count | 4255 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://build

[GitHub] [hadoop] shanyu commented on pull request #2119: [HDFS-15451] Do not discard non-initial block report for provided storage

2020-07-03 Thread GitBox


shanyu commented on pull request #2119:
URL: https://github.com/apache/hadoop/pull/2119#issuecomment-653700426


   Added a unit test case testSafeModeWithProvidedStorageBR.






[jira] [Commented] (HADOOP-16754) Fix docker failed to build yetus/hadoop

2020-07-03 Thread Yuanliang Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151121#comment-17151121
 ] 

Yuanliang Zhang commented on HADOOP-16754:
--

[~pingsutw]

I just tried to install Hadoop 3.2.1 on my Mac and ran into this problem as 
well. Manually editing the Dockerfile as you did in the patch solved it. 
Perhaps you could provide a patch for 3.2.1 too.

> Fix docker failed to build yetus/hadoop
> ---
>
> Key: HADOOP-16754
> URL: https://issues.apache.org/jira/browse/HADOOP-16754
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kevin Su
>Assignee: Kevin Su
>Priority: Blocker
> Fix For: 3.3.0, 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HADOOP-16754.001.patch, HADOOP-16754.002.patch, 
> HADOOP-16754.003.patch, HADOOP-16754.branch-2.10-001.patch, 
> HADOOP-16754.branch-2.8-001.patch, HADOOP-16754.branch-2.9-001.patch, 
> HADOOP-16754.branch-3.1-001.patch
>
>
> Docker failed to build yetus/hadoop
> [https://builds.apache.org/job/hadoop-multibranch/job/PR-1745/1/console]
> error message : 
> {code:java}
> 07:56:02 Cannot add PPA: 'ppa:~jonathonf/ubuntu/ghc-8.0.2'.
> 07:56:02 The user named '~jonathonf' has no PPA named 'ubuntu/ghc-8.0.2'
> 07:56:02 Please choose from the following available PPAs:
> 07:56:02 'ansible': Ansible
> 07:56:02 'aria2': aria2
> 07:56:02 'atslang': ATS2 programming language
> 07:56:02 'backports': Backport collection{code}
> ~jonathonf/ubuntu/ghc-8.0.2 was not found among jonathonf's PPAs, so we need 
> to switch to another PPA.
>  






[GitHub] [hadoop] crossfire commented on a change in pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-07-03 Thread GitBox


crossfire commented on a change in pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#discussion_r449700748



##
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
##
@@ -1062,6 +1062,38 @@ public void testRelativeIncludes() throws Exception {
     new File(new File(relConfig).getParent()).delete();
   }
 
+  @Test
+  public void testRelativeIncludesWithLoadingViaUri() throws Exception {
+    tearDown();
+    File configFile = new File("./tmp/test-config.xml");
+    File configFile2 = new File("./tmp/test-config2.xml");
+
+    new File(configFile.getParent()).mkdirs();
+    out = new BufferedWriter(new FileWriter(configFile2));
+    startConfig();
+    appendProperty("a", "b");
+    endConfig();
+
+    out = new BufferedWriter(new FileWriter(configFile));
+    startConfig();
+    // Add the relative path instead of the absolute one.
+    startInclude(configFile2.getName());
+    endInclude();
+    appendProperty("c", "d");
+    endConfig();
+
+    // verify that the included file contributes all its properties
+    Path fileResource = new Path(configFile.toURI());
+    conf.addResource(fileResource);
+    assertEquals(conf.get("a"), "b");

Review comment:
   @steveloughran Thanks for the review! You mean we should write it like 
this, right? (considering the `expected`/`actual` argument order)
   ```java
   assertEquals("b", conf.get("a"));
   ```








[jira] [Commented] (HADOOP-17110) Replace Guava Preconditions to avoid Guava dependency

2020-07-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151070#comment-17151070
 ] 

Steve Loughran commented on HADOOP-17110:
-

We could add our own methods with the same names somewhere, maybe in a new 
"noguava" package.

There's a java.util.Objects.requireNonNull method we can use, though its error 
message generation is a bit fiddlier: the lazy form takes a lambda expression.
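
To make that concrete, a minimal sketch of the two styles (illustrative names, not code from any patch):

```java
import java.util.Objects;

public class PreconditionsSketch {
  static Object checkOpen(Object fs, String uri) {
    // Guava: Preconditions.checkNotNull(fs, "no filesystem for %s", uri)
    // formats the message from a template eagerly.
    // JDK equivalent: the message is supplied lazily via a lambda, so the
    // String.format call only happens when fs is actually null.
    return Objects.requireNonNull(fs,
        () -> String.format("no filesystem for %s", uri));
  }

  public static void main(String[] args) {
    System.out.println(checkOpen(new Object(), "s3a://bucket/"));
  }
}
```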

> Replace Guava Preconditions to avoid Guava dependency
> -
>
> Key: HADOOP-17110
> URL: https://issues.apache.org/jira/browse/HADOOP-17110
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> By far one of the most painful replacements in Hadoop. There are two options:
> # Use Apache Commons
> # Use a Java wrapper with no third-party dependency.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Preconditions' in project with 
> mask '*.java'
> Found Occurrences  (577 usages found)
> org.apache.hadoop.conf  (2 usages found)
> Configuration.java  (1 usage found)
> 108 import com.google.common.base.Preconditions;
> ReconfigurableBase.java  (1 usage found)
> 22 import com.google.common.base.Preconditions;
> org.apache.hadoop.crypto  (7 usages found)
> AesCtrCryptoCodec.java  (1 usage found)
> 23 import com.google.common.base.Preconditions;
> CryptoInputStream.java  (1 usage found)
> 33 import com.google.common.base.Preconditions;
> CryptoOutputStream.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> CryptoStreamUtils.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> JceAesCtrCryptoCodec.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> OpensslAesCtrCryptoCodec.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> OpensslCipher.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> org.apache.hadoop.crypto.key  (2 usages found)
> JavaKeyStoreProvider.java  (1 usage found)
> 21 import com.google.common.base.Preconditions;
> KeyProviderCryptoExtension.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> org.apache.hadoop.crypto.key.kms  (3 usages found)
> KMSClientProvider.java  (1 usage found)
> 83 import com.google.common.base.Preconditions;
> LoadBalancingKMSClientProvider.java  (1 usage found)
> 54 import com.google.common.base.Preconditions;
> ValueQueue.java  (1 usage found)
> 36 import com.google.common.base.Preconditions;
> org.apache.hadoop.crypto.key.kms.server  (5 usages found)
> KeyAuthorizationKeyProvider.java  (1 usage found)
> 35 import com.google.common.base.Preconditions;
> KMS.java  (1 usage found)
> 20 import com.google.common.base.Preconditions;
> KMSAudit.java  (1 usage found)
> 24 import com.google.common.base.Preconditions;
> KMSWebApp.java  (1 usage found)
> 29 import com.google.common.base.Preconditions;
> MiniKMS.java  (1 usage found)
> 29 import com.google.common.base.Preconditions;
> org.apache.hadoop.crypto.random  (1 usage found)
> OpensslSecureRandom.java  (1 usage found)
> 25 import com.google.common.base.Preconditions;
> org.apache.hadoop.fs  (19 usages found)
> ByteBufferUtil.java  (1 usage found)
> 29 import com.google.common.base.Preconditions;
> ChecksumFileSystem.java  (1 usage found)
> 32 import com.google.common.base.Preconditions;
> FileContext.java  (1 usage found)
> 68 import com.google.common.base.Preconditions;
> FileEncryptionInfo.java  (2 usages found)
> 27 import static 
> com.google.common.base.Preconditions.checkArgument;
> 28 import static 
> com.google.common.base.Preconditions.checkNotNull;
> FileSystem.java  (2 usages found)
> 86 import com.google.common.base.Preconditions;
> 91 import static 
> com.google.common.base.Preconditions.checkArgument;
> FileSystemStorageStatistics.java  (1 usage found)
> 23 import com.google.common.base.Preconditions;
> FSDataOutputStreamBuilder.java  (1 usage found)
> 31 import static 
> com.google.common.base.Preconditions.checkNotNull;
> FSInputStream.java  (1 usage found)
> 24 import com.google.common.base.Preconditions;
> FsUrlConnection.java  (1 usage found)
> 27 import com.google.common.base.Preconditions;
> Globa

[jira] [Commented] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-07-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151061#comment-17151061
 ] 

Steve Loughran commented on HADOOP-16830:
-

[~lucacanali] can you look at the latest PR? 
https://github.com/apache/hadoop/pull/2069

I can use it to collect/aggregate stats across workers, marshal them as JSON, 
and save them in the _SUCCESS file.

The big limitation is that without thread-local stats contexts we don't get as 
much information as we could about performance, especially 
reading/seeking/network throttling &c. Somehow we are going to need to do 
that, but not yet. At least here we can start, especially if the ORC/Parquet 
readers collect their stats from all the streams they read, *and* something 
collects those.

I promise I will collect stats on IO work performed across multiple threads on 
behalf of a caller, if people commit to writing the wiring needed to retrieve 
and aggregate that.
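
As a rough sketch of what cross-thread aggregation could look like (a hand-rolled illustration, not the IOStatistics API from the PR):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Illustrative only: shared counters that many worker threads can update. */
public class IoStatsSketch {
  private final ConcurrentHashMap<String, LongAdder> counters =
      new ConcurrentHashMap<>();

  /** Safe to call from any worker thread; LongAdder is cheap under contention. */
  public void add(String statistic, long delta) {
    counters.computeIfAbsent(statistic, k -> new LongAdder()).add(delta);
  }

  /** Snapshot for the job driver, e.g. to marshal as JSON into _SUCCESS. */
  public Map<String, Long> snapshot() {
    Map<String, Long> out = new TreeMap<>();
    counters.forEach((k, v) -> out.put(k, v.sum()));
    return out;
  }
}
```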

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Applications like to collect the statistics for specific operations by 
> recording exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver.
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it.
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly.
> Proposed:
> # A new IOStatistics interface to serve up statistics
> # S3A to implement it
> # other stores to follow
> # pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation-context 
> stats, and how to actually implement it.
> ThreadLocal isn't enough because the helper threads need to update the 
> thread-local value of the instigator.
> My initial PoC doesn't address that issue, but it shows what I'm thinking of.






[jira] [Commented] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-07-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17150942#comment-17150942
 ] 

Hudson commented on HADOOP-16961:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18403 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18403/])
HADOOP-16961. ABFS: Adding metrics to AbfsInputStream (#2076) (github: rev 
3b5c9a90c07e6360007f3f4aa357aa665b47ca3a)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsInputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatistics.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatisticsImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java


> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adding some logging.






[jira] [Assigned] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-07-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16961:
---

Assignee: Mehakmeet Singh  (was: Gabor Bota)

> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Mehakmeet Singh
>Priority: Major
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adding some logging.






[jira] [Commented] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI

2020-07-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151066#comment-17151066
 ] 

Steve Loughran commented on HADOOP-17088:
-

The check for isRestricted already executes before this, so I don't see it 
opening up access more.

> Failed to load Xinclude files with relative path in case of loading conf via 
> URI
> 
>
> Key: HADOOP-17088
> URL: https://issues.apache.org/jira/browse/HADOOP-17088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yushi Hayasaka
>Priority: Major
>
> When we create a configuration file which loads an external XML file with a 
> relative path, and try to load it by calling `Configuration.addResource` 
> with `Path(URI)`, we get an error: it fails to load the external XML after 
> https://issues.apache.org/jira/browse/HADOOP-14216 was merged.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: java.io.IOException: 
> Fetch fail on include for 'mountTable.xml' with no fallback while loading 
> 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896)
>   at com.company.test.Main.main(Main.java:29)
> Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' 
> with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
>   ... 4 more
> {noformat}
> The cause is that the URI is passed as a string to the java.io.File 
> constructor, and File does not support file URIs, so my suggestion is to 
> try converting the string to a URI first.
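
A small standalone sketch of that failure mode (the path here is illustrative):

```java
import java.io.File;
import java.net.URI;

public class FileUriSketch {
  public static void main(String[] args) {
    String name = "file:/opt/hadoop/etc/hadoop/core-site.xml";
    // Passing the URI string straight to File treats "file:" as part of a
    // relative path, so the parent directory used to resolve includes is wrong.
    File viaString = new File(name);
    // Converting to a URI first resolves the real location, so a sibling
    // include such as mountTable.xml can be found.
    File viaUri = new File(URI.create(name));
    System.out.println(viaString.getParent()); // file:/opt/hadoop/etc/hadoop
    System.out.println(viaUri.getParent());    // /opt/hadoop/etc/hadoop
  }
}
```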






[jira] [Resolved] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-07-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16961.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

Currently in 3.4, though we plan to backport to branch-3.3 once a backport 
conflict is resolved.

> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adding some logging.






[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer

2020-07-03 Thread Krzysztof Adamski (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151009#comment-17151009
 ] 

Krzysztof Adamski commented on HADOOP-17112:


Maybe you would know, [~ste...@apache.org]?

> whitespace not allowed in paths when saving files to s3a via committer
> --
>
> Key: HADOOP-17112
> URL: https://issues.apache.org/jira/browse/HADOOP-17112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Krzysztof Adamski
>Priority: Major
> Attachments: image-2020-07-03-16-08-52-340.png
>
>
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
> against hadoop-3.2 with the following specs
>  --conf 
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
>  
>  --conf 
> spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
>  
>  --conf 
> spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
>  
>  --conf spark.hadoop.fs.s3a.committer.name=partitioned 
>  --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
>  we are unable to save the file with whitespace character in the path. It 
> works fine without.
> I was looking into the recent commits with regards to qualifying the path, 
> but couldn't find anything obvious. Is this a known bug?
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
> against hadoop-3.2 with the following specs
> --conf 
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
>   
> --conf 
> spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
>  
> --conf 
> spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
>  
> --conf spark.hadoop.fs.s3a.committer.name=partitioned 
> --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
> we are unable to save the file with whitespace character in the path. It 
> works fine without.
> I was looking into the recent commits with regards to qualifying the path, 
> but couldn't find anything obvious. Is this a known bug?
> !image-2020-07-03-16-08-52-340.png!






[jira] [Created] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer

2020-07-03 Thread Krzysztof Adamski (Jira)
Krzysztof Adamski created HADOOP-17112:
--

 Summary: whitespace not allowed in paths when saving files to s3a 
via committer
 Key: HADOOP-17112
 URL: https://issues.apache.org/jira/browse/HADOOP-17112
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Krzysztof Adamski
 Attachments: image-2020-07-03-16-08-52-340.png

When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
against hadoop-3.2 with the following specs
--conf 
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
  
--conf 
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
 
--conf 
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
 
--conf spark.hadoop.fs.s3a.committer.name=partitioned 
--conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
we are unable to save the file with whitespace character in the path. It works 
fine without.

I was looking into the recent commits with regards to qualifying the path, but 
couldn't find anything obvious. Is this a known bug?

!image-2020-07-03-16-08-15-852.png!






[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer

2020-07-03 Thread Krzysztof Adamski (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krzysztof Adamski updated HADOOP-17112:
---
Description: 
When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
against hadoop-3.2 with the following specs
 --conf 
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
 
 --conf 
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
 
 --conf 
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
 
 --conf spark.hadoop.fs.s3a.committer.name=partitioned 
 --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
 we are unable to save the file with whitespace character in the path. It works 
fine without.

I was looking into the recent commits with regards to qualifying the path, but 
couldn't find anything obvious. Is this a known bug?

When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
against hadoop-3.2 with the following specs
--conf 
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
  
--conf 
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
 
--conf 
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
 
--conf spark.hadoop.fs.s3a.committer.name=partitioned 
--conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
we are unable to save the file with whitespace character in the path. It works 
fine without.

I was looking into the recent commits with regards to qualifying the path, but 
couldn't find anything obvious. Is this a known bug?

!image-2020-07-03-16-08-52-340.png!

  was:
When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
against hadoop-3.2 with the following specs
--conf 
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
  
--conf 
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
 
--conf 
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
 
--conf spark.hadoop.fs.s3a.committer.name=partitioned 
--conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
we are unable to save the file with whitespace character in the path. It works 
fine without.

I was looking into the recent commits with regards to qualifying the path, but 
couldn't find anything obvious. Is this a known bug?

!image-2020-07-03-16-08-15-852.png!


> whitespace not allowed in paths when saving files to s3a via committer
> --
>
> Key: HADOOP-17112
> URL: https://issues.apache.org/jira/browse/HADOOP-17112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Krzysztof Adamski
>Priority: Major
> Attachments: image-2020-07-03-16-08-52-340.png
>
>
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
> against hadoop-3.2 with the following specs
>  --conf 
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
>  
>  --conf 
> spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
>  
>  --conf 
> spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
>  
>  --conf spark.hadoop.fs.s3a.committer.name=partitioned 
>  --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
>  we are unable to save the file with whitespace character in the path. It 
> works fine without.
> I was looking into the recent commits with regards to qualifying the path, 
> but couldn't find anything obvious. Is this a known bug?
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
> against hadoop-3.2 with the following specs
> --conf 
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
>   
> --conf 
> spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
>  
> --conf 
> spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
>  
> --conf spark.hadoop.fs.s3a.committer.name=partitioned 
> --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
> we are unable to save the file with whitespace character in the path. It 
> works fine without.
> I was looking into the recent commits with regards to qualifying the path, 
> but couldn't find anything obvious. Is this a known bug?
> !image-2020-07-03-16-08-52-340

[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer

2020-07-03 Thread Krzysztof Adamski (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krzysztof Adamski updated HADOOP-17112:
---
Attachment: image-2020-07-03-16-08-52-340.png

> whitespace not allowed in paths when saving files to s3a via committer
> --
>
> Key: HADOOP-17112
> URL: https://issues.apache.org/jira/browse/HADOOP-17112
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Krzysztof Adamski
>Priority: Major
> Attachments: image-2020-07-03-16-08-52-340.png
>
>
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled 
> against hadoop-3.2 with the following specs
> --conf 
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
>   
> --conf 
> spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
>  
> --conf 
> spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
>  
> --conf spark.hadoop.fs.s3a.committer.name=partitioned 
> --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
> we are unable to save the file with whitespace character in the path. It 
> works fine without.
> I was looking into the recent commits with regards to qualifying the path, 
> but couldn't find anything obvious. Is this a known bug?
> !image-2020-07-03-16-08-15-852.png!






[jira] [Commented] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-07-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151100#comment-17151100
 ] 

Ayush Saxena commented on HADOOP-17102:
---

Adding a rule preventing imports seems like a big decision; maybe you should 
notify the dev mailing list as well, in case anyone has concerns about putting 
such restrictions in place.

> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17102.001.patch, HADOOP-17102.002.patch
>
>
> We should have precommit rules to prevent further usage of Guava classes 
> whose replacements are available in Java 8+.
> A list replacing Guava APIs with Java 8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                 java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}
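
To make the table above concrete, a sketch of one such migration (illustrative code, not from the patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JoinerMigration {
  public static void main(String[] args) {
    List<String> hosts = Arrays.asList("nn1", "nn2", "nn3");
    // Before (Guava): Joiner.on(",").join(hosts)
    // After (Java 8), either form works:
    String joined = String.join(",", hosts);
    String streamed = hosts.stream().collect(Collectors.joining(","));
    System.out.println(joined.equals(streamed)); // true
  }
}
```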






[jira] [Commented] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-07-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151077#comment-17151077
 ] 

Hudson commented on HADOOP-17086:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18404 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18404/])
HADOOP-17086. ABFS: Making the ListStatus response ignore unknown (github: rev 
e0cededfbd2f11919102f01f9bf3ce540ffd6e94)
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ListResultSchemaTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResultEntrySchema.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResultSchema.java


> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.3.1
>
>
> I am seeing errors while running ABFS Driver against stg75 build in canary. 
> This is related to parsing errors as we receive creationTime in the ListPath 
> API. Here are the errors:
> RestVersion: 2020-02-10
>  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify 
> -Dit.test=ITestAzureBlobFileSystemRenameUnicode
> [ERROR] 
> testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)
>   Time elapsed: 852.083 s  <<< ERROR!
> Status code: -1 error code: null error message: 
> InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException:
>  Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; 
> line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]
> ->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
>     at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
> Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebf

[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-03 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-653659556


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 30 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 14s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 11s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 24s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 55s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  19m 55s |  root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 1965 unchanged - 1 fixed = 1966 total (was 1966)  |
   | +1 :green_heart: |  compile  |  17m 15s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  17m 15s |  root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1859 unchanged - 1 fixed = 1860 total (was 1860)  |
   | -0 :warning: |  checkstyle  |   2m 55s |  root: The patch generated 14 new + 200 unchanged - 23 fixed = 214 total (was 223)  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 11 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  shadedclient  |  15m 32s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  hadoop-common in the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   3m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 29s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   1m 38s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate ASF License warnings.  |
   |  |   | 172m 12s |   |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.TestLocalFileSystem |
   |   | hadoop.fs.statistics.TestDynamicIOStatistics |
   |   | hadoop.fs.s3a.commit.staging.TestStagingCommitter |
   |   | hadoop.fs.s3a.commit.staging.TestStagingDirectoryOutputCommitter |
   |   | hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale |
   |   | hadoop.fs.s3a.commit.staging.TestStagingPartitio

[jira] [Resolved] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-07-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17086.
-
Fix Version/s: (was: 3.4.0)
   3.3.1
   Resolution: Fixed

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.3.1
>
>
> I am seeing errors while running ABFS Driver against stg75 build in canary. 
> This is related to parsing errors as we receive creationTime in the ListPath 
> API. Here are the errors:
> RestVersion: 2020-02-10
>  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify 
> -Dit.test=ITestAzureBlobFileSystemRenameUnicode
> [ERROR] 
> testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)
>   Time elapsed: 852.083 s  <<< ERROR!
> Status code: -1 error code: null error message: 
> InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException:
>  Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; 
> line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]
> ->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
>     at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
> Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; 
> line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]
> ->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyExc

[GitHub] [hadoop] steveloughran commented on a change in pull request #2089: HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart

2020-07-03 Thread GitBox


steveloughran commented on a change in pull request #2089:
URL: https://github.com/apache/hadoop/pull/2089#discussion_r449674051



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
##
@@ -639,4 +639,23 @@ public Boolean get() {
   private static String getPluginUrlsAsString() {
 return "file:metrics2-test-plugin.jar";
   }
+
+  @Test
+  public void testMetricSystemRestart() {
+MetricsSystemImpl ms = new MetricsSystemImpl("msRestartTestSystem");
+TestSink ts = new TestSink();
+String sinkName = "restartTestSink";
+
+try {
+  ms.start();
+  ms.register(sinkName, "", ts);
+  assertNotNull("an adapter should exist for each sink", 
ms.getSinkAdapter(sinkName));

Review comment:
   we're generally still 80 chars wide, I'm afraid... put the actual probe 
on a new line.
   nice to see some text for the assertion though - appreciated
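   A sketch of the wrapped form being asked for, purely illustrative, with 
the message on one line and the actual probe on the next so nothing runs 
past 80 characters:

```java
// assertion message first, the probe on its own continuation line,
// keeping both within the project's 80-character limit
assertNotNull("an adapter should exist for each sink",
    ms.getSinkAdapter(sinkName));
```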





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2101: HADOOP-17086. ABFS: Making the ListStatus response model to ignore un…

2020-07-03 Thread GitBox


steveloughran commented on pull request #2101:
URL: https://github.com/apache/hadoop/pull/2101#issuecomment-653638487


   +1
   merged to trunk and branch-3.3



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2101: HADOOP-17086. ABFS: Making the ListStatus response model to ignore un…

2020-07-03 Thread GitBox


steveloughran merged pull request #2101:
URL: https://github.com/apache/hadoop/pull/2101


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-07-03 Thread GitBox


steveloughran commented on a change in pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#discussion_r449671131



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -3247,7 +3249,15 @@ private void handleInclude() throws XMLStreamException, 
IOException {
   File href = new File(confInclude);
   if (!href.isAbsolute()) {
 // Included resources are relative to the current resource
-File baseFile = new File(name).getParentFile();
+File baseFile;
+
+try {
+  baseFile = new File(new URI(name));
+} catch (IllegalArgumentException | URISyntaxException e) {
+  baseFile = new File(name);
+}
+
+baseFile = baseFile.getParentFile();
 href = new File(baseFile, href.getPath());

Review comment:
   I was worried here about what if baseFile = null at this point, but 
java.io.File is happy with that.
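   A small illustration of the behaviour being relied on: per the 
java.io.File javadoc, a null parent makes the two-argument constructor 
behave like the single-argument one (the file name below is hypothetical):

```java
import java.io.File;

public class NullParentDemo {
  public static void main(String[] args) {
    // new File((File) null, child) resolves as new File(child),
    // so a null baseFile does not trigger a NullPointerException here.
    File href = new File((File) null, "core-site.xml");
    System.out.println(href.getPath()); // prints "core-site.xml"
  }
}
```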
   

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
##
@@ -1062,6 +1062,38 @@ public void testRelativeIncludes() throws Exception {
 new File(new File(relConfig).getParent()).delete();
   }
 
+  @Test
+  public void testRelativeIncludesWithLoadingViaUri() throws Exception {
+tearDown();
+File configFile = new File("./tmp/test-config.xml");
+File configFile2 = new File("./tmp/test-config2.xml");
+
+new File(configFile.getParent()).mkdirs();
+out = new BufferedWriter(new FileWriter(configFile2));
+startConfig();
+appendProperty("a", "b");
+endConfig();
+
+out = new BufferedWriter(new FileWriter(configFile));
+startConfig();
+// Add the relative path instead of the absolute one.
+startInclude(configFile2.getName());
+endInclude();
+appendProperty("c", "d");
+endConfig();
+
+// verify that the includes file contains all properties
+Path fileResource = new Path(configFile.toURI());
+conf.addResource(fileResource);
+assertEquals(conf.get("a"), "b");

Review comment:
   params are the wrong way round. not your fault, but there's no reason to 
replicate the existing issue.
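   That is, JUnit's assertEquals takes (expected, actual), so the corrected 
call would presumably read:

```java
// expected value first, then the actual value under test
assertEquals("b", conf.get("a"));
```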





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-03 Thread GitBox


steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-653619415


   latest patch wires up stats collection from the workers on an s3a committer 
job, marshalls them as json in .pending/.pendingset files and then finally 
aggregates them into the _SUCCESS job summary file. Here's an example of a test 
run.
   
   ```json
   2020-07-03 16:47:08,981 
[JUnit-ITestMagicCommitProtocol-testOutputFormatIntegration] INFO  
commit.AbstractCommitITest (AbstractCommitITest.java:loadSuccessFile(503)) - 
Loading committer success file 
s3a://stevel-ireland/test/ITestMagicCommitProtocol-testOutputFormatIntegration/_SUCCESS.
 Actual contents=
   {
 "name" : "org.apache.hadoop.fs.s3a.commit.files.SuccessData/1",
 "timestamp" : 1593791227415,
 "date" : "Fri Jul 03 16:47:07 BST 2020",
 "hostname" : "stevel-mbp15-13176.local",
 "committer" : "magic",
 "description" : "Task committer attempt_200707120821_0001_m_00_0",
   ...
 "diagnostics" : {
   "fs.s3a.authoritative.path" : "",
   "fs.s3a.metadatastore.impl" : 
"org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore",
   "fs.s3a.committer.magic.enabled" : "true",
   "fs.s3a.metadatastore.authoritative" : "false"
 },
 "filenames" : [ 
"/test/ITestMagicCommitProtocol-testOutputFormatIntegration/part-m-0" ],
 "iostatistics" : {
   "counters" : {
 "committer_bytes_committed" : 4,
 "committer_bytes_uploaded" : 0,
 "committer_commits_aborted" : 0,
 "committer_commits_completed" : 1,
 "committer_commits_created" : 0,
 "committer_commits_failed" : 0,
 "committer_commits_reverted" : 0,
 "committer_jobs_completed" : 1,
 "committer_jobs_failed" : 0,
 "committer_tasks_completed" : 1,
 "committer_tasks_failed" : 0,
 "stream_write_block_uploads" : 1,
 "stream_write_block_uploads_data_pending" : 0,
 "stream_write_bytes" : 4,
 "stream_write_exceptions" : 0,
 "stream_write_exceptions_completing_uploads" : 0,
 "stream_write_queue_duration" : 0,
 "stream_write_total_data" : 4,
 "stream_write_total_time" : 0
   },
   "gauges" : {
 "stream_write_block_uploads_data_pending" : 4,
 "stream_write_block_uploads_pending" : 0,
   },
   "minimums" : { },
   "maximums" : { },
   "meanStatistics" : { }
 }
   }
   
   ```
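   A hedged sketch of reading that summary back in a downstream check, 
assuming SuccessData.load keeps the signature used by 
AbstractCommitITest.loadSuccessFile above; the bucket and output path are 
hypothetical:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.commit.files.SuccessData;

public class ReadSuccessSummary {
  public static void main(String[] args) throws IOException {
    // Load the _SUCCESS job summary the committer wrote and inspect
    // the fields marshalled into it, as in the log excerpt above.
    Path success = new Path("s3a://bucket/output/_SUCCESS");
    FileSystem fs = success.getFileSystem(new Configuration());
    SuccessData data = SuccessData.load(fs, success);
    System.out.println(data.getCommitter()); // e.g. "magic"
  }
}
```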
   
   I'm in a good mood here. Time for others to look at.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-653229688


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
25 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 32s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 20s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  21m 15s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 21s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 54s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 42s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 51s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  21m 51s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 1965 unchanged - 1 
fixed = 1966 total (was 1966)  |
   | +1 :green_heart: |  compile  |  16m 30s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  16m 30s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1859 unchanged - 1 
fixed = 1860 total (was 1860)  |
   | -0 :warning: |  checkstyle  |   2m 46s |  root: The patch generated 38 new 
+ 190 unchanged - 22 fixed = 228 total (was 212)  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 11 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   1m  4s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09
 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 101 
unchanged - 0 fixed = 102 total (was 101)  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   3m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 19s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 30s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestLocalFileSystem |
   |   | hadoop.fs.statistics.TestDynamicIOStatistics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.a

[GitHub] [hadoop] mehakmeet commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-07-03 Thread GitBox


mehakmeet commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-653505328


   HADOOP-16852 (#1898, causing the conflict) and HADOOP-17065 (#2056) also 
need to go in.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-07-03 Thread GitBox


steveloughran commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-653483218


   I tried to cp into branch-3 and it didn't take. Is there some previous patch 
we need to merge in? I'd like both to stay 100% in sync here, so if there's 
something we need to pull in first, I'll do that



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-07-03 Thread GitBox


steveloughran commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-653482054


   LGTM.
   +1 from me...merging to trunk



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-07-03 Thread GitBox


steveloughran merged pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-07-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-647375271


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 55s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2076 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d26122b15123 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce1008fe61a |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/2/testReport/ |
   | Max. process+thread count | 448 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2076/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message

[GitHub] [hadoop] steveloughran commented on pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-07-03 Thread GitBox


steveloughran commented on pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113#issuecomment-653479534


   oh, and +1 pending that s3 endpoint declaration. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-07-03 Thread GitBox


steveloughran commented on pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113#issuecomment-653479139


   patch LGTM. Which endpoint (e.g. us-west-2) and what build CLI options did 
you use? 
   
   we don't need that much detail, though if tests are failing that's good to 
call out so you can get some assistance debugging, e.g.
   
   https://github.com/apache/hadoop/pull/2076#issuecomment-649564035



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-07-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113#issuecomment-652080912


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 11 unchanged - 0 fixed = 15 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  72m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2113 |
   | JIRA Issue | HADOOP-17105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bc2062ad8935 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8dc862d385 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/testReport/ |
   | Max. process+thread count | 459 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2113/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 ht

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-650416134







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org