[GitHub] [hadoop] jojochuang commented on pull request #1747: HDFS-15042 Add more tests for ByteBufferPositionedReadable.

2020-04-30 Thread GitBox


jojochuang commented on pull request #1747:
URL: https://github.com/apache/hadoop/pull/1747#issuecomment-622233978


   @sahilTakiar fyi



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17020) RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-04-30 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-17020:
--
Attachment: HADOOP-17020.1.patch

> RawFileSystem could localize default block size to avoid sync bottleneck in 
> config
> --
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-17020.1.patch, Screenshot 2020-04-29 at 5.24.53 
> PM.png, Screenshot 2020-05-01 at 7.12.06 AM.png
>
>
> RawLocalFileSystem could localize default block size to avoid sync bottleneck 
> with Configuration object. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666
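
A minimal sketch of the kind of change proposed: read the default block size once at initialization and serve it from a field, rather than hitting the shared Configuration (whose get() is synchronized) on every call. Class name, key name, and the Map standing in for Configuration are all illustrative, not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: cache the default block size at construction time so that
// getDefaultBlockSize() is a plain field read with no lock contention.
class CachedBlockSizeExample {
    static final long FALLBACK_BLOCK_SIZE = 32L * 1024 * 1024;
    private final long defaultBlockSize;

    CachedBlockSizeExample(Map<String, String> conf) {
        // one lookup here; a plain Map stands in for Hadoop's Configuration
        String v = conf.get("fs.local.block.size");
        this.defaultBlockSize =
            (v == null) ? FALLBACK_BLOCK_SIZE : Long.parseLong(v);
    }

    long getDefaultBlockSize() {
        return defaultBlockSize; // no synchronized Configuration access
    }
}
```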



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17020) RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-04-30 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097127#comment-17097127
 ] 

Rajesh Balamohan commented on HADOOP-17020:
---

Also found a similar issue in mkdirs.  !Screenshot 2020-05-01 at 7.12.06 AM.png! 

> RawFileSystem could localize default block size to avoid sync bottleneck in 
> config
> --
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: Screenshot 2020-04-29 at 5.24.53 PM.png, Screenshot 
> 2020-05-01 at 7.12.06 AM.png
>
>
> RawLocalFileSystem could localize default block size to avoid sync bottleneck 
> with Configuration object. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666






[jira] [Comment Edited] (HADOOP-17020) RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-04-30 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097127#comment-17097127
 ] 

Rajesh Balamohan edited comment on HADOOP-17020 at 5/1/20, 2:17 AM:


Also found similar kind of issue in mkdirs.  !Screenshot 2020-05-01 at 7.12.06 
AM.png|width=481,height=178!


was (Author: rajesh.balamohan):
Also found similar kind of issue in mkdirs.  !Screenshot 2020-05-01 at 7.12.06 
AM.png! 

> RawFileSystem could localize default block size to avoid sync bottleneck in 
> config
> --
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: Screenshot 2020-04-29 at 5.24.53 PM.png, Screenshot 
> 2020-05-01 at 7.12.06 AM.png
>
>
> RawLocalFileSystem could localize default block size to avoid sync bottleneck 
> with Configuration object. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666






[jira] [Updated] (HADOOP-17020) RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-04-30 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-17020:
--
Attachment: Screenshot 2020-05-01 at 7.12.06 AM.png

> RawFileSystem could localize default block size to avoid sync bottleneck in 
> config
> --
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: Screenshot 2020-04-29 at 5.24.53 PM.png, Screenshot 
> 2020-05-01 at 7.12.06 AM.png
>
>
> RawLocalFileSystem could localize default block size to avoid sync bottleneck 
> with Configuration object. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666






[GitHub] [hadoop] virajith commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

2020-04-30 Thread GitBox


virajith commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418375437



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
##
@@ -42,4 +42,11 @@
*/
   public static final URI VIEWFS_URI = URI.create("viewfs:///");
   public static final String VIEWFS_SCHEME = "viewfs";
+
+  public static final String VIEWFS_OVERLOAD_SCHEME_KEY =
+  "fs.viewfs.overload.scheme";
+  public static final String VIEWFS_OVERLOAD_SCHEME_DEFAULT = "hdfs";
+  public static final String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY =
+  "fs.viewfs.overload.scheme.target.%s.impl";
+  public static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";

Review comment:
   Why add this here? This is just used in tests right?
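
For context, the two `%s` pattern keys added in the diff expand per scheme via String.format. The constant values below are copied from the diff; the helper methods are hypothetical, for illustration only.

```java
// Shows how the per-scheme impl keys resolve for a concrete scheme.
class OverloadSchemeKeys {
    static final String TARGET_FS_IMPL_PATTERN =
        "fs.viewfs.overload.scheme.target.%s.impl";
    static final String FS_IMPL_PATTERN = "fs.%s.impl";

    // key naming the target file system impl for a given scheme
    static String targetImplKey(String scheme) {
        return String.format(TARGET_FS_IMPL_PATTERN, scheme);
    }

    // generic fs.<scheme>.impl key
    static String fsImplKey(String scheme) {
        return String.format(FS_IMPL_PATTERN, scheme);
    }
}
```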

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/**
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple

Review comment:
   nits:
   "object is" -> "objective here is to handle"
   "a multiple" -> multiple.
   

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -302,6 +320,11 @@ protected FileSystem getTargetFileSystem(final String settings,
 }
   }
 
+  protected void superFSInit(final URI theUri, final Configuration conf)

Review comment:
   javadoc for this method? This seems a bit hacky but I understand the 
need. I think initializeSuperFs is a slightly  better name but don't have a 
strong opinion here.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/**
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transpa

[jira] [Updated] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-04-30 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HADOOP-17024:
-
Description: 
As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
following scenarios when fallback is enabled.

*Behavior when fallback enabled:*

   Assume FS trees and mount mappings like below:

   mount link /a/b/c/d  → hdfs://nn1/a/b

   mount link /a/p/q/r  → hdfs://nn2/a/b

   fallback → hdfs://nn3/  $  /a/c
                                                 /x/z
 # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
 # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
 # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
Because it conflicts with the open(/x/y)
 # Create /y  : fails  - also fails when not using  fallback  - WORKS
 # Create /a/z : fails - also fails when not using  fallback - WORKS
 # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
WORKS

 

This Jira will fix issue #3. So, when fallback is enabled it should show a merged 
ls view with mount links + fallback root. (This will only be at root level.)

  was:
As part of the design doc, [~sanjay.radia] and discussed the following 
scenarios when fallback enabled.

 *Behavior when fallback enabled:*

   Assume FS trees and mount mappings like below:

   mount link /a/b/c/d  → hdfs://nn1/a/b

   mount link /a/p/q/r  → hdfs://nn2/a/b

   fallback → hdfs://nn3/  $  /a/c
                                                 /x/z
 # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
 # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
 # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
Because it conflicts with the open(/x/y)
 # Create /y  : fails  - also fails when not using  fallback  - WORKS
 # Create /a/z : fails - also fails when not using  fallback - WORKS
 # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
WORKS

 

This Jira will fix issue of #3. So, when fallback enabled it should show merged 
ls view with mount links + fallback root. ( this will only be at root level)


> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  $  /a/c
>                                                  /x/z
>  # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
>  # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
>  # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
> Because it conflicts with the open(/x/y)
>  # Create /y  : fails  - also fails when not using  fallback  - WORKS
>  # Create /a/z : fails - also fails when not using  fallback - WORKS
>  # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
> WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled it should show a 
> merged ls view with mount links + fallback root. (This will only be at root 
> level.)
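
A sketch of the root-level merge described in item #3: the listing of "/" becomes the union of the mount-link names and the fallback root's children, with mount links taking precedence over same-named fallback entries. Method names are illustrative, not the actual patch.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Merge mount-link names with fallback-root children for ls "/".
class RootListingMerge {
    static List<String> mergedRootListing(Collection<String> mountLinks,
                                          Collection<String> fallbackChildren) {
        // LinkedHashSet preserves order and drops duplicates, so a fallback
        // child shadowed by a mount link appears only once
        Set<String> merged = new LinkedHashSet<>(mountLinks);
        merged.addAll(fallbackChildren);
        return new ArrayList<>(merged);
    }
}
```

With the example tree above, merging mount links {/a} with fallback children {/a, /x} yields /a and /x, which is the expected ls / output.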






[jira] [Updated] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-04-30 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HADOOP-17024:
-
Description: 
As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
following scenarios when fallback is enabled.

*Behavior when fallback enabled:*

   Assume FS trees and mount mappings like below:

   mount link /a/b/c/d  → hdfs://nn1/a/b

   mount link /a/p/q/r  → hdfs://nn2/a/b

   fallback → hdfs://nn3/  $  /a/c
                                                 /x/z
 # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
 # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
 # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
Because it conflicts with the open(/x/y)
 # Create /y  : fails  - also fails when not using  fallback  - WORKS
 # Create /a/z : fails - also fails when not using  fallback - WORKS
 # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
WORKS

 

This Jira will fix issue #3. So, when fallback is enabled it should show a merged 
ls view with mount links + fallback root. (This will only be at root level.)

  was:
As part of the design doc HDFS-15289, [~sanjay.radia] and discussed the 
following scenarios when fallback enabled.

*Behavior when fallback enabled:*

   Assume FS trees and mount mappings like below:

   mount link /a/b/c/d  → hdfs://nn1/a/b

   mount link /a/p/q/r  → hdfs://nn2/a/b

   fallback → hdfs://nn3/  $  /a/c
                                                 /x/z
 # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
 # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
 # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
Because it conflicts with the open(/x/y)
 # Create /y  : fails  - also fails when not using  fallback  - WORKS
 # Create /a/z : fails - also fails when not using  fallback - WORKS
 # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
WORKS

 

This Jira will fix issue of #3. So, when fallback enabled it should show merged 
ls view with mount links + fallback root. ( this will only be at root level)


> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  $  /a/c
>                                                  /x/z
>  # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
>  # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
>  # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
> Because it conflicts with the open(/x/y)
>  # Create /y  : fails  - also fails when not using  fallback  - WORKS
>  # Create /a/z : fails - also fails when not using  fallback - WORKS
>  # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
> WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled it should show a 
> merged ls view with mount links + fallback root. (This will only be at root 
> level.)






[jira] [Created] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-04-30 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HADOOP-17024:


 Summary: ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root).
 Key: HADOOP-17024
 URL: https://issues.apache.org/jira/browse/HADOOP-17024
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 3.2.2
Reporter: Uma Maheswara Rao G


As part of the design doc, [~sanjay.radia] and I discussed the following 
scenarios when fallback is enabled.

 *Behavior when fallback enabled:*

   Assume FS trees and mount mappings like below:

   mount link /a/b/c/d  → hdfs://nn1/a/b

   mount link /a/p/q/r  → hdfs://nn2/a/b

   fallback → hdfs://nn3/  $  /a/c
                                                 /x/z
 # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
 # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
 # ls /  should list   /a  /x .Today this does not work and IT IS A BUG!!! 
Because it conflicts with the open(/x/y)
 # Create /y  : fails  - also fails when not using  fallback  - WORKS
 # Create /a/z : fails - also fails when not using  fallback - WORKS
 # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
WORKS

 

This Jira will fix issue #3. So, when fallback is enabled it should show a merged 
ls view with mount links + fallback root. (This will only be at root level.)






[GitHub] [hadoop] jojochuang commented on pull request #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor

2020-04-30 Thread GitBox


jojochuang commented on pull request #1951:
URL: https://github.com/apache/hadoop/pull/1951#issuecomment-622191486


   The cc warning looks unrelated. I filed a jira to fix that up: 
https://issues.apache.org/jira/browse/HDFS-15317
   
   I don't think we support openj9, but this is a good improvement anyway. I 
chatted with @sahilTakiar offline and he thinks this is good.






[GitHub] [hadoop] babsingh commented on pull request #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor

2020-04-30 Thread GitBox


babsingh commented on pull request #1951:
URL: https://github.com/apache/hadoop/pull/1951#issuecomment-622190741


   - **cc -1 ❌** ... Compilation warnings are unrelated to the proposed change.
   
   - **test4tests -1 ❌** ... Existing tests should already include coverage for 
the reported failure. OpenJ9 JVM needs to be used in order to reproduce the 
failure. Instructions to reproduce the failure: 
https://github.com/eclipse/openj9/issues/7752#issue-521732953. Manual steps 
taken to verify the patch: 
https://github.com/eclipse/openj9/issues/7752#issuecomment-612149993.
   
   @jojochuang Is the above justification sufficient to address the **💔 -1 
overall**? Is there a specific protocol to justify **-1 ❌** for each subsystem?






[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS

2020-04-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097003#comment-17097003
 ] 

Hudson commented on HADOOP-17011:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18204 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18204/])
HADOOP-17011. Tolerate leading and trailing spaces in fs.defaultFS. (liuml07: 
rev 263c76b678275dfff867415c71ba9dc00a9235ef)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsCommand.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* (edit) 
hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/ClusterSummarizer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeUtils.java


> Tolerate leading and trailing spaces in fs.defaultFS
> 
>
> Key: HADOOP-17011
> URL: https://issues.apache.org/jira/browse/HADOOP-17011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17011-001.patch, HADOOP-17011-002.patch, 
> HADOOP-17011-003.patch, HADOOP-17011-004.patch, HADOOP-17011-005.patch
>
>
> *Problem:*
> Currently, `getDefaultUri` is using `conf.get` to get the value of 
> `fs.defaultFS`, which means that the trailing whitespace after a valid URI 
> won’t be removed and could stop namenode and datanode from starting up.
>  
> *How to reproduce (Hadoop-2.8.5):*
> Set the configuration
> {code:java}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://localhost:9000 </value>
> </property>
> {code}
> In core-site.xml (there is a whitespace after 9000) and start HDFS.
> Namenode and datanode won’t start and the log message is:
> {code:java}
> 2020-04-23 11:09:48,198 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Illegal character in authority at index 
> 7: hdfs://localhost:9000 
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:897)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
> Caused by: java.net.URISyntaxException: Illegal character in authority at 
> index 7: hdfs://localhost:9000 
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.parseAuthority(URI.java:3186)
> at java.net.URI$Parser.parseHierarchical(URI.java:3097)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.<init>(URI.java:588)
> at java.net.URI.create(URI.java:850)
> ... 5 more
> {code}
>  
> *Solution:*
> Use `getTrimmed` instead of `get` for `fs.defaultFS`:
> {code:java}
> public static URI getDefaultUri(Configuration conf) {
>   URI uri =
> URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
>   if (uri.getScheme() == null) {
> throw new IllegalArgumentException("No scheme in default FS: " + uri);
>   }
>   return uri;
> }
> {code}
> I have submitted a patch for trunk about this.
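
The failure mode quoted above can be demonstrated in isolation: a trailing space after the port is an illegal character for java.net.URI, so the untrimmed fs.defaultFS value throws IllegalArgumentException, while the trimmed value parses cleanly. This standalone sketch is not part of the patch.

```java
import java.net.URI;

// Returns whether a string parses as a URI without throwing.
class DefaultFsTrimDemo {
    static boolean parses(String s) {
        try {
            URI.create(s);
            return true;
        } catch (IllegalArgumentException e) {
            // e.g. "Illegal character in authority" for a trailing space
            return false;
        }
    }
}
```

This is why switching getDefaultUri from conf.get to conf.getTrimmed is sufficient to fix the startup failure.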






[jira] [Updated] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS

2020-04-30 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-17011:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

+1

# The failing tests {{TestSnappyCompressorDecompressor}} and 
{{TestCompressorDecompressor}} are tracked by HADOOP-16768
# The failing test {{TestBalancer}} is tracked by HDFS-13975
# Findbugs etc. are not related to this patch.

Committed to {{trunk}} branch. Thanks for your contribution [~ctest.team]. 
Thanks for reviewing [~ayushtkn]

> Tolerate leading and trailing spaces in fs.defaultFS
> 
>
> Key: HADOOP-17011
> URL: https://issues.apache.org/jira/browse/HADOOP-17011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17011-001.patch, HADOOP-17011-002.patch, 
> HADOOP-17011-003.patch, HADOOP-17011-004.patch, HADOOP-17011-005.patch
>
>
> *Problem:*
> Currently, `getDefaultUri` is using `conf.get` to get the value of 
> `fs.defaultFS`, which means that the trailing whitespace after a valid URI 
> won’t be removed and could stop namenode and datanode from starting up.
>  
> *How to reproduce (Hadoop-2.8.5):*
> Set the configuration
> {code:java}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://localhost:9000 </value>
> </property>
> {code}
> In core-site.xml (there is a whitespace after 9000) and start HDFS.
> Namenode and datanode won’t start and the log message is:
> {code:java}
> 2020-04-23 11:09:48,198 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Illegal character in authority at index 
> 7: hdfs://localhost:9000 
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:897)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
> Caused by: java.net.URISyntaxException: Illegal character in authority at 
> index 7: hdfs://localhost:9000 
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.parseAuthority(URI.java:3186)
> at java.net.URI$Parser.parseHierarchical(URI.java:3097)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.<init>(URI.java:588)
> at java.net.URI.create(URI.java:850)
> ... 5 more
> {code}
>  
> *Solution:*
> Use `getTrimmed` instead of `get` for `fs.defaultFS`:
> {code:java}
> public static URI getDefaultUri(Configuration conf) {
>   URI uri =
> URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
>   if (uri.getScheme() == null) {
> throw new IllegalArgumentException("No scheme in default FS: " + uri);
>   }
>   return uri;
> }
> {code}
> I have submitted a patch for trunk about this.






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

2020-04-30 Thread GitBox


umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622103167


   Just a note: I have not verified nfly mount links in this patch. I will file 
another JIRA for nfly changes and tests.
Nfly uses FileSystem.get, so I should make nfly also use FsCreator to 
handle looping.






[GitHub] [hadoop] jojochuang commented on pull request #1986: HADOOP-17019. Declare ProtobufHelper a public API

2020-04-30 Thread GitBox


jojochuang commented on pull request #1986:
URL: https://github.com/apache/hadoop/pull/1986#issuecomment-62204


   Per discussion in the corresponding jira, closing the PR.






[jira] [Resolved] (HADOOP-17019) Declare ProtobufHelper a public API

2020-04-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17019.
--
Resolution: Not A Problem

Fair enough. I'll close this one. Thanks [~ste...@apache.org]!

> Declare ProtobufHelper a public API
> ---
>
> Key: HADOOP-17019
> URL: https://issues.apache.org/jira/browse/HADOOP-17019
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> HADOOP-16621 removed two public API methods:
> 1. o.a.h.security.token.Token(TokenProto tokenPB) --> replaced by 
> o.a.h.security.ipc.ProtobufHelper.tokenFromProto()
> 2. o.a.h.security.token.Token.toTokenProto() --> replaced by 
> o.a.h.security.ipc.ProtobufHelper.protoFromToken()
> Protobuf is declared private. Should we make it public now?






[GitHub] [hadoop] jojochuang commented on pull request #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support

2020-04-30 Thread GitBox


jojochuang commented on pull request #1967:
URL: https://github.com/apache/hadoop/pull/1967#issuecomment-622084796


   Triggering a rebuild: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/9/console






[GitHub] [hadoop] hadoop-yetus commented on pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


hadoop-yetus commented on pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991#issuecomment-622036011


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 14s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1991 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b695a7c7a2d1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6bdab3723ef |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/2/testReport/ |
   | Max. process+thread count | 343 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] hadoop-yetus commented on pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


hadoop-yetus commented on pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991#issuecomment-622013579


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  22m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  79m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1991 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 70da264622db 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6bdab3723ef |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/1/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1991/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] mehakmeet commented on a change in pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


mehakmeet commented on a change in pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991#discussion_r418149950



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Tests AzureBlobFileSystem Statistics.
+ */
+public class ITestAbfsStatistics extends AbstractAbfsIntegrationTest {
+
+  private static final int NUMBER_OF_OPS = 10;
+
+  public ITestAbfsStatistics() throws Exception {
+  }
+
+  /**
+   * Testing statistics by creating files and directories.
+   */
+  @Test
+  public void testCreateStatistics() throws IOException {
+describe("Testing counter values got by creating directories and files in"
++ " Abfs");
+
+AzureBlobFileSystem fs = getFileSystem();
+Path createFilePath = path(getMethodName());
+Path createDirectoryPath = path(getMethodName() + "Dir");
+
+Map<String, Long> metricMap = fs.getInstrumentation().toMap();
+
+/*
+ Test for initial values of create statistics ; getFileStatus is called
+ 1 time after Abfs initialisation.
+ */
+assertEquals("Mismatch in op_create", 0,
+(long) metricMap.get("op_create"));
+assertEquals("Mismatch in op_create_non_recursive", 0,
+(long) metricMap.get("op_create_non_recursive"));
+assertEquals("Mismatch in files_created", 0,
+(long) metricMap.get("files_created"));
+assertEquals("Mismatch in directories_created", 0,
+(long) metricMap.get("directories_created"));
+assertEquals("Mismatch in op_mkdirs", 0,
+(long) metricMap.get("op_mkdirs"));
+assertEquals("Mismatch in op_get_file_status", 1,
+(long) metricMap.get("op_get_file_status"));
+
+try {
+
+  fs.mkdirs(createDirectoryPath);
+  fs.createNonRecursive(createFilePath, FsPermission
+  .getDefault(), false, 1024, (short) 1, 1024, null);
+
+  metricMap = fs.getInstrumentation().toMap();
+  /*
+   Test of statistic values after creating a directory and a file ;
+   getFileStatus is called 1 time after creating file and 1 time at
+   time of initialising.
+   */
+  assertEquals("Mismatch in op_create", 1,
+  (long) metricMap.get("op_create"));
+  assertEquals("Mismatch in op_create_non_recursive", 1,
+  (long) metricMap.get("op_create_non_recursive"));
+  assertEquals("Mismatch in files_created", 1,
+  (long) metricMap.get("files_created"));
+  assertEquals("Mismatch in directories_created", 1,
+  (long) metricMap.get("directories_created"));
+  assertEquals("Mismatch in op_mkdirs", 1,
+  (long) metricMap.get("op_mkdirs"));
+  assertEquals("Mismatch in op_get_file_status", 2,
+  (long) metricMap.get("op_get_file_status"));
+
+} finally {
+  fs.close();

Review comment:
   Realized after committing that teardown is already doing this.








[GitHub] [hadoop] mehakmeet commented on a change in pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


mehakmeet commented on a change in pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991#discussion_r418142478



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -926,11 +962,23 @@ public void access(final Path path, final FsAction mode) 
throws IOException {
 }
   }
 
+  /**
+   * Incrementing exists() calls from superclass for Statistic collection.
+   * @param f source path
+   * @return true if the path exists

Review comment:
   '.' in the end.








[GitHub] [hadoop] mehakmeet commented on a change in pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


mehakmeet commented on a change in pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991#discussion_r418142248



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Tests AzureBlobFileSystem Statistics.
+ */
+public class ITestAbfsStatistics extends AbstractAbfsIntegrationTest {
+
+  private static final int NUMBER_OF_OPS = 10;
+
+  public ITestAbfsStatistics() throws Exception {
+  }
+
+  /**
+   * Testing statistics by creating files and directories.
+   */
+  @Test
+  public void testCreateStatistics() throws IOException {
+describe("Testing counter values got by creating directories and files in"
++ " Abfs");
+
+AzureBlobFileSystem fs = getFileSystem();
+Path createFilePath = path(getMethodName());
+Path createDirectoryPath = path(getMethodName() + "Dir");
+
+Map<String, Long> metricMap = fs.getInstrumentation().toMap();
+
+/*
+ Test for initial values of create statistics ; getFileStatus is called
+ 1 time after Abfs initialisation.
+ */
+assertEquals("Mismatch in op_create", 0,
+(long) metricMap.get("op_create"));
+assertEquals("Mismatch in op_create_non_recursive", 0,
+(long) metricMap.get("op_create_non_recursive"));
+assertEquals("Mismatch in files_created", 0,
+(long) metricMap.get("files_created"));
+assertEquals("Mismatch in directories_created", 0,
+(long) metricMap.get("directories_created"));
+assertEquals("Mismatch in op_mkdirs", 0,
+(long) metricMap.get("op_mkdirs"));
+assertEquals("Mismatch in op_get_file_status", 1,
+(long) metricMap.get("op_get_file_status"));
+
+try {
+
+  fs.mkdirs(createDirectoryPath);
+  fs.createNonRecursive(createFilePath, FsPermission
+  .getDefault(), false, 1024, (short) 1, 1024, null);
+
+  metricMap = fs.getInstrumentation().toMap();
+  /*
+   Test of statistic values after creating a directory and a file ;
+   getFileStatus is called 1 time after creating file and 1 time at
+   time of initialising.
+   */
+  assertEquals("Mismatch in op_create", 1,
+  (long) metricMap.get("op_create"));
+  assertEquals("Mismatch in op_create_non_recursive", 1,
+  (long) metricMap.get("op_create_non_recursive"));
+  assertEquals("Mismatch in files_created", 1,
+  (long) metricMap.get("files_created"));
+  assertEquals("Mismatch in directories_created", 1,
+  (long) metricMap.get("directories_created"));
+  assertEquals("Mismatch in op_mkdirs", 1,
+  (long) metricMap.get("op_mkdirs"));
+  assertEquals("Mismatch in op_get_file_status", 2,
+  (long) metricMap.get("op_get_file_status"));
+
+} finally {
+  fs.close();
+}
+
+//re-initialising Abfs to reset statistic values.
+fs.initialize(fs.getUri(), fs.getConf());
+
+try {
+  /*Creating 10 directories and files; Directories and files can't
+   be created with same name, hence  + i to give unique names.
+   */
+  for (int i = 0; i < NUMBER_OF_OPS; i++) {
+fs.mkdirs(path(getMethodName() + "Dir" + i));
+fs.createNonRecursive(path(getMethodName() + i),
+FsPermission.getDefault(), false, 1024, (short) 1,
+1024, null);
+  }
+
+  metricMap = fs.getInstrumentation().toMap();
+  /*
+   Test of statistics values after creating 10 directories and files;
+   getFileStatus is called 1 time at initialise() plus number of
+   times file is created.
+   */
+  assertEquals("Mismatch in op_create", NUMBER_OF_OPS,
+  (long) metricMap.get("op_create"));
+  assertEquals("Mismatch in op_create_non_recursive", NUMBER_OF_OPS,
+  (long) metricMap.get("op_create_non_recursive"));
+  assertEquals("Mismatch in files_created", NUMBER_OF_OPS,
+  (long) metricMap.get

[GitHub] [hadoop] mehakmeet opened a new pull request #1991: HADOOP-17016. Adding Common Counters in ABFS

2020-04-30 Thread GitBox


mehakmeet opened a new pull request #1991:
URL: https://github.com/apache/hadoop/pull/1991


   Common Counters to be Added in ABFS.
   
   test run: mvn -T 1C -Dparallel-tests=abfs clean verify
   Region: East US, West US
   
   UT:
   ```
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.016 
s - in org.apache.hadoop.fs.azurebfs.TestAbfsStatistics
   ```
   IT:
   ```
   [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsStatistics
   [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
68.642 s - in org.apache.hadoop.fs.azurebfs.ITestAbfsStatistics
   ```
   
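The counters these tests assert on follow a simple name-to-count map pattern. A minimal sketch of that pattern is below; the class and method names are hypothetical and stand in for the actual ABFS instrumentation, which the PR wires into the filesystem operations.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal counter holder mirroring the metricMap the tests
// read via fs.getInstrumentation().toMap(); not the real ABFS class.
public class OpCounters {
    private final Map<String, Long> counters = new HashMap<>();

    // Bump the named counter, creating it at 1 on first use.
    void increment(String name) {
        counters.merge(name, 1L, Long::sum);
    }

    // Return a snapshot copy, so callers can compare before/after values.
    Map<String, Long> toMap() {
        return new HashMap<>(counters);
    }
}
```

Each filesystem operation bumps its counter (for example increment("op_create") inside create()), and the tests compare toMap() snapshots taken before and after a batch of operations.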






[GitHub] [hadoop] amarnathkarthik commented on pull request #1918: HADOOP-16910: ABFS Streams to update FileSystem.Statistics counters on IO.

2020-04-30 Thread GitBox


amarnathkarthik commented on pull request #1918:
URL: https://github.com/apache/hadoop/pull/1918#issuecomment-621948519


   > @amarnathkarthik Can you tell me if you've changed any specific 
configurations and then run the tests or maybe share what you are using in your 
auth-keys.xml to run the tests?
   > Just want to pin-point why all your tests are failing because they are not 
failing for me.
   
   @mehakmeet There is no custom configuration change on the storage side; I ran 
with both hierarchical namespace enabled and disabled. Let me know if you 
need further details.






[jira] [Commented] (HADOOP-16957) NodeBase.normalize doesn't removing all trailing slashes.

2020-04-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096604#comment-17096604
 ] 

Hudson commented on HADOOP-16957:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18203 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18203/])
HADOOP-16957. NodeBase.normalize doesn't removing all trailing slashes. 
(ayushsaxena: rev 6bdab3723eff78c79aa48c24aad87373b983fe6c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestClusterTopology.java


> NodeBase.normalize doesn't removing all trailing slashes.
> -
>
> Key: HADOOP-16957
> URL: https://issues.apache.org/jira/browse/HADOOP-16957
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16957-01.patch
>
>
> As per javadoc 
> /** Normalize a path by stripping off any trailing {@link #PATH_SEPARATOR}
> But it removes only one.
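The fix described above amounts to looping until no trailing separator remains. The sketch below is a hypothetical standalone illustration of that idea, not the actual NodeBase patch.

```java
// Hypothetical sketch of stripping every trailing separator, in the spirit
// of the HADOOP-16957 fix; the real change is in NodeBase.java.
public class PathNormalize {
    static final String PATH_SEPARATOR_STR = "/";

    static String normalize(String path) {
        // Strip ALL trailing separators, not just the last one,
        // while leaving the bare root "/" intact.
        while (path.length() > 1 && path.endsWith(PATH_SEPARATOR_STR)) {
            path = path.substring(0, path.length() - 1);
        }
        return path;
    }
}
```

A single conditional strip would turn "/d1/r1///" into "/d1/r1//", which is the one-separator behavior the javadoc contradicts; the loop terminates only when the path is fully normalized.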






[GitHub] [hadoop] hadoop-yetus commented on pull request #1990: HADOOP-17018. Intermittent failing of ITestAbfsStreamStatistics in ABFS

2020-04-30 Thread GitBox


hadoop-yetus commented on pull request #1990:
URL: https://github.com/apache/hadoop/pull/1990#issuecomment-621894812


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  3s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 51s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  85m 54s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestClientThrottlingAnalyzer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1990/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1990 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 51d371e18d22 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5b45c53a4e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1990/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1990/1/testReport/ |
   | Max. process+thread count | 366 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1990/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Updated] (HADOOP-16957) NodeBase.normalize doesn't removing all trailing slashes.

2020-04-30 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16957:
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> NodeBase.normalize doesn't removing all trailing slashes.
> -
>
> Key: HADOOP-16957
> URL: https://issues.apache.org/jira/browse/HADOOP-16957
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16957-01.patch
>
>
> As per javadoc 
> /** Normalize a path by stripping off any trailing {@link #PATH_SEPARATOR}
> But it removes only one.






[jira] [Commented] (HADOOP-16957) NodeBase.normalize doesn't removing all trailing slashes.

2020-04-30 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096586#comment-17096586
 ] 

Ayush Saxena commented on HADOOP-16957:
---

Committed to trunk.
Thanx [~vinayakumarb] for the review!!!

> NodeBase.normalize doesn't removing all trailing slashes.
> -
>
> Key: HADOOP-16957
> URL: https://issues.apache.org/jira/browse/HADOOP-16957
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16957-01.patch
>
>
> As per javadoc 
> /** Normalize a path by stripping off any trailing {@link #PATH_SEPARATOR}
> But it removes only one.






[GitHub] [hadoop] hadoop-yetus commented on pull request #1989: HDFS-15313. Ensure inodes in active filesytem are not deleted during snapshot delete

2020-04-30 Thread GitBox


hadoop-yetus commented on pull request #1989:
URL: https://github.com/apache/hadoop/pull/1989#issuecomment-621868969


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 43s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  3s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 3 new + 218 unchanged - 0 fixed = 221 total (was 218)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 107m  2s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 204m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.TestRefreshCallQueue |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1989/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1989 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e5b15b11f2f0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5b45c53a4e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1989/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1989/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1989/1/testReport/ |
   | Max. process+thread count | 2914 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1989/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17022) Tune listFiles() api.

2020-04-30 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096536#comment-17096536
 ] 

Mukund Thakur commented on HADOOP-17022:


Yes, sure. I was thinking the same.

> Tune listFiles() api.
> -
>
> Key: HADOOP-17022
> URL: https://issues.apache.org/jira/browse/HADOOP-17022
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The optimisation done for listLocatedStatus() in
> https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the
> listFiles() API as well.
> This will reduce the number of remote calls for directory listings.
>  
> CC [~ste...@apache.org] [~shwethags]






[jira] [Commented] (HADOOP-17020) RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-04-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096535#comment-17096535
 ] 

Steve Loughran commented on HADOOP-17020:
-

If it's exists() calls that are the bottleneck, raw local could just implement
that, plus isFile()/isDirectory(), and skip the whole FileStatus creation.
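A sketch of that idea, using plain java.io.File so it stays self-contained; the real change would be overrides on RawLocalFileSystem, and the class here is purely illustrative:

```java
// Illustrative only: answer exists()/isFile()/isDirectory() with a single
// stat-style call on java.io.File, with no FileStatus object constructed.
// This is NOT the actual RawLocalFileSystem code.
import java.io.File;

public class LocalExistsSketch {
    public boolean exists(String path) {
        return new File(path).exists();        // one stat, no FileStatus
    }

    public boolean isFile(String path) {
        return new File(path).isFile();
    }

    public boolean isDirectory(String path) {
        return new File(path).isDirectory();
    }

    public static void main(String[] args) {
        LocalExistsSketch fs = new LocalExistsSketch();
        System.out.println(fs.isDirectory(System.getProperty("java.io.tmpdir")));
    }
}
```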

> RawFileSystem could localize default block size to avoid sync bottleneck in 
> config
> --
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: Screenshot 2020-04-29 at 5.24.53 PM.png
>
>
> RawLocalFileSystem could localize default block size to avoid sync bottleneck 
> with Configuration object. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666
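A minimal sketch of the localization the quoted description proposes: read the value from the (synchronized) Configuration once at initialize() time and serve later calls from a plain field. Names here are illustrative, not the actual RawLocalFileSystem code:

```java
// Sketch of the proposed fix: cache the default block size in a field so the
// hot path never re-enters the synchronized Configuration lookup.
// readFromConfig() stands in for Configuration.getLong(...).
public class BlockSizeCacheSketch {
    private long defaultBlockSize;

    // stand-in for the synchronized Configuration accessor (the bottleneck)
    synchronized long readFromConfig() {
        return 32 * 1024 * 1024;
    }

    void initialize() {
        defaultBlockSize = readFromConfig();  // hit the config exactly once
    }

    long getDefaultBlockSize() {
        return defaultBlockSize;              // lock-free fast path
    }

    public static void main(String[] args) {
        BlockSizeCacheSketch fs = new BlockSizeCacheSketch();
        fs.initialize();
        System.out.println(fs.getDefaultBlockSize());  // prints 33554432
    }
}
```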






[jira] [Commented] (HADOOP-17021) Add concat fs command

2020-04-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096534#comment-17096534
 ] 

Hadoop QA commented on HADOOP-17021:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
12s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 36s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
50s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
|   | hadoop.fs.shell.TestFsShellConcat |
|   | hadoop.io.compress.TestCompressorDecompressor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16920/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17021 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001711/HADOOP-17021.001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 77bb413b8a14 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / b5b45c53a4e |
| Default Java | Private Build-1.8.0_

[jira] [Commented] (HADOOP-17016) Adding Common Counters in ABFS

2020-04-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096533#comment-17096533
 ] 

Steve Loughran commented on HADOOP-17016:
-

I'm going to put all of these as constants in the IOStatistics patch so they 
have common names

> Adding Common Counters in ABFS
> --
>
> Key: HADOOP-17016
> URL: https://issues.apache.org/jira/browse/HADOOP-17016
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> Common Counters to be added to ABFS:
> |OP_CREATE|
> |OP_OPEN|
> |OP_GET_FILE_STATUS|
> |OP_APPEND|
> |OP_CREATE_NON_RECURSIVE|
> |OP_DELETE|
> |OP_EXISTS|
> |OP_GET_DELEGATION_TOKEN|
> |OP_LIST_STATUS|
> |OP_MKDIRS|
> |OP_RENAME|
> |DIRECTORIES_CREATED|
> |DIRECTORIES_DELETED|
> |FILES_CREATED|
> |FILES_DELETED|
> |ERROR_IGNORED|
>  propose:
>  * Have an enum class to define all the counters.
>  * Have an Instrumentation class for making a MetricRegistry and adding all 
> the counters.
>  * Incrementing the counters in AzureBlobFileSystem.
>  * Integration and Unit tests to validate the counters.






[jira] [Commented] (HADOOP-17019) Declare ProtobufHelper a public API

2020-04-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096529#comment-17096529
 ] 

Steve Loughran commented on HADOOP-17019:
-

Tez should switch to using Writables and avoid the helper, as that uses the
shaded stuff. If all they want is an opaque token <--> byte[] marshalling
operation, we could add that and backport it; but if they use Writable in their
own code it will work with all older releases already.

> Declare ProtobufHelper a public API
> ---
>
> Key: HADOOP-17019
> URL: https://issues.apache.org/jira/browse/HADOOP-17019
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> HADOOP-16621 removed two public API methods:
> 1. o.a.h.security.token.Token(TokenProto tokenPB) --> replaced by 
> o.a.h.security.ipc.ProtobufHelper.tokenFromProto()
> 2. o.a.h.security.token.Token.toTokenProto() --> replaced by 
> o.a.h.security.ipc.ProtobufHelper.protoFromToken()
> Protobuf is declared private. Should we make it public now?
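For illustration, the Writable-based marshalling mentioned in the comment above can be sketched without any Hadoop dependency. TokenLike is a hypothetical stand-in for o.a.h.security.token.Token, which exposes the same write()/readFields() contract:

```java
// Sketch of token <--> byte[] marshalling via the Writable contract.
// TokenLike is a made-up stand-in; only the pattern matters here.
import java.io.*;

public class WritableMarshalSketch {
    static class TokenLike {
        byte[] identifier;
        byte[] password;

        void write(DataOutput out) throws IOException {
            out.writeInt(identifier.length);
            out.write(identifier);
            out.writeInt(password.length);
            out.write(password);
        }

        void readFields(DataInput in) throws IOException {
            identifier = new byte[in.readInt()];
            in.readFully(identifier);
            password = new byte[in.readInt()];
            in.readFully(password);
        }
    }

    /** token -> byte[] using only the Writable contract. */
    static byte[] toBytes(TokenLike t) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            t.write(new DataOutputStream(bos));
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** byte[] -> token, the inverse operation. */
    static TokenLike fromBytes(byte[] raw) {
        try {
            TokenLike t = new TokenLike();
            t.readFields(new DataInputStream(new ByteArrayInputStream(raw)));
            return t;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        TokenLike t = new TokenLike();
        t.identifier = "id".getBytes();
        t.password = "pw".getBytes();
        TokenLike back = fromBytes(toBytes(t));
        System.out.println(new String(back.identifier)); // prints "id"
    }
}
```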






[jira] [Commented] (HADOOP-17021) Add concat fs command

2020-04-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096524#comment-17096524
 ] 

Steve Loughran commented on HADOOP-17021:
-

Makes sense.
1. Can you do this as a GitHub PR?
2. It will need some docs, and probably good handling of the case "destFS doesn't
support concat".
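Point 2 can be sketched roughly as follows: FileSystem.concat() is unsupported by default and throws UnsupportedOperationException, so the shell command can catch that and fail with a clear message. The helper below is a self-contained illustration (the BiConsumer stands in for the filesystem call), not the actual FsShell code:

```java
// Sketch of graceful degradation when the destination FS lacks concat().
// concatFn stands in for FileSystem.concat(target, sources).
import java.util.function.BiConsumer;

public class ConcatFallbackSketch {
    static String runConcat(BiConsumer<String, String[]> concatFn,
                            String target, String... sources) {
        try {
            concatFn.accept(target, sources);
            return "OK";
        } catch (UnsupportedOperationException e) {
            // default FileSystem.concat() behaviour: report, don't stack-trace
            return "concat: target filesystem does not support concat";
        }
    }

    public static void main(String[] args) {
        BiConsumer<String, String[]> unsupported = (t, s) -> {
            throw new UnsupportedOperationException();
        };
        System.out.println(runConcat(unsupported, "/dst", "/a", "/b"));
    }
}
```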

> Add concat fs command
> -
>
> Key: HADOOP-17021
> URL: https://issues.apache.org/jira/browse/HADOOP-17021
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HADOOP-17021.001.patch
>
>
> We should add a concat fs command for ease of use. It concatenates existing 
> source files into the target file using FileSystem.concat().






[jira] [Commented] (HADOOP-17022) Tune listFiles() api.

2020-04-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096523#comment-17096523
 ] 

Steve Loughran commented on HADOOP-17022:
-

Given this and HADOOP-17023 are so similar, you can merge them for a
shared dev/test/review process.

> Tune listFiles() api.
> -
>
> Key: HADOOP-17022
> URL: https://issues.apache.org/jira/browse/HADOOP-17022
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The optimisation done for listLocatedStatus() in
> https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the
> listFiles() API as well.
> This will reduce the number of remote calls for directory listings.
>  
> CC [~ste...@apache.org] [~shwethags]






[GitHub] [hadoop] mehakmeet commented on pull request #1918: HADOOP-16910: ABFS Streams to update FileSystem.Statistics counters on IO.

2020-04-30 Thread GitBox


mehakmeet commented on pull request #1918:
URL: https://github.com/apache/hadoop/pull/1918#issuecomment-621815771


   PR #1990 has been opened regarding this and contains the reasoning behind 
the intermittent test failures.






[GitHub] [hadoop] mehakmeet opened a new pull request #1990: HADOOP-17018. Intermittent failing of ITestAbfsStreamStatistics in ABFS

2020-04-30 Thread GitBox


mehakmeet opened a new pull request #1990:
URL: https://github.com/apache/hadoop/pull/1990


   In some cases the ABFS prefetch thread runs in the background, returns some 
bytes from the buffer, and records an extra readOp. This makes the readOps 
value arbitrary and causes intermittent failures; readOps values of 2 or 3 are 
seen in different setups.
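   One way to express the fix, assuming the values observed above (2 and 3) are the full range: assert on a tolerated interval instead of an exact readOps count. This is a standalone sketch, not the actual ITestAbfsStreamStatistics code:

```java
// Sketch: tolerate the extra readOp the prefetch thread may record.
// Bounds 2..3 are taken from the observations in this PR description.
public class ReadOpsRangeSketch {
    static void assertReadOpsInRange(long readOps) {
        if (readOps < 2 || readOps > 3) {
            throw new AssertionError("unexpected readOps: " + readOps);
        }
    }

    public static void main(String[] args) {
        assertReadOpsInRange(2);
        assertReadOpsInRange(3);   // both observed values pass
        System.out.println("ok"); // prints "ok"
    }
}
```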
   
   test runs: 
   - mvn -T 1C -Dparallel-tests=abfs clean verify
   - mvn -T 1C -Dparallel-tests=abfs clean verify -Dtest=none 
-Dit.test=ITestAbfsStreamStatistics
   
   Region: East US, West US
   
   ```
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
165.964 s - in org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
   [INFO]
   [INFO]
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(integration-test-abfs-parallel-classes) @ hadoop-azure ---
   [INFO]
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
256.797 s - in org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
   [INFO]
   [INFO]
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-azure 
---
   [INFO]
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:verify 
(integration-test-abfs-parallel-classesAndMethods) @ hadoop-azure ---
   [INFO]
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:verify 
(integration-test-abfs-parallel-classes) @ hadoop-azure ---
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  07:14 min (Wall Clock)
   [INFO] Finished at: 2020-04-30T18:01:26+05:30
    ```
   






[jira] [Created] (HADOOP-17023) Tune listStatus() api of s3a.

2020-04-30 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17023:
--

 Summary: Tune listStatus() api of s3a.
 Key: HADOOP-17023
 URL: https://issues.apache.org/jira/browse/HADOOP-17023
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.1
Reporter: Mukund Thakur
Assignee: Mukund Thakur


The optimisation done for listLocatedStatus() in
https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the
listStatus() API as well.

This will reduce the number of remote calls for directory listings.

 

CC [~ste...@apache.org] [~shwethags]






[jira] [Created] (HADOOP-17022) Tune listFiles() api.

2020-04-30 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17022:
--

 Summary: Tune listFiles() api.
 Key: HADOOP-17022
 URL: https://issues.apache.org/jira/browse/HADOOP-17022
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.1
Reporter: Mukund Thakur
Assignee: Mukund Thakur


The optimisation done for listLocatedStatus() in
https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the
listFiles() API as well.

This will reduce the number of remote calls for directory listings.

 

CC [~ste...@apache.org] [~shwethags]






[jira] [Updated] (HADOOP-17021) Add concat fs command

2020-04-30 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-17021:
-
Attachment: HADOOP-17021.001.patch
Status: Patch Available  (was: Open)

> Add concat fs command
> -
>
> Key: HADOOP-17021
> URL: https://issues.apache.org/jira/browse/HADOOP-17021
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HADOOP-17021.001.patch
>
>
> We should add a concat fs command for ease of use. It concatenates existing 
> source files into the target file using FileSystem.concat().






[jira] [Created] (HADOOP-17021) Add concat fs command

2020-04-30 Thread Jinglun (Jira)
Jinglun created HADOOP-17021:


 Summary: Add concat fs command
 Key: HADOOP-17021
 URL: https://issues.apache.org/jira/browse/HADOOP-17021
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jinglun


We should add a concat fs command for ease of use. It concatenates existing 
source files into the target file using FileSystem.concat().






[jira] [Assigned] (HADOOP-17021) Add concat fs command

2020-04-30 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun reassigned HADOOP-17021:


Assignee: Jinglun

> Add concat fs command
> -
>
> Key: HADOOP-17021
> URL: https://issues.apache.org/jira/browse/HADOOP-17021
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
>
> We should add a concat fs command for ease of use. It concatenates existing 
> source files into the target file using FileSystem.concat().






[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

2020-04-30 Thread GitBox


hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-621756333


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 18s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 10s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 18s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 41s |  root: The patch generated 13 new 
+ 130 unchanged - 1 fixed = 143 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 10 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 12s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  93m 11s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 220m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.TestRefreshCallQueue |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 35e0641a50f8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5b45c53a4e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/testReport/ |
   | Max. process+thread count | 5007 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] bshashikant commented on pull request #1989: HDFS-15313. Ensure inodes in active filesytem are not deleted during snapshot delete

2020-04-30 Thread GitBox


bshashikant commented on pull request #1989:
URL: https://github.com/apache/hadoop/pull/1989#issuecomment-621752400


   /retest






[GitHub] [hadoop] bshashikant opened a new pull request #1989: HDFS-15313. Ensure inodes in active filesytem are not deleted during snapshot delete

2020-04-30 Thread GitBox


bshashikant opened a new pull request #1989:
URL: https://github.com/apache/hadoop/pull/1989


   
   
   






[GitHub] [hadoop] umamaheswararao opened a new pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

2020-04-30 Thread GitBox


umamaheswararao opened a new pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988


   https://issues.apache.org/jira/browse/HDFS-15305
   


