[jira] [Commented] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468341#comment-16468341
 ] 

Hudson commented on HDDS-33:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14146 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14146/])
HDDS-33. Ozone : Fix the test logic in 
TestKeySpaceManager#testDeleteKey (msingh: rev 
809135082a04208f586ba7fbce705668eb559007)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java


> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-33.01.patch, HDFS-13454-HDFS-7240.000.patch, 
> HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed, as shown here:
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.
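
As a hedged illustration of the needed restructuring (a minimal JUnit 4 sketch 
reusing the names from the quoted snippet, not the committed patch): catching 
each expected IOException explicitly lets every subsequent statement run.

{code:java}
// Catch each expected IOException instead of using a single ExpectedException
// rule, so the later statements and the metrics assertion still execute.
try {
  cluster.getStorageContainerManager().getBlockLocations(keys);
  Assert.fail("Expected IOException for deleted block keys");
} catch (IOException e) {
  Assert.assertTrue(e.getMessage().contains("Specified block key does not exist"));
}

// Delete the key again to verify that deleting a non-existing key fails.
try {
  storageHandler.deleteKey(keyArgs);
  Assert.fail("Expected IOException for a non-existing key");
} catch (IOException e) {
  Assert.assertTrue(e.getMessage().contains("KEY_NOT_FOUND"));
}
Assert.assertEquals(1 + numKeyDeleteFails, ksmMetrics.getNumKeyDeletesFails());
{code}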



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13428:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Remove LinkedList From StateStoreFileImpl.java
> ---
>
> Key: HDFS-13428
> URL: https://issues.apache.org/jira/browse/HDFS-13428
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13428.1.patch
>
>
> Replace the {{LinkedList}} with an {{ArrayList}} implementation in the 
> StateStoreFileImpl class.  This is especially advantageous because we can 
> pre-allocate the internal array so that no resizing copy occurs.  {{ArrayList}} 
> is faster for iterations and requires less memory than {{LinkedList}}.
> {code:java}
>   protected List<String> getChildren(String path) {
> List<String> ret = new LinkedList<>();
> File dir = new File(path);
> File[] files = dir.listFiles();
> if (files != null) {
>   for (File file : files) {
> String filename = file.getName();
> ret.add(filename);
>   }
> }
> return ret;
>   }
> {code}
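
For reference, a sketch of the proposed replacement (assuming the usual 
java.util imports): File#listFiles() reports the entry count up front, so the 
ArrayList can be pre-sized.

{code:java}
protected List<String> getChildren(String path) {
  File dir = new File(path);
  File[] files = dir.listFiles();
  if (files == null) {
    return new ArrayList<>();
  }
  // Pre-size the backing array to avoid both resizing copies and
  // LinkedList's per-node object overhead.
  List<String> ret = new ArrayList<>(files.length);
  for (File file : files) {
    ret.add(file.getName());
  }
  return ret;
}
{code}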



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13508) RBF: Normalize paths (automatically) when adding, updating, removing or listing mount table entries

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13508:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Normalize paths (automatically) when adding, updating, removing or 
> listing mount table entries
> ---
>
> Key: HDFS-13508
> URL: https://issues.apache.org/jira/browse/HDFS-13508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ekanth S
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13508.001.patch, HDFS-13508.002.patch, 
> HDFS-13508.003.patch
>
>
> me@gateway-hawaii-all:/mnt/host/bin$ hdfs dfsrouteradmin -ls /home/move 
> Mount Table Entries:
> Source Destinations Owner Group Mode 
> /home/move hdfs-oahu->/home/move me hadoop rwxr-xr-x
> me@gateway-hawaii-all:/mnt/host/bin$ hdfs dfsrouteradmin -ls /home/move/
> Mount Table Entries:
> Source Destinations Owner Group Mode
> me@gateway-hawaii-all:/mnt/host/bin$ hdfs dfsrouteradmin -rm /home/move/
> Cannot remove mount point /home/move/
> me@gateway-hawaii-all:/mnt/host/bin$ hdfs dfsrouteradmin -add /home/move/ 
> hdfs-oahu /home/move/ -readonly
> Cannot add mount point /home/move/
> The trailing slash '/' should be normalized away before calling the API from 
> the CLI.
> Note: the add command fails with a terminating '/' when the entry already 
> exists (it compares the non-normalized value with the normalized value in the 
> mount table). Adding a new mount point with '/' at the end works because the 
> CLI normalizes the mount path before calling the API.
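
A minimal sketch of that normalization (the helper name is hypothetical; the 
actual patch may normalize via Hadoop's Path instead):

{code:java}
/** Strip a trailing '/' so "/home/move/" and "/home/move" resolve to the
 *  same mount table entry; the root "/" is left untouched. */
static String normalizeMountPath(String path) {
  if (path.length() > 1 && path.endsWith("/")) {
    return path.substring(0, path.length() - 1);
  }
  return path;
}
{code}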



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13326:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside 
> the add operation, which has some limitations (e.g., it cannot update 
> "readonly" or remove a destination).  Given that the RPC layer already 
> separates the add and update operations, it would be better to do the same at 
> the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13499) RBF: Show disabled name services in the UI

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13499:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Show disabled name services in the UI
> --
>
> Key: HDFS-13499
> URL: https://issues.apache.org/jira/browse/HDFS-13499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13499.000.patch, disabledUI.png
>
>
> HDFS-13484 exposes the disabled name services. This JIRA should show them in 
> the Web UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13402:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Fix  java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch, 
> HDFS-13402.003.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The common
>  * implementation uses HDFS as a backend. The path can be specified setting
>  * dfs.federation.router.driver.fs.path=hdfs://host:port/path/to/store.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13045:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error message currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be better 
> to map the path back to the original mount path.
>  
>  
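
As a hedged illustration of that idea (the helper below is hypothetical, not 
actual Router code): rewrite the subcluster path in the exception message 
before returning it to the client.

{code:java}
/** Map a subcluster path in an error message back to the mount path,
 *  e.g. "/c/d" -> "/a/b", so users see the path they actually used. */
static IOException rewriteErrorPath(IOException e, String remotePath,
    String mountPath) {
  String msg = e.getMessage();
  if (msg != null && msg.contains(remotePath)) {
    return new IOException(msg.replace(remotePath, mountPath), e);
  }
  return e;
}
{code}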



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13384:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch, HDFS-13384.003.patch, HDFS-13384.004.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468333#comment-16468333
 ] 

genericqa commented on HDFS-13537:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
57s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13537 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922583/HDFS-13537.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e888bcc8d44d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8981674 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24158/testReport/ |
| Max. process+thread count | 684 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24158/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> 

[jira] [Updated] (HDFS-13410) RBF: Support federation with no subclusters

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13410:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Support federation with no subclusters
> ---
>
> Key: HDFS-13410
> URL: https://issues.apache.org/jira/browse/HDFS-13410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13410.000.patch, HDFS-13410.001.patch, 
> HDFS-13410.002.patch
>
>
> If the federation has no subclusters, the logs show long stack traces. Even 
> though this is not a regular setup for RBF, we should emit a concise log 
> message instead.
> An example:
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.LinkedList.checkElementIndex(LinkedList.java:555)
>   at java.util.LinkedList.get(LinkedList.java:476)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1028)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDatanodeReport(RouterRpcServer.java:1264)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics.getNodeUsage(FederationMetrics.java:424)
> {code}
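
A sketch of the kind of guard that would replace the stack trace with one 
concise message ('locations', 'method' and the SLF4J-style LOG are 
assumptions, not the committed patch):

{code:java}
// Fail fast with a readable message instead of letting an empty list
// surface as an IndexOutOfBoundsException from LinkedList.get(0).
if (locations == null || locations.isEmpty()) {
  LOG.warn("Cannot invoke {}: federation has no subclusters", method);
  throw new IOException("No namenodes available to invoke " + method);
}
{code}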



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13466) RBF: Add more router-related information to the UI

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13466:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Add more router-related information to the UI
> --
>
> Key: HDFS-13466
> URL: https://issues.apache.org/jira/browse/HDFS-13466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13466.001.patch, pic.png
>
>
> Currently, the Summary section of the NameNode UI includes the following 
> information:
> {noformat}
> Security is off.
> Safemode is off.
>  files and directories, * blocks =  total filesystem object(s).
> Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
> Non Heap Memory used  MB of  MB Commited Non Heap Memory. Max Non 
> Heap Memory is .
> {noformat}
> We could add similar information for the Router, for better visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13386:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, 
> HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because getMountPointDates is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
> Map<String, Long> ret = new TreeMap<>();
> // TODO add when we have a Mount Table
> return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13478) RBF: Disabled Nameservice store API

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13478:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Disabled Nameservice store API
> ---
>
> Key: HDFS-13478
> URL: https://issues.apache.org/jira/browse/HDFS-13478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13478.000.patch, HDFS-13478.001.patch, 
> HDFS-13478.002.patch, HDFS-13478.003.patch, HDFS-13478.004.patch, 
> HDFS-13478.005.patch
>
>
> We have a subcluster in our federation that is for testing and is 
> misbehaving. This has a negative impact on the performance of operations 
> that go to every subcluster (e.g., renewLease() or setSafeMode()).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-33:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution [~shashikant]. I have committed this to trunk.

> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-33.01.patch, HDFS-13454-HDFS-7240.000.patch, 
> HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed, as shown here:
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13488:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Reject requests when a Router is overloaded
> 
>
> Key: HDFS-13488
> URL: https://issues.apache.org/jira/browse/HDFS-13488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13488.000.patch, HDFS-13488.001.patch, 
> HDFS-13488.002.patch, HDFS-13488.003.patch, HDFS-13488.004.patch
>
>
> A Router might be overloaded when handling special cases (e.g., a slow 
> subcluster). The Router could reject the requests and the client could retry 
> with another Router. We should leverage the Standby mechanism for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13525) RBF: Add unit test TestStateStoreDisabledNameservice

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13525:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Add unit test TestStateStoreDisabledNameservice
> 
>
> Key: HDFS-13525
> URL: https://issues.apache.org/jira/browse/HDFS-13525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13525.001.patch
>
>
> Add a unit test for the DisabledNameservice store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13503) Fix TestFsck test failures on Windows

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13503:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> Fix TestFsck test failures on Windows
> -
>
> Key: HDFS-13503
> URL: https://issues.apache.org/jira/browse/HDFS-13503
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13503-branch-2.000.patch, 
> HDFS-13503-branch-2.001.patch, HDFS-13503.000.patch, HDFS-13503.001.patch
>
>
> The test failures on Windows are caused by the same reason as HDFS-13336; a 
> similar fix is needed for TestFsck, based on HDFS-13408.
> MiniDFSCluster also needs a small fix for the getStorageDir() interface, 
> which should use determineDfsBaseDir() to get the correct path of the data 
> directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13283:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> Percentage based Reserved Space Calculation for DataNode
> 
>
> Key: HDFS-13283
> URL: https://issues.apache.org/jira/browse/HDFS-13283
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13283.000.patch, HDFS-13283.001.patch, 
> HDFS-13283.002.patch, HDFS-13283.003.patch, HDFS-13283.004.patch, 
> HDFS-13283.005.patch, HDFS-13283.006.patch, HDFS-13283.007.patch, 
> HDFS-13283_branch-2.000.patch, HDFS-13283_branch-3.0.000.patch
>
>
> Currently, the only way to configure reserved disk space for non-HDFS data on 
> a DataNode is a constant value via {{dfs.datanode.du.reserved}}. This can be 
> an issue in heterogeneous clusters where the sizes of DNs differ. The 
> proposed solution is to allow percentage-based configuration (and 
> combinations of the two):
>  # ABSOLUTE
>  ** based on absolute number of reserved space
>  # PERCENTAGE
>  ** based on percentage of total capacity in the storage
>  # CONSERVATIVE
>  ** calculates both of the above and takes the one that will yield more 
> reserved space
>  # AGGRESSIVE
>  ** calculates both of the above and takes the one that will yield less 
> reserved space (see the sketch below)
>  
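
A hedged sketch of those four semantics; 'conf', 'capacity' (bytes) and 
'percentage' are assumed to be in scope, and only {{dfs.datanode.du.reserved}} 
is an existing property:

{code:java}
long reservedAbsolute = conf.getLong("dfs.datanode.du.reserved", 0); // ABSOLUTE
long reservedPercent  = (long) (capacity * percentage / 100.0);      // PERCENTAGE
long conservative = Math.max(reservedAbsolute, reservedPercent); // more reserved
long aggressive   = Math.min(reservedAbsolute, reservedPercent); // less reserved
{code}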



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468293#comment-16468293
 ] 

Xiao Liang edited comment on HDFS-13537 at 5/9/18 4:33 AM:
---

Thanks [~elgoiri], sure, please help take a look at [^HDFS-13537.001.patch] 
with the variable extracted.

In the test result, the failed cases do not seem related to the patch; they 
don't call the method changed in the patch.


was (Author: surmountian):
Thanks [~elgoiri], sure, please help take a look at [^HDFS-13537.001.patch] 
with variable extracted.

In the test result, the failed cases seem not related with the the patch, they 
don't call the method changed in the patch.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> Since the path from getTestRootDir() is a relative path on Windows, the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.
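
A sketch of one way to keep the URI well-formed (hedged; the committed patch 
may differ): insert the missing "/" only when the test root is not already 
absolute.

{code:java}
final String root = helper.getTestRootDir().toString();
// Add the "/" that a relative (e.g. Windows) root would otherwise lack
// between "://file" and the path.
final String prefix = root.startsWith("/") ? "://file" : "://file/";
final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + prefix +
    new Path(root, "test.jks").toUri();
{code}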



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13484) RBF: Disable Nameservices from the federation

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13484:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Disable Nameservices from the federation
> -
>
> Key: HDFS-13484
> URL: https://issues.apache.org/jira/browse/HDFS-13484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13484.000.patch, HDFS-13484.001.patch, 
> HDFS-13484.002.patch, HDFS-13484.003.patch, HDFS-13484.004.patch, 
> HDFS-13484.005.patch, HDFS-13484.006.patch, HDFS-13484.007.patch, 
> HDFS-13484.008.patch, HDFS-13484.009.patch
>
>
> HDFS-13478 introduced the Decommission store. We should disable access to 
> decommissioned subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13509:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch, HDFS-13509.002.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces a 
> file while the source is still open as an input stream, which fails and 
> throws an exception on Windows. This is the cause of the unit test case 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded failing on 
> Windows.
> Other test cases of TestFileAppend fail randomly on Windows due to sharing 
> the same test folder; the solution is to use a randomized base dir for 
> MiniDFSCluster via HDFS-13408.
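
For context, a hedged sketch of the Windows-safe pattern ('file' is the 
replica file; this is not the committed patch): the input stream is closed by 
try-with-resources before the original file is replaced, since Windows cannot 
replace a file that still has an open handle.

{code:java}
File tmpFile = new File(file.getParentFile(), file.getName() + ".tmp");
// Copy with both streams scoped so they are closed before the replace.
try (FileInputStream in = new FileInputStream(file);
     FileOutputStream out = new FileOutputStream(tmpFile)) {
  IOUtils.copyBytes(in, out, 16 * 1024);
}
if (!file.delete() || !tmpFile.renameTo(file)) {
  throw new IOException("Failed to replace " + file + " while breaking hardlinks");
}
{code}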



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13336:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed on Windows with errors like:
> h4. Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-05-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468310#comment-16468310
 ] 

Yongjun Zhang commented on HDFS-13380:
--

Hi [~elgoiri],

Thank you guys for working on this. I found that this Jira is not in 
branch-3.0, but 3.0.4 is in the Fix Version/s. Would you please put it into 
branch-3.0 if that's intended?

Thanks.

 

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> e.g. for the mv source directory and the rm target directory (see the sketch 
> below).
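
The proposal amounts to something like the following hypothetical overload 
(the signature and the quota helper are assumptions, not the actual patch):

{code:java}
// Callers such as rename/delete would pass needQuotaVerify=false, since
// those operations free or move entries rather than add new ones.
protected List<RemoteLocation> getLocationsForPath(String path,
    boolean failIfLocked, boolean needQuotaVerify) throws IOException {
  if (needQuotaVerify) {
    quotaManager.verifyQuota(path); // hypothetical quota check
  }
  return getLocationsForPath(path, failIfLocked);
}
{code}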



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13380:
-
Fix Version/s: (was: 3.0.3)

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> e.g. for the mv source directory and the rm target directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13490:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Fix setSafeMode in the Router
> --
>
> Key: HDFS-13490
> URL: https://issues.apache.org/jira/browse/HDFS-13490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13490.000.patch, HDFS-13490.001.patch
>
>
> RouterRpcServer doesn't handle the isChecked parameter correctly when 
> forwarding setSafeMode to the namenodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13380:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.3
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> e.g. for the mv source directory and the rm target directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13462:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch, HDFS-13462_branch-2.000.patch
>
>
> Make the bind host for the JournalNode's HTTP and RPC servers configurable, 
> so that the hostname on which each server accepts connections can be 
> overridden.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468297#comment-16468297
 ] 

Mukul Kumar Singh commented on HDDS-33:
---

Thanks for the updated patch [~shashikant]. +1, the v1 patch looks good to me. 
I will commit this shortly.

> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-33.01.patch, HDFS-13454-HDFS-7240.000.patch, 
> HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed, as shown here:
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468293#comment-16468293
 ] 

Xiao Liang commented on HDFS-13537:
---

Thanks [~elgoiri], sure, please help take a look at [^HDFS-13537.001.patch] 
with the variable extracted.

In the test result, the failed cases do not seem related to the patch; they 
don't call the method changed in the patch.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> Since the path from getTestRootDir() is a relative path on Windows, the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468291#comment-16468291
 ] 

Xiao Chen commented on HDFS-13540:
--

Added a test; the next run should include hadoop-hdfs too.

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch
>
>
> This was found in the same scenario where HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so when it's just a close or 
> unbuffer call.
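
One way to express the proposed behavior, as a sketch with a hypothetical 
boolean parameter (field and pool names follow the stack traces above, but 
this is not the committed patch):

{code:java}
// close()/unbuffer() would pass shouldAllocate=false, so no direct buffer
// is fetched from the pool just to be released again.
private void resetCurStripeBuffer(boolean shouldAllocate) {
  if (curStripeBuf == null && shouldAllocate) {
    curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(), bufferSize);
  }
  if (curStripeBuf != null) {
    curStripeBuf.clear();
  }
}
{code}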



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: (was: HDFS-13540.01.patch)

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch
>
>
> This was found in the same scenario where HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so when it's just a close or 
> unbuffer call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: HDFS-13540.01.patch

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch
>
>
> This was found in the same scenario where HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's just a close or 
> unbuffer call.
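
A minimal sketch of that direction, assuming hadoop-common on the classpath; 
the class and method names here (StripeBufferHolder, getCurStripeBuf, 
releaseCurStripeBuf) are hypothetical and not taken from the attached patch. 
The idea is to allocate from the pool only on the read path and make 
close/unbuffer a pure release:

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.ElasticByteBufferPool;

// Hypothetical holder, for illustration only: allocate from the pool on the
// read path, and make the close/unbuffer path a pure release so it can never
// trigger a direct-buffer allocation (and hence an OOM) while tearing down.
public class StripeBufferHolder {
  private static final ElasticByteBufferPool POOL = new ElasticByteBufferPool();
  private ByteBuffer curStripeBuf; // null until the first read needs it

  // Read path: a usable stripe buffer is genuinely required here.
  ByteBuffer getCurStripeBuf(int stripeSize) {
    if (curStripeBuf == null) {
      curStripeBuf = POOL.getBuffer(true /* direct */, stripeSize);
    }
    curStripeBuf.clear();
    return curStripeBuf;
  }

  // close()/unbuffer() path: never allocate, only return what we hold.
  void releaseCurStripeBuf() {
    if (curStripeBuf != null) {
      POOL.putBuffer(curStripeBuf);
      curStripeBuf = null;
    }
  }
}
{code}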



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: HDFS-13537.001.patch

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch, HDFS-13537.001.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> Since the path returned by getTestRootDir() is a relative path (on Windows), 
> the result will be incorrect because there is no "/" between "://file" and 
> the relative path.
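
One way to make the concatenation safe for relative paths is to force the test 
root to an absolute URI path before prepending the scheme. This is a minimal, 
self-contained sketch under that assumption, not necessarily the approach taken 
in the attached patches:

{code:java}
import java.io.File;

public class JceksPathDemo {
  public static void main(String[] args) {
    // Stand-in for helper.getTestRootDir(); deliberately relative.
    String testRootDir = "target" + File.separator + "test-dir";
    // File.toURI().getPath() always yields an absolute path with a leading
    // "/", e.g. "/C:/work/target/test-dir/test.jks" on Windows or
    // "/home/user/target/test-dir/test.jks" on Linux, so the "jceks://file"
    // prefix concatenates into a well-formed URI either way.
    String jceksPath = "jceks" + "://file"
        + new File(testRootDir, "test.jks").getAbsoluteFile().toURI().getPath();
    System.out.println(jceksPath);
  }
}
{code}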



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468281#comment-16468281
 ] 

genericqa commented on HDFS-13322:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
67m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
50s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922563/HDFS-13322.003.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 26089ba405a5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 69aac69 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24157/testReport/ |
| Max. process+thread count | 338 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24157/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, 
> 

[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468277#comment-16468277
 ] 

genericqa commented on HDFS-13539:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13539 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922545/HDFS-13539.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f99378394b6a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 69aac69 |
| maven | version: 

[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468275#comment-16468275
 ] 

genericqa commented on HDFS-13537:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
54s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 58s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 450 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468230#comment-16468230
 ] 

genericqa commented on HDDS-28:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
72m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 33m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922557/o28_20180507c.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 75ad2216afa2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 69aac69 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/57/testReport/ |
| Max. process+thread count | 996 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools hadoop-tools/hadoop-ozone hadoop-dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/57/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDFS-13533) RBF: Configuration for RBF in namenode/datanode

2018-05-08 Thread Sophie Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468215#comment-16468215
 ] 

Sophie Wang commented on HDFS-13533:


Yes, this works for the nn/dn. But then how can I configure the router client 
in the nn/dn?

> RBF: Configuration for RBF in namenode/datanode
> ---
>
> Key: HDFS-13533
> URL: https://issues.apache.org/jira/browse/HDFS-13533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sophie Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13533) RBF: Configuration for RBF in namenode/datanode

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468211#comment-16468211
 ] 

Íñigo Goiri commented on HDFS-13533:


I understand the issue now. Can you try using:
{code}
<property>
  <name>dfs.internal.nameservices</name>
  <value>ns0,ns1,ns2,ns3</value>
</property>
{code}
If so, then we should clarify the documentation.

> RBF: Configuration for RBF in namenode/datanode
> ---
>
> Key: HDFS-13533
> URL: https://issues.apache.org/jira/browse/HDFS-13533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sophie Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13452) Some Potential NPE

2018-05-08 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468208#comment-16468208
 ] 

lujie commented on HDFS-13452:
--

Ping, hoping for some review.

> Some Potential NPE 
> ---
>
> Key: HDFS-13452
> URL: https://issues.apache.org/jira/browse/HDFS-13452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Priority: Major
> Attachments: HDFS-13542_1.patch
>
>
> We have developed a static analysis tool 
> [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential 
> NPEs, as described in HDFS-13451. We found another two bugs or bad practices 
> after improving the tool.
> The patch is attached here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13533) RBF: Configuration for RBF in namenode/datanode

2018-05-08 Thread Sophie Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468206#comment-16468206
 ] 

Sophie Wang commented on HDFS-13533:


For example, if you put the following configuration:
{code}
<property>
  <name>dfs.nameservices</name>
  <value>ns0,ns1,ns2,ns3,ns-fed</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns-fed</name>
  <value>r1,r2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns-fed.r1</name>
  <value>router1:rpc-port</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns-fed.r2</name>
  <value>router2:rpc-port</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ns-fed</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.client.failover.random.order</name>
  <value>true</value>
</property>
{code}
in hdfs-site.xml before you start the namenode or datanode process, the 
namenode and datanode fail to start. I think this is because the nn/dn treats 
ns-fed as one of the nameservices and tries to connect to router1:rpc-port as 
an nn.

This means that if I want to use the router in the nn/dn, I have to change 
hdfs-site.xml after the nn/dn has started; otherwise I cannot use the router 
in the nn/dn.

> RBF: Configuration for RBF in namenode/datanode
> ---
>
> Key: HDFS-13533
> URL: https://issues.apache.org/jira/browse/HDFS-13533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sophie Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468183#comment-16468183
 ] 

genericqa commented on HDFS-13540:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922549/HDFS-13540.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5e2695ec97dc 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 69aac69 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24156/testReport/ |
| Max. process+thread count | 448 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24156/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DFSStripedInputStream should not 

[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-08 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Status: Patch Available  (was: Open)

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, 
> test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-08 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Status: Open  (was: Patch Available)

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, 
> test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-08 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468173#comment-16468173
 ] 

Istvan Fajth commented on HDFS-13322:
-

Adding patch v3. After reviewing it myself, I realized that we need to check 
the environment only if the authentication method is Kerberos; otherwise, the 
connect should not need to deal with any ticket cache path. Also, it is not 
worth checking the kpath in a non-Kerberized environment.
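
As an illustration of that decision (fuse-dfs itself is C, so this Java sketch 
with hypothetical names only models the control flow, it is not the patch):

{code:java}
// Java model of the decision above, with hypothetical names; it only
// illustrates when the ticket cache path should be consulted at all.
public class TicketCacheCheck {
  enum AuthMethod { SIMPLE, KERBEROS }

  /** Ticket cache path for a new connection, or null when irrelevant. */
  static String ticketCachePath(AuthMethod auth) {
    if (auth != AuthMethod.KERBEROS) {
      // Non-Kerberized setup: skip all ticket-cache handling.
      return null;
    }
    // Kerberos: the caller's KRB5CCNAME selects the cache, and with it the
    // cached filesystem instance the connection should map to.
    return System.getenv("KRB5CCNAME");
  }

  public static void main(String[] args) {
    System.out.println("kerberos -> " + ticketCachePath(AuthMethod.KERBEROS));
    System.out.println("simple   -> " + ticketCachePath(AuthMethod.SIMPLE));
  }
}
{code}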

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, 
> test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-08 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Attachment: HDFS-13322.003.patch

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, 
> test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468158#comment-16468158
 ] 

Íñigo Goiri commented on HDFS-13537:


[~surmountian], for [^HDFS-13537.000.patch] can you skip the pom.xml fixes?
For the new file path, can we extract the variable?

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> Since the path returned by getTestRootDir() is a relative path (on Windows), 
> the result will be incorrect because there is no "/" between "://file" and 
> the relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468142#comment-16468142
 ] 

genericqa commented on HDFS-13534:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
60m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m  
1s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13534 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922534/HDFS-13534.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9451dfcecccb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1ef0a1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24153/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24153/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403 [~pifta] noticed the build broke on some platforms.  
> [~bibinchundatt] pointed out that prior to gcc 7 mutex, future, and regex 
> implicitly included functional.  Without that implicit include the compiler 
> errors on the std::function in ioservice.h.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468139#comment-16468139
 ] 

Tsz Wo Nicholas Sze commented on HDDS-28:
-

Here is a new patch: o28_20180507c.patch

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch, 
> o28_20180507c.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-08 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-28:

Attachment: o28_20180507c.patch

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch, 
> o28_20180507c.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468134#comment-16468134
 ] 

Tsz Wo Nicholas Sze commented on HDDS-28:
-

Thanks [~nandakumar131].  Good catch!  Will remove the duplicate dependency in 
hadoop-dist/pom.xml.

> This change is not necessary, as the scope of all hdds dependencies in this 
> project are defined in its parent pom hadoop-ozone/pom.xml

It won't inherit from the parent. We need to use dependencyManagement for that 
purpose. I just generated the dependency tree, and it shows that the scopes 
are "compile", not "provided".
{code}
[INFO] Building Apache Hadoop Ozone Tools 0.2.1-SNAPSHOT   [97/112]
[INFO] [ jar ]-
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ 
hadoop-ozone-tools ---
[INFO] org.apache.hadoop:hadoop-ozone-tools:jar:0.2.1-SNAPSHOT
[INFO] +- org.apache.hadoop:hadoop-ozone-common:jar:0.2.1-SNAPSHOT:provided
[INFO] +- org.apache.hadoop:hadoop-ozone-client:jar:0.2.1-SNAPSHOT:provided
[INFO] +- io.dropwizard.metrics:metrics-core:jar:3.2.4:compile
[INFO] |  \- org.slf4j:slf4j-api:jar:1.7.25:compile
[INFO] +- org.apache.hadoop:hadoop-hdds-server-scm:jar:0.2.1-SNAPSHOT:compile 
<--
[INFO] |  +- org.hamcrest:hamcrest-all:jar:1.3:compile
[INFO] |  +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] |  \- com.google.guava:guava:jar:11.0.2:compile
[INFO] +- org.apache.hadoop:hadoop-hdds-common:jar:0.2.1-SNAPSHOT:compile 
<--
{code}

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Status: Patch Available  (was: Open)

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: HDFS-13537.000.patch

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537.000.patch
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: HDFS-13540.01.patch

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are two OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's just a close or 
> unbuffer call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Status: Patch Available  (was: Open)

Throwing patch 1 here to run a pre-commit.

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are two OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's just a close or 
> unbuffer call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-08 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13540:


 Summary: DFSStripedInputStream should not allocate new buffers 
during close / unbuffer
 Key: HDFS-13540
 URL: https://issues.apache.org/jira/browse/HDFS-13540
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen


This was found in the same scenario where HDFS-13539 is caught.

There are two OOMs that look interesting:
{noformat}
FSDataInputStream#close error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
{noformat}
and 
{noformat}
org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at 
org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
at 
org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
{noformat}

As the stack trace shows, {{resetCurStripeBuffer}} will get a buffer from the 
buffer pool. We could save the cost of doing so if it's just a close or 
unbuffer call.
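
A minimal sketch of the idea (assuming a hypothetical holder class; the field 
and pool names mirror DFSStripedInputStream, but this is not the actual patch): 
only return the buffer to the pool on close/unbuffer, and defer allocation to 
the read path.
{code:java}
import java.nio.ByteBuffer;

import org.apache.hadoop.io.ElasticByteBufferPool;

/**
 * Hedged sketch: release the current stripe buffer on close/unbuffer
 * instead of asking the pool for a fresh one.
 */
class StripeBufferHolder {
  private static final ElasticByteBufferPool BUFFER_POOL =
      new ElasticByteBufferPool();

  private ByteBuffer curStripeBuf;

  /** Read path: this is where a buffer is genuinely needed. */
  ByteBuffer ensureStripeBuffer(int stripeSize) {
    if (curStripeBuf == null) {
      curStripeBuf = BUFFER_POOL.getBuffer(true /* direct */, stripeSize);
    }
    curStripeBuf.clear();
    return curStripeBuf;
  }

  /** close()/unbuffer(): release without allocating anything new. */
  void releaseStripeBuffer() {
    if (curStripeBuf != null) {
      BUFFER_POOL.putBuffer(curStripeBuf);
      curStripeBuf = null;
    }
  }
}
{code}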



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468073#comment-16468073
 ] 

Xiao Chen commented on HDFS-13539:
--

It's not clear to me why this cannot happen to DFSInputStream, so the fix 
applies to both.
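
For illustration, the guard could look roughly like this in both streams (a 
hedged sketch using a hypothetical helper; the committed patch may structure 
it differently):
{code:java}
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

/**
 * Hedged sketch of the null guard: skip the checksum-failure report when
 * no located block is available, so the original read exception propagates
 * instead of being masked by an NPE.
 */
final class CheckSumFailureGuard {
  private CheckSumFailureGuard() {
  }

  static void reportIfPossible(LocatedBlock currentLocatedBlock,
      Runnable reportAction) {
    if (currentLocatedBlock == null) {
      // Nothing to report against; let the original exception surface.
      return;
    }
    reportAction.run(); // stands in for the existing reporting logic
  }
}
{code}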



> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object.
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468064#comment-16468064
 ] 

Íñigo Goiri commented on HDFS-13537:


Thanks [~surmountian]; in the report they all show as failing 
([here|https://builds.apache.org/job/hadoop-trunk-win/460/testReport/org.apache.hadoop.fs.http.client/TestHttpFSFWithWebhdfsFileSystem/]).
There are 62 failures right now.


> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13539:
-
Attachment: HDFS-13539.01.patch

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object.
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13539:
-
Status: Patch Available  (was: Open)

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object.
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-08 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13539:


 Summary: DFSInputStream NPE when reportCheckSumFailure
 Key: HDFS-13539
 URL: https://issues.apache.org/jira/browse/HDFS-13539
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen
 Attachments: HDFS-13539.01.patch

We have seen the following exception with DFSStripedInputStream.
{noformat}
readDirect: FSDataInputStream#read error:
NullPointerException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
{noformat}
Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the only 
possible null object.

The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13538) HDFS DiskChecker should handle disk full situation

2018-05-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13538:
-
Summary: HDFS DiskChecker should handle disk full situation  (was: 
DiskChecker should handle disk full situation)

> HDFS DiskChecker should handle disk full situation
> --
>
> Key: HDFS-13538
> URL: https://issues.apache.org/jira/browse/HDFS-13538
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes the cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.
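
A minimal sketch of the intended behavior (a hypothetical check, not the actual 
DiskChecker change): classify ENOSPC as "disk full" rather than "disk bad", so 
the volume is not failed and re-replicated.
{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Hedged sketch: treat "no space left on device" as a full-but-healthy
 * disk instead of a failed one.
 */
final class DiskFullAwareCheck {
  private DiskFullAwareCheck() {
  }

  // Matching on the message text is an assumption, not the real heuristic.
  static boolean isDiskFull(IOException e) {
    String msg = e.getMessage();
    return msg != null && msg.contains("No space left on device");
  }

  static void checkWritable(File dir) throws IOException {
    File probe = new File(dir, "diskcheck.probe");
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(0);
    } catch (IOException e) {
      if (isDiskFull(e)) {
        return; // disk is full: stop writing, but don't mark the drive bad
      }
      throw e; // genuine I/O failure: let the volume be failed as before
    } finally {
      probe.delete();
    }
  }
}
{code}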



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDFS-13538) DiskChecker should handle disk full situation

2018-05-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HDDS-35 to HDFS-13538:
--

Workflow: no-reopen-closed, patch-avail  (was: patch-available, re-open 
possible)
 Key: HDFS-13538  (was: HDDS-35)
 Project: Hadoop HDFS  (was: Hadoop Distributed Data Store)

> DiskChecker should handle disk full situation
> -
>
> Key: HDFS-13538
> URL: https://issues.apache.org/jira/browse/HDFS-13538
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes the cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-35) DiskChecker should handle disk full situation

2018-05-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-35?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HADOOP-15451 to HDDS-35:


Target Version/s:   (was: 3.1.1, 2.9.2, 3.0.3, 2.8.5)
Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
 Key: HDDS-35  (was: HADOOP-15451)
 Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> DiskChecker should handle disk full situation
> -
>
> Key: HDDS-35
> URL: https://issues.apache.org/jira/browse/HDDS-35
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
>
> Fix disk checker issues reported by [~kihwal] in HADOOP-13738:
> When space is low, the OS returns ENOSPC. Instead of simply stopping writes, 
> the drive is marked bad and replication happens. This makes the cluster-wide 
> space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
> the datanode shuts down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468017#comment-16468017
 ] 

Nanda kumar commented on HDDS-28:
-

Thanks [~szetszwo] for reporting and working on the issue. Overall the patch 
looks good to me; some minor comments:

*hadoop-dist/pom.xml*
There are duplicate dependency tags for {{hadoop-hdds-tools}}: the first at 
lines 239-242 and the second at lines 254-257; the second one can be removed.

*hadoop-ozone/tools/pom.xml*
This change is not necessary, as the scope of all {{hdds}} dependencies in this 
project is defined in its parent pom {{hadoop-ozone/pom.xml}}.

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-08 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468007#comment-16468007
 ] 

James Clampffer commented on HDFS-13534:


Thanks for looking at this [~anatoli.shein]. I attached a new patch. The npm 
stuff wasn't supposed to be part of this; I commented that out in my working 
tree because container builds always seem to hang there for some reason.

I'm not able to reproduce the override warning you're seeing; what 
compiler/version are you using? Right now I think the important part is to get 
functional included, since missing it breaks the build. The warnings fixed here 
are the ones I was able to hit when using clang in the docker container.

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403, [~pifta] noticed the build broke on some platforms. 
> [~bibinchundatt] pointed out that prior to GCC 7, mutex, future, and regex 
> implicitly included functional. Without that implicit include, the compiler 
> errors out on the std::function in ioservice.h.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-08 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13534:
---
Attachment: HDFS-13534.001.patch

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403, [~pifta] noticed the build broke on some platforms. 
> [~bibinchundatt] pointed out that prior to GCC 7, mutex, future, and regex 
> implicitly included functional. Without that implicit include, the compiler 
> errors out on the std::function in ioservice.h.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-08 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467991#comment-16467991
 ] 

Xiaoyu Yao commented on HDDS-6:
---

Thanks [~ajayydv] for the update. Patch v2 looks good to me. Just a few more 
minor issues; +1 after they are fixed.

Can you fix the related Jenkins shellcheck and unit failure 
(testCompareXmlAgainstConfigurationClass)?

StorageContainerManager.java
Line 277: NIT: logon -> login

ozone-default.xml
Line 1054: "This property is dependent up on hadoop.security.authentication"  
=> "When this property is true, hadoop.security.authentication should be 
Kerberos".

Line 1060: can we leave the default empty like other hadoop services? (You will 
need to update the unit test to explicitly set these keys.)
The description can be reworded like: 
"The keytab file used by each SCM daemon to login as its
service principal. The principal name is configured with
ozone.scm.kerberos.principal."

Line 1060: Leave the default empty. The description does not match the key. 
Suggested description:
"The SCM service principal. This is typically set to
scm/_HOST@REALM.TLD. Each SCM will substitute _HOST with its
own fully qualified hostname at startup. The _HOST placeholder
allows using the same configuration setting on both SCMs
in an HA setup."
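
For illustration, a minimal sketch of what the SCM-side login could look like 
with these keys (the keytab key name here is an assumption based on the review, 
not necessarily what the patch uses):
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

/**
 * Hedged sketch of an SCM-style Kerberos login; the config key names are
 * assumptions taken from the review comments above.
 */
final class ScmKerberosLogin {
  private ScmKerberosLogin() {
  }

  static void loginAsScmUser(Configuration conf, InetSocketAddress rpcAddr)
      throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return; // hadoop.security.authentication is not set to kerberos
    }
    // SecurityUtil.login resolves the principal from the configuration
    // and replaces the _HOST placeholder with the supplied hostname.
    SecurityUtil.login(conf,
        "ozone.scm.kerberos.keytab.file",  // assumed keytab key
        "ozone.scm.kerberos.principal",    // key cited in the review
        rpcAddr.getHostName());
  }
}
{code}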

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467989#comment-16467989
 ] 

Xiao Liang commented on HDFS-13537:
---

The failed tests related to this in Windows are:

org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperationDoAs[*]
org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem.testOperation[*]
org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem.testOperationDoAs[*]

I'm preparing a patch with the fix to upload.
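
One possible shape of the fix, sketched below (a hedged illustration, not 
necessarily the uploaded patch): make the test root absolute before building 
the URI, so a "/" always follows "://file".
{code:java}
import java.io.File;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.alias.JavaKeyStoreProvider;

/**
 * Hedged sketch of a fix for the jceks path construction: an absolute
 * path's URI always starts with "/" (e.g. "/C:/..." on Windows), so the
 * concatenation below stays well-formed even for relative test roots.
 */
final class JceksPathSketch {
  private JceksPathSketch() {
  }

  static String buildJceksPath(String testRootDir) {
    Path absRoot = new Path(new File(testRootDir).getAbsolutePath());
    return JavaKeyStoreProvider.SCHEME_NAME + "://file"
        + new Path(absRoot, "test.jks").toUri();
  }
}
{code}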

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467985#comment-16467985
 ] 

Nanda kumar commented on HDDS-18:
-

Thanks [~ljain] for updating the patch. The updated patch looks good to me; 
some very minor comments/NITs:

GetKeyHandler:118 Typo {{acccess}}
RestClient:47 Unused import
RestClient:198 Line length greater than 80
OzoneKey:21 & 23 Unused imports; there are no other changes in this file.
OzoneClientUtils: javadoc missing for the class; some javadoc on the methods 
would also be useful
OzoneClientUtils: Since it's a utility class, mark it as final and add a 
private constructor (see the sketch after this list)
OzoneClientUtils:33, 36 & 42 line length greater than 80
ListVolumeHandler:33 Unused import
InfoKeyHandler:28 import statement should be expanded
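
To make the utility-class NIT concrete, a minimal sketch of the requested 
shape (the helper method is a hypothetical placeholder, not the real 
OzoneClientUtils API):
{code:java}
/**
 * Utility classes should not be instantiable or subclassable: declare the
 * class final and hide the constructor, as requested in the review above.
 */
public final class OzoneClientUtils {

  private OzoneClientUtils() {
    // no instances
  }

  /** Hypothetical static helper standing in for the real shared logic. */
  public static boolean isNullOrEmpty(String value) {
    return value == null || value.trim().isEmpty();
  }
}
{code}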


> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467955#comment-16467955
 ] 

genericqa commented on HDDS-6:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 8s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
46s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-ozone/common in HDDS-4 has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
47s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
37s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | 

[jira] [Created] (HDDS-34) Remove meta file during creation of container

2018-05-08 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-34:
--

 Summary: Remove meta file during creation of container
 Key: HDDS-34
 URL: https://issues.apache.org/jira/browse/HDDS-34
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


During container creation, a .container file and a .meta file are created.

The .meta file stores the container file name and hash. This file is not 
required.

This Jira is an attempt to clean up its usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13478) RBF: Disabled Nameservice store API

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467887#comment-16467887
 ] 

Íñigo Goiri commented on HDFS-13478:


Sorry for the mess [~yzhangal], the right fix version would be 3.0.3.

> RBF: Disabled Nameservice store API
> ---
>
> Key: HDFS-13478
> URL: https://issues.apache.org/jira/browse/HDFS-13478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13478.000.patch, HDFS-13478.001.patch, 
> HDFS-13478.002.patch, HDFS-13478.003.patch, HDFS-13478.004.patch, 
> HDFS-13478.005.patch
>
>
> We have a subcluster in our federation that is for testing and is 
> misbehaving. This has a negative impact on the performance of operations 
> that go to every subcluster (e.g., renewLease() or setSafeMode()).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467871#comment-16467871
 ] 

genericqa commented on HDDS-33:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 30s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.impl.TestContainerDeletionChoosingPolicy |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-33 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922508/HDDS-33.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux abab4df45343 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d72c1651 |
| maven | 

[jira] [Commented] (HDFS-13478) RBF: Disabled Nameservice store API

2018-05-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467849#comment-16467849
 ] 

Yongjun Zhang commented on HDFS-13478:
--

Hi [~elgoiri], [~linyiqun],

Thanks for your work on RBF.

We don't have release 3.0.3 yet, so all the jiras whose Fix Version/s is set to 
3.0.4 would really be 3.0.3. I wonder whether you intended to put them into 
3.0.4 instead of 3.0.3 for the RBF fixes?

If you really meant 3.0.3, we will need to change the Fix Version/s field 
of these jiras to 3.0.3. Would you please let me know ASAP?

Thanks.

> RBF: Disabled Nameservice store API
> ---
>
> Key: HDFS-13478
> URL: https://issues.apache.org/jira/browse/HDFS-13478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13478.000.patch, HDFS-13478.001.patch, 
> HDFS-13478.002.patch, HDFS-13478.003.patch, HDFS-13478.004.patch, 
> HDFS-13478.005.patch
>
>
> We have a subcluster in our federation that is for testing and is 
> misbehaving. This has a negative impact on the performance of operations 
> that go to every subcluster (e.g., renewLease() or setSafeMode()).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467844#comment-16467844
 ] 

Íñigo Goiri commented on HDFS-13537:


Thanks [~surmountian], can you point to the failed unit tests in the daily 
Windows build?
Here is the 
[report|https://builds.apache.org/job/hadoop-trunk-win/460/testReport/].
Notice there are currently 754 failures for Hadoop, and 264 of them seem to be 
related to HDFS.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13537:
---
Summary: TestHdfsHelper does not generate jceks path properly for relative 
path in Windows  (was: TestHdfsHelper does not generate jceks path properly for 
relative path)

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-05-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Description: Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work 
with HDFS NN configured in high availability.   (was: Provide HA for the 
{{InMemoryLevelDBAliasMapServer} to work with HDFS NN configured in high 
availability.)

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured in high availability. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path

2018-05-08 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang reassigned HDFS-13537:
-

Assignee: Xiao Liang

> TestHdfsHelper does not generate jceks path properly for relative path
> --
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path from getTestRootDir() is a relative path (as on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path

2018-05-08 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13537:
-

 Summary: TestHdfsHelper does not generate jceks path properly for 
relative path
 Key: HDFS-13537
 URL: https://issues.apache.org/jira/browse/HDFS-13537
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang


In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
{code:java}
final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
When the path from getTestRootDir() is a relative path (as on Windows), the 
result will be incorrect because there is no "/" between "://file" and the 
relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-05-08 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13536:
-

 Summary: [PROVIDED Storage] HA for InMemoryAliasMap
 Key: HDFS-13536
 URL: https://issues.apache.org/jira/browse/HDFS-13536
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


Provide HA for the {{InMemoryLevelDBAliasMapServer} to work with HDFS NN 
configured in high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13435) RBF: Improve the error loggings for printing the stack trace

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13435:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: Improve the error loggings for printing the stack trace
> 
>
> Key: HDFS-13435
> URL: https://issues.apache.org/jira/browse/HDFS-13435
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13435.001.patch, HDFS-13435.002.patch, 
> HDFS-13435.003.patch
>
>
> There are many places that use {{Logger.error(String format, Object... 
> arguments)}} incorrectly.
>  An example:
> {code:java}
> LOG.error("Cannot remove {}", path, e);
> {code}
> The exception passed here has no meaning and won't be printed. Actually it 
> should be updated to
> {code:java}
> LOG.error("Cannot remove {}: {}.", path, e.getMessage());
> {code}
> or 
> {code:java}
> LOG.error("Cannot remove " + path, e);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13453:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: getMountPointDates should fetch latest subdir time/date when parent dir 
> is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13453-000.patch, HDFS-13453-001.patch, 
> HDFS-13453-002.patch, HDFS-13453-003.patch
>
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] does not handle 
> the case when /parent is not present in the mount table but /parent/subdir is.
> In this case getMountPointDates is not able to fetch the latest time for 
> /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the 
> /parent modified time.
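
To illustrate the intended behavior, a hedged sketch (hypothetical types and 
names, not the actual router code): when /parent itself has no mount entry, 
take the latest modification time among its /parent/child entries.
{code:java}
import java.util.Map;

/**
 * Hedged sketch: derive the date shown for a virtual parent directory
 * from the newest /parent/child mount entry. The map below stands in
 * for the real mount-table lookup.
 */
final class MountPointDateSketch {
  private MountPointDateSketch() {
  }

  static long latestChildModTime(String parent,
      Map<String, Long> mountPointModTimes) {
    String prefix = parent.endsWith("/") ? parent : parent + "/";
    long latest = 0;
    for (Map.Entry<String, Long> e : mountPointModTimes.entrySet()) {
      // Only /parent/child entries contribute to /parent's date.
      if (e.getKey().startsWith(prefix)) {
        latest = Math.max(latest, e.getValue());
      }
    }
    return latest;
  }
}
{code}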



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13353:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13353.1.patch, HDFS-13353.2.patch, 
> HDFS-13353.3.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): 

[jira] [Commented] (HDFS-13535) Fix libhdfs++ doxygen build

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467799#comment-16467799
 ] 

genericqa commented on HDFS-13535:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
56m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
38s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13535 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922483/HDFS-13535.0.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  cc  |
| uname | Linux e53f8444d677 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d72c1651 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24152/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24152/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix libhdfs++ doxygen build
> ---
>
> Key: HDFS-13535
> URL: https://issues.apache.org/jira/browse/HDFS-13535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Mitchell Tracy
>

[jira] [Commented] (HDFS-13533) Configuration for RBF in namenode/datanode

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467789#comment-16467789
 ] 

Íñigo Goiri commented on HDFS-13533:


Thanks [~CrazyLady] for reporting this.
The one you report in your first comment is related to HADOOP-14741.
We intentionally moved all the ZK endpoint settings to core-site.xml because 
different components may use the same endpoint.
That being said, after checking the documentation, this is not very clear.
I would use this JIRA to improve the documentation and update hdfs-site.xml to 
point to the option in core-site.xml.
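As a small illustration of where the endpoint now lives (assuming the 
{{hadoop.zk.address}} key that HADOOP-14741 introduced in core-site.xml):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ZkEndpointCheck {
  public static void main(String[] args) {
    // core-site.xml (not hdfs-site.xml) carries the ZooKeeper ensemble,
    // since several components may share the same endpoint.
    Configuration conf = new Configuration(); // loads core-site.xml by default
    System.out.println("ZK ensemble: " + conf.get("hadoop.zk.address"));
  }
}
{code}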

bq. For namenode/datanode. If add client router configuration, 
namenode/datanode would failed to start. Suggest to move client router 
configuration to 

Can you give more details?
We have that setup internally and it is working with no issues; can you give 
details on the error?

> Configuration for RBF in namenode/datanode
> --
>
> Key: HDFS-13533
> URL: https://issues.apache.org/jira/browse/HDFS-13533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sophie Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13533) RBF: Configuration for RBF in namenode/datanode

2018-05-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13533:
---
Summary: RBF: Configuration for RBF in namenode/datanode  (was: 
Configuration for RBF in namenode/datanode)

> RBF: Configuration for RBF in namenode/datanode
> ---
>
> Key: HDFS-13533
> URL: https://issues.apache.org/jira/browse/HDFS-13533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sophie Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-33:

Attachment: HDDS-33.01.patch

> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-33.01.patch, HDFS-13454-HDFS-7240.000.patch, 
> HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed.
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.
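> A minimal sketch of one way to restructure the flow so every assertion runs, 
> using plain try/fail/catch instead of the shared {{ExpectedException}} rule 
> ({{assertExceptionContains}} is from {{org.apache.hadoop.test.GenericTestUtils}}; 
> the variable names are those from the snippet above):
> {code:java}
> try {
>   cluster.getStorageContainerManager().getBlockLocations(keys);
>   Assert.fail("Lookup of a deleted block key should have failed");
> } catch (IOException e) {
>   GenericTestUtils.assertExceptionContains(
>       "Specified block key does not exist", e);
> }
> // Delete the key again to test deleting a non-existing key.
> // This part now actually executes.
> try {
>   storageHandler.deleteKey(keyArgs);
>   Assert.fail("Deleting a non-existing key should have failed");
> } catch (IOException e) {
>   GenericTestUtils.assertExceptionContains("KEY_NOT_FOUND", e);
> }
> Assert.assertEquals(1 + numKeyDeleteFails,
>     ksmMetrics.getNumKeyDeletesFails());
> {code}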



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13429) libhdfs++ Expose a C++ logging API

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467734#comment-16467734
 ] 

genericqa commented on HDFS-13429:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
62m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 56s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13429 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922472/HDFS-13429.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 652784be7f22 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d72c1651 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24151/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24151/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24151/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24151/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++ Expose a C++ logging API
> --
>
> Key: HDFS-13429
> URL: https://issues.apache.org/jira/browse/HDFS-13429
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>

[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: HDDS-6-HDDS-4.01.patch

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: (was: HDDS-4-HDDS-6.01.patch)

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-30) Fix TestContainerSQLCli

2018-05-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-30:
---

Assignee: Shashikant Banerjee

> Fix TestContainerSQLCli
> ---
>
> Key: HDDS-30
> URL: https://issues.apache.org/jira/browse/HDDS-30
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Shashikant Banerjee
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12981) renameSnapshot a Non-Existent snapshot to itself should throw error

2018-05-08 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12981:
-
Fix Version/s: (was: 3.0.4)
   3.0.3

> renameSnapshot a Non-Existent snapshot to itself should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch, HDFS-12981.004.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, there are no 
> errors and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs -createSnapshot /tmp/dir1 snap1_dir
> Renaming from non-existent to another_non-existent gives an error and return 
> code 1. This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist ; echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Renaming from non-existent to the same non-existent name gives no error and 
> return code 0 instead of an error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist ; echo $?
> Current behavior:   No error and return code 0.
> Expected behavior:  An error returned and return code 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467707#comment-16467707
 ] 

genericqa commented on HDDS-33:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-33 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-33 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920994/HDFS-13454.000.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/54/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDFS-13454-HDFS-7240.000.patch, HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed.
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-33) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh moved HDFS-13454 to HDDS-33:
--

   Fix Version/s: (was: HDFS-7240)
  0.2.1
Target Version/s:   (was: HDFS-7240)
 Component/s: (was: ozone)
  Ozone Manager
Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
 Key: HDDS-33  (was: HDFS-13454)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDDS-33
> URL: https://issues.apache.org/jira/browse/HDDS-33
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDFS-13454-HDFS-7240.000.patch, HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed.
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13454) Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey

2018-05-08 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13454:
-
Issue Type: Test  (was: Sub-task)
Parent: (was: HDFS-7240)

> Ozone : Fix the test logic in TestKeySpaceManager#testDeleteKey
> ---
>
> Key: HDFS-13454
> URL: https://issues.apache.org/jira/browse/HDFS-13454
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: ozone
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13454-HDFS-7240.000.patch, HDFS-13454.000.patch
>
>
> The test logic in TestKeySpaceManager#testDeleteKey seems to be wrong. The 
> test validates the keyArgs instead of the blockId to make sure the key gets 
> deleted from SCM. Also, after the first exception validation, the subsequent 
> statements in the JUnit test never get executed.
> {code:java}
> keys.add(keyArgs.getResourceName());
> exception.expect(IOException.class);
> exception.expectMessage("Specified block key does not exist");
> cluster.getStorageContainerManager().getBlockLocations(keys);
> // Delete the key again to test deleting non-existing key.
> // These will never get executed.
> exception.expect(IOException.class);
> exception.expectMessage("KEY_NOT_FOUND");
> storageHandler.deleteKey(keyArgs);
> Assert.assertEquals(1 + numKeyDeleteFails,
> ksmMetrics.getNumKeyDeletesFails());{code}
> The test needs to be modified to address all these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-31) Fix TestSCMCli

2018-05-08 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-31:
---

Assignee: Lokesh Jain

> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>
> [ERROR]   TestSCMCli.testListContainerCommand:406
> [ERROR] Errors:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13535) Fix libhdfs++ doxygen build

2018-05-08 Thread Mitchell Tracy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467641#comment-16467641
 ] 

Mitchell Tracy commented on HDFS-13535:
---

With the above patch applied, the doxygen build can be run from 
hadoop-hdfs-project/hadoop-hdfs-native-client via: {code:bash}mvn package -Pdoc{code} It 
generates HTML documentation at 
hadoop-hdfs-project/hadoop-hdfs-native-client/target/doc/html

> Fix libhdfs++ doxygen build
> ---
>
> Key: HDFS-13535
> URL: https://issues.apache.org/jira/browse/HDFS-13535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Mitchell Tracy
>Priority: Major
> Attachments: HDFS-13535.0.patch
>
>
> Currently, the doxygen build for libhdfs++ doesn't include all of the 
> necessary source directories. In addition, the build does not generate the 
> actual HTML documentation. So the fix is to include all the required source 
> directories when generating the doxyfile, and then add a maven step for 
> generating the HTML documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13535) Fix libhdfs++ doxygen build

2018-05-08 Thread Mitchell Tracy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell Tracy updated HDFS-13535:
--
Attachment: HDFS-13535.0.patch
Status: Patch Available  (was: Open)

> Fix libhdfs++ doxygen build
> ---
>
> Key: HDFS-13535
> URL: https://issues.apache.org/jira/browse/HDFS-13535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.2
>Reporter: Mitchell Tracy
>Priority: Major
> Attachments: HDFS-13535.0.patch
>
>
> Currently, the doxygen build for libhdfs++ doesn't include all of the 
> necessary source directories. In addition, the build does not generate the 
> actual HTML documentation. So the fix is to include all the required source 
> directories when generating the doxyfile, and then add a maven step for 
> generating the HTML documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-08 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467504#comment-16467504
 ] 

Anatoli Shein edited comment on HDFS-13534 at 5/8/18 4:01 PM:
--

[~James C], is the below line commented out on purpose?
{code:java}
npm install -g ember-cli{code}
Also, the following line is giving me an error "marked override, but does not 
override":
{code:java}
ProducerResult Produce() override = 0;{code}


was (Author: anatoli.shein):
[~James C], is the below line commented out on purpose?
{code:java}
npm install -g ember-cli
{code}

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-13534.000.patch
>
>
> After merging HDFS-13403, [~pifta] noticed the build broke on some platforms. 
> [~bibinchundatt] pointed out that prior to GCC 7, <mutex>, <future>, and 
> <regex> implicitly included <functional>. Without that implicit include, the 
> compiler errors on the std::function usage in ioservice.h, so <functional> 
> needs to be included explicitly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


