[jira] [Commented] (HDFS-13890) Allow Delimited PB OIV tool to print out INodeReferences

2018-09-03 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602628#comment-16602628
 ] 

Xiao Chen commented on HDFS-13890:
--

Thanks for reporting and looking into this [~adam.antal]. The proposed improvements sound good.

Because the added column will change the output format, we may want to leave it off by 
default and use an optional flag to turn the behavior on. I don't have 
concerns about passing the refIdList object around - as long as it doesn't 
significantly increase memory consumption, it should be fine. :)
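
For context, a Delimited run today looks like the first command below; the second 
shows how an opt-in flag could be wired in (the flag name is purely illustrative 
and not an existing option):

{noformat}
# Current behaviour: Delimited output, INodeReference entries are skipped
hdfs oiv -p Delimited -i fsimage_0000000000000000042 -o fsimage.csv

# Hypothetical opt-in flag that would also emit INodeReference rows
hdfs oiv -p Delimited -printINodeReferences -i fsimage_0000000000000000042 -o fsimage.csv
{noformat}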

> Allow Delimited PB OIV tool to print out INodeReferences
> 
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this information is also shown in the Delimited output 
> format.






[jira] [Updated] (HDFS-13885) Add debug logs in dfsclient around decrypting EDEK

2018-09-03 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13885:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanks for the contribution, Kitti.

> Add debug logs in dfsclient around decrypting EDEK
> --
>
> Key: HDFS-13885
> URL: https://issues.apache.org/jira/browse/HDFS-13885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13885.001.patch, HDFS-13885.002.patch, 
> HDFS-13885.003.patch
>
>
> We want to know, from the HDFS client log (e.g. HBase RS logs), for each 
> CryptoOutputStream approximately when the decrypt happens and when the 
> file read happens, to help us rule out or identify a slow HDFS NN / KMS / DN.
>  
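
For anyone who wants to see these messages, client-side debug logging has to be 
enabled; a minimal log4j.properties sketch, assuming the new logs go through the 
DFSClient and crypto stream loggers (adjust the logger names to wherever the 
logs actually live):

{noformat}
# Hedged example: enable client-side debug logging for the decrypt/read path
log4j.logger.org.apache.hadoop.hdfs.DFSClient=DEBUG
log4j.logger.org.apache.hadoop.crypto.CryptoOutputStream=DEBUG
{noformat}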






[jira] [Commented] (HDFS-13885) Add debug logs in dfsclient around decrypting EDEK

2018-09-03 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602617#comment-16602617
 ] 

Xiao Chen commented on HDFS-13885:
--

+1

> Add debug logs in dfsclient around decrypting EDEK
> --
>
> Key: HDFS-13885
> URL: https://issues.apache.org/jira/browse/HDFS-13885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13885.001.patch, HDFS-13885.002.patch, 
> HDFS-13885.003.patch
>
>
> We want to know, from the HDFS client log (e.g. HBase RS logs), for each 
> CryptoOutputStream approximately when the decrypt happens and when the 
> file read happens, to help us rule out or identify a slow HDFS NN / KMS / DN.
>  






[jira] [Updated] (HDFS-13885) Add debug logs in dfsclient around decrypting EDEK

2018-09-03 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13885:
-
Summary: Add debug logs in dfsclient around decrypting EDEK  (was: Improve 
debugging experience of dfsclient decrypts)

> Add debug logs in dfsclient around decrypting EDEK
> --
>
> Key: HDFS-13885
> URL: https://issues.apache.org/jira/browse/HDFS-13885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13885.001.patch, HDFS-13885.002.patch, 
> HDFS-13885.003.patch
>
>
> We want to know, from the HDFS client log (e.g. HBase RS logs), for each 
> CryptoOutputStream approximately when the decrypt happens and when the 
> file read happens, to help us rule out or identify a slow HDFS NN / KMS / DN.
>  






[jira] [Created] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13893:


 Summary: DiskBalancer: no validations for Disk balancer commands 
 Key: HDFS-13893
 URL: https://issues.apache.org/jira/browse/HDFS-13893
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Run a Disk Balancer command with extra arguments passed in:

{noformat} 
hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
*sgfsdgfs*
2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
hostname:50077
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
hostname:50077 took 23 ms
2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
2018-08-31 14:57:35,457 INFO command.Command: 
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
Writing plan to:
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
{noformat} 

Expected Output:- 
=
Disk Balancer commands should fail if any invalid or extra arguments are passed.
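
One way to get that behaviour - sketched below with Apache Commons CLI, which is 
an assumption about how the options are parsed rather than the actual 
DiskBalancerCLI code - is to reject any tokens that remain after option parsing:

{code}
// Illustrative sketch only: fail fast when unrecognized trailing arguments remain.
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

public class ArgValidationSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    opts.addOption("plan", true, "hostname to plan for");
    opts.addOption("thresholdPercentage", true, "threshold percentage");

    CommandLine cmd = new DefaultParser().parse(opts, args);
    if (!cmd.getArgList().isEmpty()) {
      // e.g. the stray "*sgfsdgfs*" token from the scenario above ends up here
      System.err.println("Unrecognized arguments: " + cmd.getArgList());
      System.exit(1);
    }
    // ... continue with the normal plan/execute flow ...
  }
}
{code}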






[jira] [Updated] (HDFS-13892) Disk Balancer : Invalid exit code for disk balancer execute command

2018-09-03 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy updated HDFS-13892:
-
Description: 
{{scenario:-}}

1. Write about 5 GB of data with one DISK
 2. Add one more non-empty disk to the above Datanode 
 3. Run the plan command for that specific datanode 
 4. Run the execute command with the above plan file
 The execute step did not actually happen, as per the datanode log:
{noformat}
ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination volume: 
file:/Test_Disk/DISK2/ does not have enough space to accommodate a block. Block 
Size: 268435456 Exiting from copyBlocks.
{noformat}
5. Check the exit code of the execute command - it displays 0.

{{Expected Result :-}}

1. The exit code should be 1, because the execution did not happen.
 2. In this scenario the error message should also be printed on the console, so 
the customer/user knows the execute step did not happen.

  was:
{{scenario:-}}

1. Write some 5GB data with one DISK
2. Add one more non-empty Disk to above Datanode 
3.Run the plan command for the above specific datanode 
4. run the Execute command with the above plan file
the above execute command not happened as per the datanode log 
{noformat}
ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination volume: 
file:/Test_Disk/DISK2/ does not have enough space to accommodate a block. Block 
Size: 268435456 Exiting from copyBlocks.
{noformat}
5. see the exit code for execute command, it display the 0

{{Expected Result :-}}

1. Exit code should be 1 why means execute command was not happened 
2. In this type of scenario In console print the that error message that time 
customer/user knows execute was not happened. 



> Disk Balancer : Invalid exit code for disk balancer execute command
> ---
>
> Key: HDFS-13892
> URL: https://issues.apache.org/jira/browse/HDFS-13892
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {{scenario:-}}
> 1. Write about 5 GB of data with one DISK
>  2. Add one more non-empty disk to the above Datanode 
>  3. Run the plan command for that specific datanode 
>  4. Run the execute command with the above plan file
>  The execute step did not actually happen, as per the datanode log:
> {noformat}
> ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination 
> volume: file:/Test_Disk/DISK2/ does not have enough space to accommodate a 
> block. Block Size: 268435456 Exiting from copyBlocks.
> {noformat}
> 5. Check the exit code of the execute command - it displays 0.
> {{Expected Result :-}}
> 1. The exit code should be 1, because the execution did not happen.
>  2. In this scenario the error message should also be printed on the console, 
> so the customer/user knows the execute step did not happen.
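
A minimal sketch of the expected behaviour - the helper below is hypothetical and 
not the actual DiskBalancer execute path - would surface the copyBlocks failure as 
a console message and a non-zero exit code:

{code}
// Illustrative sketch only: propagate an execute failure instead of exiting with 0.
public class ExecuteExitCodeSketch {
  // Hypothetical helper standing in for the copyBlocks/execute step.
  static boolean runCopyBlocks(String planFile) {
    return false; // pretend the destination volume ran out of space
  }

  public static void main(String[] args) {
    if (!runCopyBlocks("hostname.plan.json")) {
      System.err.println("Disk balancer execute failed: destination volume does "
          + "not have enough space to accommodate a block.");
      System.exit(1); // expected: exit code 1, not 0
    }
  }
}
{code}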






[jira] [Created] (HDFS-13892) Disk Balancer : Invalid exit code for disk balancer execute command

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13892:


 Summary: Disk Balancer : Invalid exit code for disk balancer 
execute command
 Key: HDFS-13892
 URL: https://issues.apache.org/jira/browse/HDFS-13892
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{scenario:-}}

1. Write about 5 GB of data with one DISK
2. Add one more non-empty disk to the above Datanode 
3. Run the plan command for that specific datanode 
4. Run the execute command with the above plan file
The execute step did not actually happen, as per the datanode log:
{noformat}
ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination volume: 
file:/Test_Disk/DISK2/ does not have enough space to accommodate a block. Block 
Size: 268435456 Exiting from copyBlocks.
{noformat}
5. Check the exit code of the execute command - it displays 0.

{{Expected Result :-}}

1. The exit code should be 1, because the execution did not happen.
2. In this scenario the error message should also be printed on the console, so 
the customer/user knows the execute step did not happen. 







[jira] [Commented] (HDDS-399) Handle pipeline discovery on SCM restart.

2018-09-03 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602579#comment-16602579
 ] 

Mukul Kumar Singh commented on HDDS-399:


Adding a WIP patch, also working on adding a test for this patch.

> Handle pipeline discovery on SCM restart.
> -
>
> Key: HDDS-399
> URL: https://issues.apache.org/jira/browse/HDDS-399
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-399.001.patch
>
>
> On SCM restart, as part of node registration, SCM should find out the list of 
> open pipelines on the node. Once all the nodes of a pipeline have reported 
> back, it should be added as an active pipeline for further allocations.






[jira] [Updated] (HDDS-399) Handle pipeline discovery on SCM restart.

2018-09-03 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-399:
---
Attachment: HDDS-399.001.patch

> Handle pipeline discovery on SCM restart.
> -
>
> Key: HDDS-399
> URL: https://issues.apache.org/jira/browse/HDDS-399
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-399.001.patch
>
>
> On SCM restart, as part of node registration, SCM should find out the list of 
> open pipelines on the node. Once all the nodes of a pipeline have reported 
> back, it should be added as an active pipeline for further allocations.






[jira] [Created] (HDDS-399) Handle pipeline discovery on SCM restart.

2018-09-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-399:
--

 Summary: Handle pipeline discovery on SCM restart.
 Key: HDDS-399
 URL: https://issues.apache.org/jira/browse/HDDS-399
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


On SCM restart, as part of node registration, SCM should find out the list of 
open pipelines on the node. Once all the nodes of a pipeline have reported 
back, it should be added as an active pipeline for further allocations.
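
A rough sketch of that bookkeeping is shown below; the class and method names are 
illustrative only and do not reflect the actual SCM pipeline manager API:

{code}
// Illustrative sketch: mark a pipeline active once every member datanode has re-registered.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class PipelineReactivationSketch {
  // pipelineId -> datanodes that form the pipeline (known from persisted state)
  private final Map<String, Set<String>> members = new ConcurrentHashMap<>();
  // pipelineId -> datanodes that have reported the pipeline as open since the restart
  private final Map<String, Set<String>> reported = new ConcurrentHashMap<>();
  private final Set<String> activePipelines = ConcurrentHashMap.newKeySet();

  void onDatanodeRegistration(String datanodeId, Set<String> openPipelinesOnNode) {
    for (String pipelineId : openPipelinesOnNode) {
      reported.computeIfAbsent(pipelineId, p -> ConcurrentHashMap.newKeySet()).add(datanodeId);
      Set<String> expected = members.get(pipelineId);
      if (expected != null && reported.get(pipelineId).containsAll(expected)) {
        activePipelines.add(pipelineId); // all members reported back: usable for allocations
      }
    }
  }
}
{code}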






[jira] [Updated] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-315:
---
Status: Open  (was: Patch Available)

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch, HDDS-315.002.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws a 
> 'KEY_NOT_FOUND' error. However, the key shows up in the 'listKey' command.
> In the example below, 'dir1' was created using ozoneFs; infoKey for the 
> directory throws an error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  






[jira] [Commented] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602568#comment-16602568
 ] 

Dinesh Chitlangia commented on HDDS-315:


[~msingh] Thanks for your review comments. As discussed, we agreed that we need 
to investigate and find out where the trailing / is getting dropped, so we can 
address the problem at the source instead of fiddling with it in 
KeyManagerImpl#lookupKey. I will get back with more details and possibly a 
patch soon.
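
As a toy illustration of the kind of bug being hunted here - this is hypothetical 
code, not the Ozone shell's actual argument handling - a path normalization step 
can silently drop the trailing slash that distinguishes the directory key:

{code}
// Hypothetical illustration: java.nio normalization drops the trailing '/'.
import java.nio.file.Paths;

public class TrailingSlashSketch {
  public static void main(String[] args) {
    String rawKey = "dir1/"; // the key name as listKey reports it
    String looked = Paths.get("/root-volume/root-bucket/", rawKey)
        .getFileName().toString(); // -> "dir1", the slash is gone
    System.out.println(rawKey + " vs " + looked); // a lookup for "dir1" misses key "dir1/"
  }
}
{code}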

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch, HDDS-315.002.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws a 
> 'KEY_NOT_FOUND' error. However, the key shows up in the 'listKey' command.
> In the example below, 'dir1' was created using ozoneFs; infoKey for the 
> directory throws an error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Sree Vaddi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602547#comment-16602547
 ] 

Sree Vaddi commented on HDDS-395:
-

[~anu] [~elek] [~msingh]

Brainstorming: how about coming up with a config validation class/method, instead 
of stopping at checking whether the folders exist? We could check that the config 
files exist in those folders, that they are valid config files (xml, json, or 
key=value validation), and that they contain the bare minimum set of keys with 
valid values (data dir, cpu, ram, etc.). A rough sketch follows this comment.

Make it re-usable, so it becomes the single, canonical way to validate 
hadoop/hdds/ozone config files, and extendable, so other integrating systems can 
add their own 'bare minimum' set of key and value validations.
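
A minimal sketch of such a validator - the class, file, and key names here are 
illustrative assumptions, not an existing Hadoop/Ozone API:

{code}
// Illustrative sketch: check that the config directory and file exist, the file parses,
// and the bare-minimum keys are present with a value.
import java.io.File;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfigValidationSketch {
  public static void validate(File confDir, String fileName, List<String> requiredKeys) {
    if (!confDir.isDirectory()) {
      throw new IllegalStateException("Config directory does not exist: " + confDir);
    }
    File confFile = new File(confDir, fileName);
    if (!confFile.isFile()) {
      throw new IllegalStateException("Config file does not exist: " + confFile);
    }
    Configuration conf = new Configuration(false);
    conf.addResource(new Path(confFile.toURI()));
    for (String key : requiredKeys) {
      if (conf.get(key) == null) { // triggers loading; throws on unparseable files
        throw new IllegalStateException("Missing required key: " + key);
      }
    }
  }
}
{code}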

 

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-09-03 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602540#comment-16602540
 ] 

LiXin Ge commented on HDDS-336:
---

Thanks [~elek] for committing this. I'd be happy to keep participating in Ozone 
improvements; please feel free to let me know if there is anything you need.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid (=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> This would be a very big help when testing the current SCM behaviour.






[jira] [Commented] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602537#comment-16602537
 ] 

Mukul Kumar Singh commented on HDDS-315:


Thanks for working on this [~dineshchitlangia]. Can you please add the reason why 
infoKey fails when a "/" is appended at the end of the key name? For example, the 
following request:

{code}
hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Command Failed : Lookup key failed, error:KEY_NOT_FOUND
{code}

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch, HDDS-315.002.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws a 
> 'KEY_NOT_FOUND' error. However, the key shows up in the 'listKey' command.
> In the example below, 'dir1' was created using ozoneFs; infoKey for the 
> directory throws an error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602532#comment-16602532
 ] 

Anu Engineer commented on HDDS-395:
---

I was thinking about it a bit more; maybe the right fix is what [~elek] 
suggested. Let us make the system not fail even if both (the OS environment 
variable and the JVM property) are not set. It is a trivial change in 
DBConfigFromFile.java.

Then we don't have an explicit dependency on this CONFIG in the java code path 
(but we will fail if it is not set up, unfortunately).
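
A rough sketch of that direction is below; it mirrors the shape of the existing 
lookup but is not the actual DBConfigFromFile code, and the JVM property name is 
an assumption:

{code}
// Illustrative sketch: return null instead of throwing when no config directory is set,
// so the caller falls back to default RocksDB options.
import java.io.File;

public class DbConfigLocationSketch {
  public static File getConfigLocation() {
    String dir = System.getenv("HADOOP_CONF_DIR");
    if (dir == null || dir.isEmpty()) {
      dir = System.getProperty("hadoop.conf.dir"); // assumed JVM property fallback
    }
    if (dir == null || dir.isEmpty()) {
      return null; // nothing configured: caller should use defaults rather than fail
    }
    return new File(dir);
  }
}
{code}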

 

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Updated] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-395:
--
Fix Version/s: 0.2.1

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Updated] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-03 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-289:
-
Status: Patch Available  (was: Open)

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch
>
>
> Please see the example below. Here the user issues a command to create a 
> bucket, where /namit is the volume. 
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning / error message instead of silently ignoring 
> everything after the extra '/'.
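
A minimal sketch of the kind of validation being asked for - the parsing below is 
illustrative only, not the actual RpcClient path handling:

{code}
// Illustrative sketch: reject bucket paths with more than the /volume/bucket components.
public class BucketPathValidationSketch {
  public static void main(String[] args) {
    String path = "/namit/hjk/fgh";
    String[] parts = path.replaceAll("^/+", "").split("/");
    if (parts.length != 2) {
      System.err.println("Invalid bucket path '" + path
          + "': expected exactly /volume/bucket but found " + parts.length + " components.");
      System.exit(1); // expected: warn/fail instead of silently creating namit/hjk
    }
    System.out.println("Creating bucket " + parts[0] + "/" + parts[1]);
  }
}
{code}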






[jira] [Updated] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-03 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-289:
-
Status: Open  (was: Patch Available)

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch
>
>
> Please see the example below. Here the user issues a command to create a 
> bucket, where /namit is the volume. 
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning / error message instead of silently ignoring 
> everything after the extra '/'.






[jira] [Updated] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-395:

Labels: pull-request-available  (was: )

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: pull-request-available
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602505#comment-16602505
 ] 

ASF GitHub Bot commented on HDDS-395:
-

GitHub user sreev opened a pull request:

https://github.com/apache/hadoop/pull/410

add non-empty folder when test cannot find HADOOP_CONF_DIR [https://i…

…ssues.apache.org/jira/browse/HDDS-395] - Sree Vaddi

This is to avoid throwing an exception that prevents the cluster from starting.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sreev/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/410.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #410


commit 48e4ce892a73e3545e6ce22fadb407057661ddf2
Author: Sree Vaddi <441385+sreev@...>
Date:   2018-09-04T00:32:40Z

add non-empty folder when test cannot find HADOOP_CONF_DIR 
[https://issues.apache.org/jira/browse/HDDS-395] - Sree Vaddi




> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: pull-request-available
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}





[jira] [Comment Edited] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-03 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602473#comment-16602473
 ] 

Tsz Wo Nicholas Sze edited comment on HDDS-383 at 9/3/18 10:40 PM:
---

Thanks [~shashikant].

- Should the index be set to currentStreamIndex instead of 0?
{code}
+ListIterator streamEntryIterator =
+streamEntries.listIterator(currentStreamIndex);
+int index = 0;
{code}

- In both discardPreallocatedBlocks(..) and removeEmptyBlocks(), the index 
should not be incremented if remove(index) is called.

- How about combining locationInfoList and streamEntries into one list?


was (Author: szetszwo):
Thanks [~shashikant].

- Should the index be set to currentStreamIndex instead of 0?
{code}
+ListIterator streamEntryIterator =
+streamEntries.listIterator(currentStreamIndex);
+int index = 0;
{code}

- In both discardPreallocatedBlocks(..) and removeEmptyBlocks(), the index 
should not be increamented if remove(index) is called.

- How about combining locationInfoList and streamEntries into one list?

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch
>
>
> When key write happens in Ozone client, based on the initial size given, 
> preallocation of blocks happen. While write happens, containers can get 
> closed and if the remaining preallocated blocks  belong to closed containers 
> , they can be discarded right away instead of trying to write these blocks 
> and failing with exception. This Jira aims to address this.






[jira] [Commented] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-03 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602473#comment-16602473
 ] 

Tsz Wo Nicholas Sze commented on HDDS-383:
--

Thanks [~shashikant].

- Should the index be set to currentStreamIndex instead of 0?
{code}
+ListIterator streamEntryIterator =
+streamEntries.listIterator(currentStreamIndex);
+int index = 0;
{code}

- In both discardPreallocatedBlocks(..) and removeEmptyBlocks(), the index 
should not be incremented if remove(index) is called.

- How about combining locationInfoList and streamEntries into one list?
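
The index-handling pitfall mentioned above can be illustrated with a small 
standalone example (not the Ozone client code itself):

{code}
// Illustrative sketch: when removing by index, do not advance the index after a removal;
// alternatively, use ListIterator.remove() and let the iterator manage positions.
import java.util.ArrayList;
import java.util.List;

public class RemoveWhileIteratingSketch {
  public static void main(String[] args) {
    List<String> entries = new ArrayList<>(List.of("keep", "drop", "drop", "keep"));

    for (int index = 0; index < entries.size(); ) {
      if (entries.get(index).equals("drop")) {
        entries.remove(index); // elements shift left, so index already points at the next one
      } else {
        index++; // only advance when nothing was removed
      }
    }
    System.out.println(entries); // [keep, keep]
  }
}
{code}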

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch
>
>
> When key write happens in Ozone client, based on the initial size given, 
> preallocation of blocks happen. While write happens, containers can get 
> closed and if the remaining preallocated blocks  belong to closed containers 
> , they can be discarded right away instead of trying to write these blocks 
> and failing with exception. This Jira aims to address this.






[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602468#comment-16602468
 ] 

Hadoop QA commented on HDDS-98:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
34s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not 

[jira] [Commented] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602466#comment-16602466
 ] 

Hadoop QA commented on HDDS-315:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
25s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-315 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938172/HDDS-315.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c2e015dd35c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-372) There are two buffer copies in ChunkOutputStream

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602465#comment-16602465
 ] 

Hadoop QA commented on HDDS-372:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-372 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937703/HDDS-372.20180829.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/955/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> There are two buffer copies in ChunkOutputStream
> 
>
> Key: HDDS-372
> URL: https://issues.apache.org/jira/browse/HDDS-372
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-372.20180829.patch
>
>
> Currently, there are two buffer copies in ChunkOutputStream
> # from byte[] to ByteBuffer, and
> # from ByteBuffer to ByteString.
> We should eliminate the ByteBuffer in the middle.
> For zero copy io, we should support WritableByteChannel instead of 
> OutputStream.  It won't be done in this JIRA.
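As an illustration of the copy elimination described above (a minimal sketch, not the attached patch; the class and method names are hypothetical):

{code}
import java.nio.ByteBuffer;
import com.google.protobuf.ByteString;

public final class ChunkBufferSketch {

  // Current pattern described above: two copies per chunk.
  static ByteString withIntermediateBuffer(byte[] data, int off, int len) {
    ByteBuffer buffer = ByteBuffer.allocate(len);
    buffer.put(data, off, len);          // copy 1: byte[] -> ByteBuffer
    buffer.flip();
    return ByteString.copyFrom(buffer);  // copy 2: ByteBuffer -> ByteString
  }

  // Proposed direction: build the ByteString straight from the array,
  // dropping the ByteBuffer in the middle and one of the copies.
  static ByteString withoutIntermediateBuffer(byte[] data, int off, int len) {
    return ByteString.copyFrom(data, off, len);
  }
}
{code}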



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-372) There are two buffer copies in ChunkOutputStream

2018-09-03 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602464#comment-16602464
 ] 

Tsz Wo Nicholas Sze commented on HDDS-372:
--

Thanks [~shashikant] for the review.  Will fix the bugs in the patch.


> There are two buffer copies in ChunkOutputStream
> 
>
> Key: HDDS-372
> URL: https://issues.apache.org/jira/browse/HDDS-372
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-372.20180829.patch
>
>
> Currently, there are two buffer copies in ChunkOutputStream
> # from byte[] to ByteBuffer, and
> # from ByteBuffer to ByteString.
> We should eliminate the ByteBuffer in the middle.
> For zero copy io, we should support WritableByteChannel instead of 
> OutputStream.  It won't be done in this JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-315:
---
Attachment: HDDS-315.002.patch
Status: Patch Available  (was: Open)

[~nandakumar131] - Thanks for reviewing. Attached new patch 002.

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch, HDDS-315.002.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws 
> 'KEY_NOT_FOUND' error. However, it shows up in the 'listKey' command.
> Here in this example, 'dir1' was created using ozoneFS , infoKey for the 
> directory throws error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  
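For illustration, one possible shape of such a fix (not necessarily what the attached patches do), assuming directory keys are stored with a trailing "/" as the listKey output above shows; the key-table type here is a stand-in:

{code}
import java.io.IOException;
import java.util.Map;

public final class DirectoryKeyLookupSketch {

  /** Stand-in for the OM key table; maps key name to some key info. */
  private final Map<String, Object> keyTable;

  DirectoryKeyLookupSketch(Map<String, Object> keyTable) {
    this.keyTable = keyTable;
  }

  /**
   * Look up a key for infoKey; if it is missing and the name has no trailing
   * slash, retry with "/" appended, since ozoneFs stores directories as keys
   * ending in "/".
   */
  Object infoKey(String keyName) throws IOException {
    Object info = keyTable.get(keyName);
    if (info == null && !keyName.endsWith("/")) {
      info = keyTable.get(keyName + "/");   // fall back to the directory form
    }
    if (info == null) {
      throw new IOException("Lookup key failed, error:KEY_NOT_FOUND");
    }
    return info;
  }
}
{code}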



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-315:
---
Status: Open  (was: Patch Available)

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws 
> 'KEY_NOT_FOUND' error. However, it shows up in the 'listKey' command.
> Here in this example, 'dir1' was created using ozoneFS , infoKey for the 
> directory throws error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-03 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602434#comment-16602434
 ] 

Dinesh Chitlangia commented on HDDS-98:
---

[~anu] attached patch 008 with findbug fixes. 

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-03 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-98:
--
Status: Open  (was: Patch Available)

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-03 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-98:
--
Attachment: HDDS-98.008.patch
Status: Patch Available  (was: Open)

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602431#comment-16602431
 ] 

Hadoop QA commented on HDDS-398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestPipelineClose |
|   | hadoop.ozone.freon.TestRandomKeyGenerator |
|   | hadoop.ozone.freon.TestDataValidate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602429#comment-16602429
 ] 

Hadoop QA commented on HDFS-13365:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 13s{color} 
| {color:red} HDFS-13365 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918459/HDFS-13365.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24949/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, 
> HDFS-13365.003.patch, HDFS-13365.004.patch, HDFS-13365.005.patch, 
> HDFS-13365.006.patch
>
>
> We should support HTrace and add spans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13576) RBF: Add destination path length validation for add/update mount entry

2018-09-03 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13576:

Parent Issue: HDFS-13891  (was: HDFS-12615)

> RBF: Add destination path length validation for add/update mount entry
> --
>
> Key: HDFS-13576
> URL: https://issues.apache.org/jira/browse/HDFS-13576
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Minor
>
> Currently there is no validation to check destination path length while 
> adding or updating mount entry. But while trying to create directory using 
> this mount entry 
> {noformat}
> RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
> is thrown with exception message as 
> {noformat}
> "maximum path component name limit of ... directory / is 
> exceeded: limit=255 length=1817"{noformat}
>  
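As a hedged sketch of the kind of up-front check the summary asks for (the method name and the hard-coded 255 default, which mirrors dfs.namenode.fs-limits.max-component-length, are assumptions rather than the actual patch):

{code}
import java.nio.charset.StandardCharsets;

public final class MountEntryPathValidatorSketch {

  /** Default NameNode limit (dfs.namenode.fs-limits.max-component-length). */
  private static final int MAX_COMPONENT_LENGTH = 255;

  /**
   * Reject a destination path when a mount entry is added or updated if any
   * component would exceed the NameNode limit, instead of failing later with
   * PathComponentTooLongException on mkdir.
   */
  static void validateDestination(String destination) {
    for (String component : destination.split("/")) {
      int length = component.getBytes(StandardCharsets.UTF_8).length;
      if (length > MAX_COMPONENT_LENGTH) {
        throw new IllegalArgumentException("Path component '" + component
            + "' exceeds the limit of " + MAX_COMPONENT_LENGTH
            + " bytes (length=" + length + ")");
      }
    }
  }
}
{code}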



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-09-03 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13404:

Parent Issue: HDFS-13891  (was: HDFS-12615)

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602427#comment-16602427
 ] 

Brahma Reddy Battula commented on HDFS-13365:
-

Do we require this now? The HTrace incubator project has voted to retire 
itself and won't be making further releases. There is a discussion about removing 
HTrace from Hadoop; see HADOOP-15566.

 

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, 
> HDFS-13365.003.patch, HDFS-13365.004.patch, HDFS-13365.005.patch, 
> HDFS-13365.006.patch
>
>
> We should support HTrace and add spans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602424#comment-16602424
 ] 

Brahma Reddy Battula commented on HDFS-13655:
-

Logged umbrella HDFS-13891 to stabilise the phase-1 work, which we are 
targeting for the 3.3 release.

bq.If needed, we can repeat this once that's done - if there are new misc. 
things that we don't feel can go into trunk, we create 'feature branch y' and 
set a new goal for it.

Sure, we can do that if there are more.

I discussed the same with [~elgoiri]; [~elgoiri], please correct me if I am wrong.

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that does not take path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them to work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}} and it would be a matter of querying all the subclusters 
> and aggregate the output (e.g., {{getDatanodeReport()}}.)
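As a rough sketch of the "query every subcluster and concatenate" idea described above (generic Java, not the Router's actual RPC client plumbing; the class and method names are illustrative):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public final class SubclusterAggregatorSketch {

  /**
   * Fan a path-less call out to every nameservice and concatenate the
   * per-subcluster results, in the spirit of getDatanodeReport()/refreshNodes().
   */
  static <T> List<T> invokeAll(List<String> nameservices,
      Function<String, List<T>> perNameserviceCall) throws Exception {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, nameservices.size()));
    try {
      List<Future<List<T>>> futures = new ArrayList<>();
      for (String ns : nameservices) {
        futures.add(pool.submit(
            (Callable<List<T>>) () -> perNameserviceCall.apply(ns)));
      }
      List<T> merged = new ArrayList<>();
      for (Future<List<T>> f : futures) {
        merged.addAll(f.get());   // concatenate results from each subcluster
      }
      return merged;
    } finally {
      pool.shutdown();
    }
  }
}
{code}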



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) Über-jira: RBF stabilisation phase I

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602422#comment-16602422
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

As discussed with [~elgoiri], this umbrella will focus on:

i) stabilisation of RBF, and

ii) unlinking all the phase-II jiras from HDFS-12615 and closing that 
umbrella, so that the 3.2 release ships with HDFS-12615.

iii) Improvements and new features can be pushed to the security branch if they are 
complex.

 

[~elgoiri] please correct me if I am wrong here.

 

> Über-jira: RBF stabilisation phase I  
> --
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Priority: Major
>
> RBF shipped in 3.0+ and 2.9..now its out various corner cases, scale and 
> error handling issues are surfacing. this umbrella to fix all those issues 
> before next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602412#comment-16602412
 ] 

Anu Engineer edited comment on HDDS-395 at 9/3/18 7:23 PM:
---

{quote}{quote}And until now the HADOOP_CONF_DIR was not required for java code. 
It's an environment variable which is handled only by the starter scripts
{quote}{quote}
I might be misunderstanding this. If this variable is not set, we will not be 
able to find any config values, and since we have a set of user-configured yet 
required values, the cluster will fail to start. From that point of view, 
if you have an HDFS/Ozone cluster without this value being configured, shouldn't 
we fail?

 
{quote}{quote} For example the Configuration class doesn't require it. (Hadoop 
itself could be started without setting HADOOP_CONF_DIR if the configuration 
files are added to the classpath.)
{quote}{quote}
I agree that the Configuration class does not require it. That is *probably* 
because it is a common class and used by many other applications in the Hadoop 
family. For Ozone this would pretty much translate to a fatal failure. Yes, until 
now we would fail with an indirect error message.

 
{quote}{color:#00}Just I am more permissive about the optional 
configuration.{color}
{quote}
I do agree with this. I see that you are saying that this *required* but not 
*asserted*-in-code assertion should live in some other location and not in this 
class. I agree with that; there are probably much better places to assert this. 
Some more generic place like the OzoneConfiguration class should make this 
assertion, rather than DBConfigFromFile. The only argument for this class is that 
it is also a config read operation, very similar to OzoneConfig, but it can 
survive without those values (optional config); still, the fact remains that we 
(Ozone) have a hard-coded dependency on HADOOP_CONF_PATH, and if it is not 
configured, we will fail.

 

 


was (Author: anu):
{quote}bq.And until now the HADOOP_CONF_DIR was not required for java code. 
It's an environment variable which is handled only by the starter scripts
{quote}
I might be misunderstanding this. If this variable is not set, we will not be 
able to find any config values, and since we have a set of user configured, yet 
required values that the cluster will fail to start. From that point of view, 
if you have an HDFS/Ozone cluster without this value being configured shouldn't 
we fail?

 
{quote}bq. For example the Configuration class doesn't require it. (Hadoop 
itself could be started without setting HADOOP_CONF_DIR if the configuration 
files are added to the classpath.)
{quote}
I agree the that Configuration class does not require it. That is *probably* 
because it is a common class and used by many other applications in the Hadoop 
family. For Ozone this would pretty much translate to a fatal failure. Yes, we 
would fail with a indirect error message until now.

 

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> 

[jira] [Updated] (HDFS-13891) Über-jira: RBF stabilisation phase I

2018-09-03 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13891:

Description: RBF shipped in 3.0+ and 2.9..now its out various corner cases, 
scale and error handling issues are surfacing. this umbrella to fix all those 
issues before next 3.3 release.  (was: RBF shipped in 3.0 and 2.9..now its out 
various corner cases, scale and error handling issues are surfacing. this 
umbrella to fix all those issues before next 3.3 release.)

> Über-jira: RBF stabilisation phase I  
> --
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Priority: Major
>
> RBF shipped in 3.0+ and 2.9..now its out various corner cases, scale and 
> error handling issues are surfacing. this umbrella to fix all those issues 
> before next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13891) Über-jira: RBF stabilisation phase I

2018-09-03 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-13891:
---

 Summary: Über-jira: RBF stabilisation phase I  
 Key: HDFS-13891
 URL: https://issues.apache.org/jira/browse/HDFS-13891
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Brahma Reddy Battula


RBF shipped in 3.0 and 2.9. Now that it is out, various corner-case, scale, and 
error-handling issues are surfacing. This umbrella is to fix all those issues before 
the next 3.3 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602412#comment-16602412
 ] 

Anu Engineer commented on HDDS-395:
---

{quote}bq.And until now the HADOOP_CONF_DIR was not required for java code. 
It's an environment variable which is handled only by the starter scripts
{quote}
I might be misunderstanding this. If this variable is not set, we will not be 
able to find any config values, and since we have a set of user-configured yet 
required values, the cluster will fail to start. From that point of view, 
if you have an HDFS/Ozone cluster without this value being configured, shouldn't 
we fail?

 
{quote}bq. For example the Configuration class doesn't require it. (Hadoop 
itself could be started without setting HADOOP_CONF_DIR if the configuration 
files are added to the classpath.)
{quote}
I agree that the Configuration class does not require it. That is *probably* 
because it is a common class and used by many other applications in the Hadoop 
family. For Ozone this would pretty much translate to a fatal failure. Yes, until 
now we would fail with an indirect error message.

 

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602409#comment-16602409
 ] 

Elek, Marton commented on HDDS-395:
---

Yes, I agree. But only for the *required* configuration keys. I can't see the 
benefit of throwing an exception if an optional configuration file is missing. 
(And in this case we throw an exception if the directory of the optional file 
is missing. Right after the exception we would check whether the conf file exists 
anyway.)

And until now HADOOP_CONF_DIR was not required by the Java code. It's an 
environment variable which is handled only by the starter scripts. For example, 
the Configuration class doesn't require it. (Hadoop itself could be started 
without setting HADOOP_CONF_DIR if the configuration files are added to the 
classpath.)

But I am not against setting the Java property in all the integration tests. 
I am just more permissive about the optional configuration...
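For what it's worth, a minimal sketch of the "optional config stays optional" reading of this thread; this is not the DBConfigFromFile code, just an illustration of falling back to defaults instead of throwing when HADOOP_CONF_DIR is unset:

{code}
import java.io.File;

public final class OptionalDbConfigSketch {

  /**
   * Best-effort lookup of an optional RocksDB .ini file: if HADOOP_CONF_DIR
   * is not set, or the file does not exist, return null so the caller can
   * fall back to the default DB profile instead of failing with IOException.
   */
  static File findOptionalConfigFile(String fileName) {
    String confDir = System.getenv("HADOOP_CONF_DIR");
    if (confDir == null || confDir.isEmpty()) {
      return null;   // optional input missing: use built-in defaults
    }
    File candidate = new File(confDir, fileName);
    return candidate.isFile() ? candidate : null;
  }
}
{code}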

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602407#comment-16602407
 ] 

Hadoop QA commented on HDDS-75:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 24s{color} | {color:orange} root: The patch generated 3 new + 11 unchanged - 
0 fixed = 14 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
20s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker 

[jira] [Updated] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-398:
--
Attachment: HDDS-398.002.patch

> Support multiple tests in freon
> ---
>
> Key: HDDS-398
> URL: https://issues.apache.org/jira/browse/HDDS-398
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-398.001.patch, HDDS-398.002.patch
>
>
> Current freon supports only one kind of tests (creates volumes/buckets and 
> generates random keys).
> To ensure the correctness of ozone we need to use multiple and different kind 
> of tests (for example: test only ozone manager or just a datanode).
> In this patch I propose to use the picocli based simplified command line 
> which is introduced by HDDS-379 to make it easier to add more freon tests.
> This patch is just about the cli cleanup, more freon tests could be added in 
> following Jira where the progress calculation and metrics handling also could 
> be unified.
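As a sketch of the picocli-based layout described above (the subcommand and option names here are illustrative, not the attached patch, and the main method assumes picocli 4.x's execute() entry point):

{code}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

@Command(name = "freon",
    description = "Ozone load generator (sketch)",
    subcommands = {FreonSketch.RandomKeys.class, FreonSketch.DatanodeWriter.class})
public class FreonSketch {

  @Command(name = "randomkeys",
      description = "Create volumes/buckets and generate random keys")
  static class RandomKeys implements Callable<Integer> {
    @Option(names = "--numOfKeys", description = "Number of keys per bucket")
    private int numOfKeys = 10;

    @Override
    public Integer call() {
      System.out.println("would generate " + numOfKeys + " keys per bucket");
      return 0;
    }
  }

  @Command(name = "datanode-chunks",
      description = "Exercise only the datanode chunk-write path")
  static class DatanodeWriter implements Callable<Integer> {
    @Override
    public Integer call() {
      System.out.println("would write chunks directly against a datanode");
      return 0;
    }
  }

  public static void main(String[] args) {
    // execute() dispatches to the selected subcommand and returns its exit code.
    System.exit(new CommandLine(new FreonSketch()).execute(args));
  }
}
{code}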



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602382#comment-16602382
 ] 

Anu Engineer commented on HDDS-395:
---

If the config value is not set up, the right thing to do in a cluster deployment 
is to throw. Unit tests are the only case where HADOOP_CONF_DIR not being set is 
acceptable, and that can be easily remedied by setting the right value, even an 
invalid one.

 

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602381#comment-16602381
 ] 

Elek, Marton commented on HDDS-395:
---

What about removing the exception from DBConfigFromFile.getConfigLocation()? As 
this is only used to look up a completely optional file, I don't think we need to 
throw an exception. I would fall back to System.getProperty("user.dir") when 
HADOOP_CONF_DIR is unset.

An alternative approach is to load the config file from the classpath. 
HADOOP_CONF_DIR is added to the classpath by the startup scripts, so we don't 
need to read this env variable from the Java code.
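
For illustration, the fallback described above could look roughly like this; the wrapper class is only a sketch and is not the actual DBConfigFromFile code:

{code:java}
import java.io.File;

// Minimal sketch of the suggested behaviour: the rocksdb config file is
// optional, so fall back to the working directory instead of throwing
// when HADOOP_CONF_DIR is unset.
public final class ConfigLocationSketch {

  public static File getConfigLocation() {
    String confDir = System.getenv("HADOOP_CONF_DIR");
    if (confDir == null || confDir.isEmpty()) {
      confDir = System.getProperty("user.dir");
    }
    return new File(confDir);
  }
}
{code}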

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602362#comment-16602362
 ] 

Elek, Marton commented on HDDS-75:
--

Fair enough. I moved it to the {{KeyValueHandler}}.

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch, 
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch, 
> HDDS-75.013.patch, HDDS-75.014.patch, HDDS-75.015.patch, 
> HDFS-11686-HDFS-7240.001.patch, HDFS-11686-HDFS-7240.002.patch, 
> HDFS-11686-HDFS-7240.003.patch, HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> reads the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination. 
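
For illustration, the two-method source interface described in the quoted text could be shaped roughly as below; the interface name and exact signatures are assumptions, not the committed API:

{code:java}
import java.io.IOException;
import java.io.OutputStream;

// Illustrative shape of the container copy source; the real patch may use
// container IDs instead of names and different method signatures.
public interface ContainerCopySource {

  /** Called right after the container is closed; may pre-create a tarball. */
  void prepare(String containerName) throws IOException;

  /** Streams the (possibly compressed) container data to the destination. */
  void copyData(String containerName, OutputStream destination)
      throws IOException;
}
{code}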



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-75:
-
Attachment: HDDS-75.015.patch

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch, 
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch, 
> HDDS-75.013.patch, HDDS-75.014.patch, HDDS-75.015.patch, 
> HDFS-11686-HDFS-7240.001.patch, HDFS-11686-HDFS-7240.002.patch, 
> HDFS-11686-HDFS-7240.003.patch, HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> reads the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602360#comment-16602360
 ] 

Hadoop QA commented on HDDS-398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test hadoop-ozone/acceptance-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 58s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} acceptance-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.freon.TestRandomKeyGenerator |
|   | hadoop.ozone.freon.TestDataValidate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-398 |
| 

[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602322#comment-16602322
 ] 

Steve Loughran commented on HDFS-13713:
---

Ewan, here's my revision of the .md file; I've forgotten too much of the detail of 
the notation to be confident it's good to go as is, but you can see what we can 
do for a model here: store the pending uploads as part of the FS state, and 
then use that in the preconditions and postconditions. This is roughly the same 
as I did in the TLA+ spec for HADOOP-13786, except more broadly readable.

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-13713:
--
Attachment: multipartuploader.md

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-395:
-

Assignee: Sree Vaddi

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Sree Vaddi
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-03 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602319#comment-16602319
 ] 

Ayush Saxena commented on HDFS-13862:
-

Thanks [~SoumyaPN] for raising the issue.
The logs for the safemode and nameservice commands were missing; I have added them.
The first part, the missing destination entry name in the add command log, looks 
deliberate: it was probably done to avoid exposing the actual location in the 
cluster.
I have uploaded the patch with logs for the other commands. 
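
For illustration, the change amounts to log statements of roughly this shape; the class and method signatures below are simplified placeholders, not the exact RBF code:

{code:java}
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of the previously silent admin operations now being
// logged; names and signatures are illustrative only.
public class RouterAdminLoggingSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(RouterAdminLoggingSketch.class);

  public void enterSafeMode() throws IOException {
    LOG.info("Entering safe mode on the router");   // safemode enter
    // ... existing safe mode handling ...
  }

  public void enableNameservice(String nsId) throws IOException {
    LOG.info("Enabling nameservice {}", nsId);      // nameservice enable
    // ... existing nameservice handling ...
  }
}
{code}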

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test Steps :
> Below commands are not getting captured in the Router logs.
>  # Destination entry name in the add command. Log says "Added new mount point 
> /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-03 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13862:

Attachment: HDFS-13862-01.patch

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test Steps :
> Below commands are not getting captured in the Router logs.
>  # Destination entry name in the add command. Log says "Added new mount point 
> /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-03 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13862:

Attachment: (was: HDFS-13862-01.patch)

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
>
> Test Steps :
> Below commands are not getting captured in the Router logs.
>  # Destination entry name in the add command. Log says "Added new mount point 
> /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602315#comment-16602315
 ] 

Hadoop QA commented on HDDS-75:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 2 new + 6 unchanged - 
0 fixed = 8 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Updated] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-395:
--
Summary: TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB 
config"  (was: Datanode initialization fails with "Unable to read ROCKDB 
config")

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.<init>(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-03 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13862:

Attachment: HDFS-13862-01.patch

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test Steps :
> Below commands are not getting captured in the Router logs.
>  # Destination entry name in the add command. Log says "Added new mount point 
> /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt

2018-09-03 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13840:

Summary: RBW Blocks which are having less GS should be added to Corrupt  
(was: RBW Blocks which are having less GS shouldn't added on DN restart)

> RBW Blocks which are having less GS should be added to Corrupt
> --
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840.patch
>
>
> # Start two DN's  (DN1,DN2).
>  # Write fileA with rep=2 (don't close)
>  # Stop DN1.
>  # Write some data to fileA.
>  # restart the DN1
>  # Get the blocklocations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations. IMO, RWR blocks which have a lower GS shouldn't be added, as they 
> give a false positive (the read would fail anyway since its genstamp is lower).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS shouldn't added on DN restart

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602305#comment-16602305
 ] 

Brahma Reddy Battula commented on HDFS-13840:
-

The test failure is unrelated; there is already a JIRA tracking this test failure: 
HDFS-9243.

> RBW Blocks which are having less GS shouldn't added on DN restart
> -
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840.patch
>
>
> # Start two DN's  (DN1,DN2).
>  # Write fileA with rep=2 (don't close)
>  # Stop DN1.
>  # Write some data to fileA.
>  # restart the DN1
>  # Get the blocklocations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations. IMO, RWR blocks which have a lower GS shouldn't be added, as they 
> give a false positive (the read would fail anyway since its genstamp is lower).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-398:
--
Fix Version/s: 0.2.1
   Status: Patch Available  (was: Open)

Unfortunately TestFreon is currently failing on trunk, and it has multiple 
problems. I updated the unit test and now it fails with the same error as on 
trunk. I will create separate JIRAs to fix the freon tests.

> Support multiple tests in freon
> ---
>
> Key: HDDS-398
> URL: https://issues.apache.org/jira/browse/HDDS-398
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-398.001.patch
>
>
> Current freon supports only one kind of test (it creates volumes/buckets and 
> generates random keys).
> To ensure the correctness of ozone we need to use multiple, different kinds of 
> tests (for example: test only the ozone manager, or just a datanode).
> In this patch I propose to use the picocli-based simplified command line 
> which was introduced by HDDS-379 to make it easier to add more freon tests.
> This patch is just the cli cleanup; more freon tests could be added in 
> follow-up JIRAs, where the progress calculation and metrics handling could 
> also be unified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-398:
--
Attachment: HDDS-398.001.patch

> Support multiple tests in freon
> ---
>
> Key: HDDS-398
> URL: https://issues.apache.org/jira/browse/HDDS-398
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDDS-398.001.patch
>
>
> Current freon supports only one kind of test (it creates volumes/buckets and 
> generates random keys).
> To ensure the correctness of ozone we need to use multiple, different kinds of 
> tests (for example: test only the ozone manager, or just a datanode).
> In this patch I propose to use the picocli-based simplified command line 
> which was introduced by HDDS-379 to make it easier to add more freon tests.
> This patch is just the cli cleanup; more freon tests could be added in 
> follow-up JIRAs, where the progress calculation and metrics handling could 
> also be unified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS shouldn't added on DN restart

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602290#comment-16602290
 ] 

Hadoop QA commented on HDFS-13840:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938135/HDFS-13840-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 83c0c03fa037 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 211034a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24948/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24948/testReport/ |
| Max. process+thread count | 3307 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24948/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602274#comment-16602274
 ] 

Nanda kumar commented on HDDS-75:
-


We already have a way to get the container handler for a given container type 
inside {{ReplicateContainerCommandHandler#handle}}. We just have to move 
{{importContainer}} logic into {{KeyValueHandler}} and add an abstract method 
in {{Handler}}.

{code}
CompletableFuture<Void> result =
    tempTarFile.thenAccept(path -> {
      LOG.info("Container {} is downloaded, starting to import.",
          containerID);
      try (FileInputStream tempContainerTarStream =
          new FileInputStream(path.toFile())) {
        byte[] containerDescriptorYaml =
            packer.unpackContainerDescriptor(tempContainerTarStream);
        ContainerData originalContainerData =
            ContainerDataYaml.readContainer(containerDescriptorYaml);

        Handler handler = container.getDispatcher().getHandler(
            originalContainerData.getContainerType());
        handler.importContainer(containerID, path);
      } catch (Exception ex) {
        LOG.error(
            "Container import is failed and the downloaded file can't be "
                + "deleted: "
                + path.toAbsolutePath().toString());
      }
    });
{code}
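
The abstract hook mentioned above could then be as small as the following; the signature is only a guess inferred from the call site in the snippet, not the committed one:

{code:java}
import java.io.IOException;
import java.nio.file.Path;

// Stripped-down sketch showing only the new hook on the existing Handler
// class; KeyValueHandler would provide the real implementation. Parameter
// types are inferred from the call site above and may differ in the patch.
public abstract class Handler {

  public abstract void importContainer(long containerID, Path containerTarPath)
      throws IOException;
}
{code}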

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch, 
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch, 
> HDDS-75.013.patch, HDDS-75.014.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to an other node. It could be used by higher level components such 
> like the scm which ensures that the replication rules are satisfied.
> The CopyContainer by default works in pull model: the destination datanode 
> could read the raw data from one or more source datanode where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two method:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> Prepare phase is called right after the closing event and the implementation 
> could prepare for the copy by precreate a compressed tar file from the 
> container data. As a first step we can provide a simple implementation which 
> creates the tar files on demand.
> The destination datanode should retry the copy if the container in the source 
> node not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore  REST API (similar to the distinctions between 
> HDFS-7240 and HDFS-13074) 
> Long-term the HTTP endpoint should support Http-Range requests: One container 
> could be copied from multiple source by the destination. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2018-09-03 Thread Jepson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated HDFS-12936:
--
Description: 
I configured the max user processes to 65535 for every user, and the datanode 
memory is 8G.
 When a lot of data was being written, the datanode was shut down.
 But I can see the memory usage is only < 1000M.
 Please see the attachment. !Datanode Memory.png!

*DataNode shutdown error log:*
{code:java}
2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
PacketResponder: 
BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-12-17 23:58:31,425 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:01,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:05,520 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 src: 
/192.168.17.54:40478 dest: /192.168.17.48:50010

{code}

  was:
I configured the max user processes to 65535 for every user, and the datanode 
memory is 8G.
When a lot of data was being written, the datanode was shut down.
But I can see the memory usage is only < 1000M.
Please see https://pan.baidu.com/s/1o7BE0cy

*DataNode shutdown error log:*  
{code:java}
2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
PacketResponder: 
BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-12-17 23:58:31,425 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:01,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:05,520 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 src: 
/192.168.17.54:40478 dest: /192.168.17.48:50010

{code}
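As a side note, this error is about the OS per-user thread/process limit rather than the JVM heap, which is why the memory graph stays under 1000M. A minimal, self-contained demonstration (not DataNode code) of how this OOM is produced:
{code:java}
/**
 * Demonstration only: keeps starting sleeping daemon threads until the OS
 * per-user thread/process limit (ulimit -u / nproc) is hit, which raises
 * "java.lang.OutOfMemoryError: unable to create new native thread" even
 * though heap usage stays low. The fix is raising that limit for the
 * DataNode user (or lowering thread usage), not increasing the heap.
 */
public class NativeThreadLimitDemo {
  public static void main(String[] args) {
    int count = 0;
    try {
      while (true) {
        Thread t = new Thread(() -> {
          try {
            Thread.sleep(Long.MAX_VALUE);
          } catch (InterruptedException ignored) {
          }
        });
        t.setDaemon(true);
        t.start();            // consumes a native thread slot, not heap
        count++;
      }
    } catch (OutOfMemoryError e) {
      System.err.println("Created " + count + " threads before: " + e);
    }
  }
}
{code}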






> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>Priority: Major
> Attachments: Datanode Memory.png
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode 
> memory is 8G.
>  When a 

[jira] [Updated] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread

2018-09-03 Thread Jepson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated HDFS-12936:
--
Attachment: Datanode Memory.png

> java.lang.OutOfMemoryError: unable to create new native thread
> --
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
>Reporter: Jepson
>Priority: Major
> Attachments: Datanode Memory.png
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode 
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see the memory usage is only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> PacketResponder: 
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. 
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602263#comment-16602263
 ] 

Nanda kumar commented on HDDS-315:
--

[~dineshchitlangia], thanks for working on this. The patch is not applying 
after HDDS-357. Can you please rebase on top of latest changes on trunk?

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-315.001.patch
>
>
> infoKey for directories created using ozoneFs does not work and throws 
> 'KEY_NOT_FOUND' error. However, it shows up in the 'listKey' command.
> Here in this example, 'dir1' was created using ozoneFs, and infoKey for the 
> directory throws an error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13845) RBF: The default MountTableResolver should fail resolving multi-destination paths

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602258#comment-16602258
 ] 

Brahma Reddy Battula commented on HDFS-13845:
-

Patch lgtm, apart from the checkstyle comments. By the way, it's a nice catch.

 

FYI, HDFS-13857 might need a rebase after this one is committed (if this goes 
first), as both throw IOException from the same method.
{code:java}
+ public PathLocation lookupLocation(final String path) throws IOException { 
{code}
 

> RBF: The default MountTableResolver should fail resolving multi-destination 
> paths
> -
>
> Key: HDFS-13845
> URL: https://issues.apache.org/jira/browse/HDFS-13845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13845.001.patch, HDFS-13845.002.patch, 
> HDFS-13845.003.patch
>
>
> When we use the default MountTableResolver to resolve the path, we cannot get 
> the destination paths for the default DestinationOrder.HASH. 
> {code:java}
> // Some comments here
> private static PathLocation buildLocation(
>   ..
> List<RemoteLocation> locations = new LinkedList<>();
> for (RemoteLocation oneDst : entry.getDestinations()) {
>   String nsId = oneDst.getNameserviceId();
>   String dest = oneDst.getDest();
>   String newPath = dest;
>   if (!newPath.endsWith(Path.SEPARATOR) && !remainingPath.isEmpty()) {
> newPath += Path.SEPARATOR;
>   }
>   newPath += remainingPath;
>   RemoteLocation remoteLocation = new RemoteLocation(nsId, newPath, path);
>   locations.add(remoteLocation);
> }
> DestinationOrder order = entry.getDestOrder();
> return new PathLocation(srcPath, locations, order);
>   }
> {code}
> The default order will be HASH, but the HashFirstResolver will not be invoked 
> to order the locations.
> It is ambiguous that the MountTableResolver shows the HASH order in the web UI 
> for a multi-destination path, but we cannot get the result.
> In my opinion, the MountTableResolver should be a simple resolver that 
> implements 1-to-1 resolution, not 1-to-n destinations. So we should check 
> buildLocation: if the entry has multiple destinations, we should reject it.
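A minimal sketch of the proposed guard, assuming the entry/destination types used in the snippet above (class and method names here are illustrative):
{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Sketch of the proposed check in buildLocation(): the default
 * MountTableResolver only supports 1-to-1 resolution, so entries with
 * multiple destinations are rejected instead of silently returning an
 * unordered location.
 */
final class SingleDestinationCheck {
  static void check(List<?> destinations, String srcPath) throws IOException {
    if (destinations.size() > 1) {
      throw new IOException("MountTableResolver cannot resolve paths with"
          + " multiple destinations: " + srcPath);
    }
  }
}
{code}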



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-03 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602256#comment-16602256
 ] 

Lokesh Jain commented on HDDS-358:
--

[~anu] Can you please rebase the patch? 

The patch looks good to me. Please find my comments below.
 # KeyDeletingService - We can have the logs in KeyDeletingTask and convert 
them to debug level instead. We can log the case when the block deletion 
result from SCM is a failure, and also log the number of keys being deleted 
by the service.
 # We also need to start the KeyDeletingService and the block deletion tests. 
We can do that as part of a separate Jira though.
 # OmMetadataManagerImpl:50 - Star import

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-343) Containers are stuck in closing state in scm

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602242#comment-16602242
 ] 

Nanda kumar commented on HDDS-343:
--

Thanks [~elek] for the contribution. I have committed this to trunk.

> Containers are stuck in closing state in scm
> 
>
> Key: HDDS-343
> URL: https://issues.apache.org/jira/browse/HDDS-343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-343.001.patch, HDDS-343.002.patch, 
> HDDS-343.003.patch, HDDS-343.004.patch, HDDS-343.005.patch, HDDS-343.006.patch
>
>
> Containers cannot be closed currently.
> The datanode is closing the containers and sending the CLOSED state in the 
> container report, but SCM doesn't register that the state is closed and 
> keeps sending the close command again and again.
> I think ContainerMapping.processContainerReport should be improved.
> {code}
> scm_1   | --> RPC message request: SCMHeartbeatRequestProto from 
> 172.25.0.2:33912
> scm_1   | datanodeDetails {
> scm_1   |   uuid: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   |   ipAddress: "172.25.0.2"
> scm_1   |   hostName: "365fd1f44f0b"
> scm_1   |   ports {
> scm_1   | name: "STANDALONE"
> scm_1   | value: 9859
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "RATIS"
> scm_1   | value: 9858
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "REST"
> scm_1   | value: 9880
> scm_1   |   }
> scm_1   | }
> scm_1   | nodeReport {
> scm_1   |   storageReport {
> scm_1   | storageUuid: "DS-61e76107-85c5-437a-95a7-aeb8b3e7827f"
> scm_1   | storageLocation: "/tmp/hadoop-hadoop/dfs/data"
> scm_1   | capacity: 491630870528
> scm_1   | scmUsed: 2708828160
> scm_1   | remaining: 24263614464
> scm_1   | storageType: DISK
> scm_1   | failed: false
> scm_1   |   }
> scm_1   | }
> scm_1   | containerReport {
> scm_1   |   reports {
> scm_1   | containerID: 1
> scm_1   | used: 1061158912
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1061158912
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 2
> scm_1   | used: 1048576000
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1048576000
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 3
> scm_1   | used: 511705088
> scm_1   | readCount: 0
> scm_1   | writeCount: 32
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 511705088
> scm_1   | state: OPEN
> scm_1   |   }
> scm_1   | }
> scm_1   | commandStatusReport {
> scm_1   | }
> scm_1   | containerActions {
> scm_1   |   containerActions {
> scm_1   | containerID: 1
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   |   containerActions {
> scm_1   | containerID: 2
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   | }
> scm_1   | 
> scm_1   | --> RPC message response: SCMHeartbeatRequestProto to 
> 172.25.0.2:33912
> scm_1   | datanodeUUID: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   | 
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 1
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 1 is in CLOSING state and need not be closed.
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 2
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 2 is in CLOSING state and need not be closed.
> {code}
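A hedged sketch of the processContainerReport improvement suggested above; the enum, map and method names below are placeholders, not the actual SCM classes:
{code:java}
import java.util.Map;

/**
 * Sketch only: when a replica is reported CLOSED while SCM still tracks
 * the container as CLOSING, advance the SCM-side state so the close
 * command is not re-sent on every heartbeat.
 */
final class ContainerReportSketch {

  enum LifeCycleState { OPEN, CLOSING, CLOSED }

  static void onReplicaReport(long containerId, LifeCycleState reported,
      Map<Long, LifeCycleState> scmContainerState) {
    LifeCycleState current = scmContainerState.get(containerId);
    if (reported == LifeCycleState.CLOSED
        && current == LifeCycleState.CLOSING) {
      scmContainerState.put(containerId, LifeCycleState.CLOSED);
    }
  }
}
{code}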



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Updated] (HDDS-343) Containers are stuck in closing state in scm

2018-09-03 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-343:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Containers are stuck in closing state in scm
> 
>
> Key: HDDS-343
> URL: https://issues.apache.org/jira/browse/HDDS-343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-343.001.patch, HDDS-343.002.patch, 
> HDDS-343.003.patch, HDDS-343.004.patch, HDDS-343.005.patch, HDDS-343.006.patch
>
>
> Containers cannot be closed currently.
> The datanode is closing the containers and sending the CLOSED state in the 
> container report, but SCM doesn't register that the state is closed and 
> keeps sending the close command again and again.
> I think ContainerMapping.processContainerReport should be improved.
> {code}
> scm_1   | --> RPC message request: SCMHeartbeatRequestProto from 
> 172.25.0.2:33912
> scm_1   | datanodeDetails {
> scm_1   |   uuid: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   |   ipAddress: "172.25.0.2"
> scm_1   |   hostName: "365fd1f44f0b"
> scm_1   |   ports {
> scm_1   | name: "STANDALONE"
> scm_1   | value: 9859
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "RATIS"
> scm_1   | value: 9858
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "REST"
> scm_1   | value: 9880
> scm_1   |   }
> scm_1   | }
> scm_1   | nodeReport {
> scm_1   |   storageReport {
> scm_1   | storageUuid: "DS-61e76107-85c5-437a-95a7-aeb8b3e7827f"
> scm_1   | storageLocation: "/tmp/hadoop-hadoop/dfs/data"
> scm_1   | capacity: 491630870528
> scm_1   | scmUsed: 2708828160
> scm_1   | remaining: 24263614464
> scm_1   | storageType: DISK
> scm_1   | failed: false
> scm_1   |   }
> scm_1   | }
> scm_1   | containerReport {
> scm_1   |   reports {
> scm_1   | containerID: 1
> scm_1   | used: 1061158912
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1061158912
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 2
> scm_1   | used: 1048576000
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1048576000
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 3
> scm_1   | used: 511705088
> scm_1   | readCount: 0
> scm_1   | writeCount: 32
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 511705088
> scm_1   | state: OPEN
> scm_1   |   }
> scm_1   | }
> scm_1   | commandStatusReport {
> scm_1   | }
> scm_1   | containerActions {
> scm_1   |   containerActions {
> scm_1   | containerID: 1
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   |   containerActions {
> scm_1   | containerID: 2
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   | }
> scm_1   | 
> scm_1   | --> RPC message response: SCMHeartbeatRequestProto to 
> 172.25.0.2:33912
> scm_1   | datanodeUUID: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   | 
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 1
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 1 is in CLOSING state and need not be closed.
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 2
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 2 is in CLOSING state and need not be closed.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-343) Containers are stuck in closing state in scm

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602240#comment-16602240
 ] 

Nanda kumar commented on HDDS-343:
--

+1, LGTM. I will commit this shortly.

> Containers are stuck in closing state in scm
> 
>
> Key: HDDS-343
> URL: https://issues.apache.org/jira/browse/HDDS-343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-343.001.patch, HDDS-343.002.patch, 
> HDDS-343.003.patch, HDDS-343.004.patch, HDDS-343.005.patch, HDDS-343.006.patch
>
>
> Containers cannot be closed currently.
> The datanode is closing the containers and sending the CLOSED state in the 
> container report, but SCM doesn't register that the state is closed and 
> keeps sending the close command again and again.
> I think ContainerMapping.processContainerReport should be improved.
> {code}
> scm_1   | --> RPC message request: SCMHeartbeatRequestProto from 
> 172.25.0.2:33912
> scm_1   | datanodeDetails {
> scm_1   |   uuid: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   |   ipAddress: "172.25.0.2"
> scm_1   |   hostName: "365fd1f44f0b"
> scm_1   |   ports {
> scm_1   | name: "STANDALONE"
> scm_1   | value: 9859
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "RATIS"
> scm_1   | value: 9858
> scm_1   |   }
> scm_1   |   ports {
> scm_1   | name: "REST"
> scm_1   | value: 9880
> scm_1   |   }
> scm_1   | }
> scm_1   | nodeReport {
> scm_1   |   storageReport {
> scm_1   | storageUuid: "DS-61e76107-85c5-437a-95a7-aeb8b3e7827f"
> scm_1   | storageLocation: "/tmp/hadoop-hadoop/dfs/data"
> scm_1   | capacity: 491630870528
> scm_1   | scmUsed: 2708828160
> scm_1   | remaining: 24263614464
> scm_1   | storageType: DISK
> scm_1   | failed: false
> scm_1   |   }
> scm_1   | }
> scm_1   | containerReport {
> scm_1   |   reports {
> scm_1   | containerID: 1
> scm_1   | used: 1061158912
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1061158912
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 2
> scm_1   | used: 1048576000
> scm_1   | readCount: 0
> scm_1   | writeCount: 64
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 1048576000
> scm_1   | state: CLOSED
> scm_1   |   }
> scm_1   |   reports {
> scm_1   | containerID: 3
> scm_1   | used: 511705088
> scm_1   | readCount: 0
> scm_1   | writeCount: 32
> scm_1   | readBytes: 0
> scm_1   | writeBytes: 511705088
> scm_1   | state: OPEN
> scm_1   |   }
> scm_1   | }
> scm_1   | commandStatusReport {
> scm_1   | }
> scm_1   | containerActions {
> scm_1   |   containerActions {
> scm_1   | containerID: 1
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   |   containerActions {
> scm_1   | containerID: 2
> scm_1   | action: CLOSE
> scm_1   | reason: CONTAINER_FULL
> scm_1   |   }
> scm_1   | }
> scm_1   | 
> scm_1   | --> RPC message response: SCMHeartbeatRequestProto to 
> 172.25.0.2:33912
> scm_1   | datanodeUUID: "9c8f80bd-9424-4d74-99ef-a2bd58e66d7f"
> scm_1   | 
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 1
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 1 is in CLOSING state and need not be closed.
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:56 - 
> Close container Event triggered for container : 2
> scm_1   | 2018-08-08 16:22:51 INFO  CloseContainerEventHandler:105 - 
> container with id : 2 is in CLOSING state and need not be closed.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-398:
-

 Summary: Support multiple tests in freon
 Key: HDDS-398
 URL: https://issues.apache.org/jira/browse/HDDS-398
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Elek, Marton
Assignee: Elek, Marton


Currently freon supports only one kind of test (it creates volumes/buckets and 
generates random keys).

To ensure the correctness of Ozone we need multiple, different kinds of tests 
(for example: testing only the Ozone Manager, or just a datanode).

In this patch I propose to use the picocli-based simplified command line 
introduced by HDDS-379 to make it easier to add more freon tests.

This patch is just about the CLI cleanup; more freon tests could be added in 
follow-up Jiras, where the progress calculation and metrics handling could 
also be unified.
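A rough illustration of what such a picocli-based layout could look like (written against a recent picocli; the class, command and option names below are made up for the sketch, the real subcommands would be defined in the patch):
{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;

/** Sketch of a freon entry point with multiple sub-tests. */
@Command(name = "freon",
    description = "Load generator for Ozone",
    subcommands = {RandomKeys.class, OmOnly.class})
public class FreonSketch {
  public static void main(String[] args) {
    new CommandLine(new FreonSketch()).execute(args);
  }
}

@Command(name = "randomkeys",
    description = "Create volumes/buckets and generate random keys")
class RandomKeys implements Callable<Void> {
  @Override
  public Void call() {
    // the current freon behaviour would live here
    return null;
  }
}

@Command(name = "om-only",
    description = "Exercise only the Ozone Manager")
class OmOnly implements Callable<Void> {
  @Override
  public Void call() {
    return null;
  }
}
{code}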



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602225#comment-16602225
 ] 

Nanda kumar commented on HDDS-358:
--

[~anu], the patch is not applying. Can you please rebase?

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-03 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-358:
-
Status: Open  (was: Patch Available)

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-397) Handle deletion for keys with no blocks

2018-09-03 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-397:


 Summary: Handle deletion for keys with no blocks
 Key: HDDS-397
 URL: https://issues.apache.org/jira/browse/HDDS-397
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


Keys which do not contain blocks can be deleted directly from OzoneManager.
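A small sketch of the idea, with placeholder types (the real code would work against the OM key table and KeyInfo objects):
{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Sketch: a key with no block locations has nothing to clean up on
 * SCM/datanodes, so it can be purged from the OM metadata directly
 * instead of being handed to the block deleting service.
 */
final class EmptyKeyDeletion {

  interface KeyTable {
    List<String> getBlockIds(String keyName) throws IOException;
    void delete(String keyName) throws IOException;
  }

  static boolean deleteIfEmpty(KeyTable table, String keyName)
      throws IOException {
    if (table.getBlockIds(keyName).isEmpty()) {
      table.delete(keyName);   // no blocks, no SCM round trip needed
      return true;
    }
    return false;              // fall back to the normal deletion path
  }
}
{code}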



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-03 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-358:
-
Status: Patch Available  (was: Open)

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-357) Use DBStore and TableStore for OzoneManager non-background service

2018-09-03 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602220#comment-16602220
 ] 

Nanda kumar commented on HDDS-357:
--

Thanks [~anu] for the review and commit and also for fixing the checkstyle 
issues.

> Use DBStore and TableStore for OzoneManager non-background service
> --
>
> Key: HDDS-357
> URL: https://issues.apache.org/jira/browse/HDDS-357
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-357.000.patch, HDDS-357.001.patch, 
> HDDS-357.002.patch
>
>
> {{OzoneManager}} uses DBStore to store its metadata. HDDS-356 introduced a new 
> implementation, RockDBStore, which has support for ColumnFamily-based 
> storage. This jira aims to make use of the new RockDBStore implementation in 
> OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602197#comment-16602197
 ] 

Hadoop QA commented on HDDS-325:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 30s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 38s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 17s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.command.TestCommandStatusReportHandler |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602193#comment-16602193
 ] 

Hadoop QA commented on HDDS-75:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 30s{color} | {color:orange} root: The patch generated 2 new + 6 unchanged - 
0 fixed = 8 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} integration-test in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-13840) RBW Blocks which are having less GS shouldn't added on DN restart

2018-09-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602176#comment-16602176
 ] 

Brahma Reddy Battula commented on HDFS-13840:
-

[~surendrasingh] thanks for taking a look.

bq. I think this should also be handled for regular report.

Yes, I missed this.

Uploaded the patch to handle all the scenarios (marking the block corrupt so 
that the blocks will be deleted).

> RBW Blocks which are having less GS shouldn't added on DN restart
> -
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a 
> false positive (anyway the read can fail since the genstamp is lower).
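A sketch of the check being proposed; field and method names below are illustrative, not the actual BlockManager code:
{code:java}
/**
 * Sketch: an RWR replica reported after a DN restart with a generation
 * stamp lower than the one recorded for the block is stale (it misses the
 * later appends) and should be treated as corrupt instead of being added
 * as a valid location.
 */
final class StaleReplicaCheck {
  static boolean isStaleRwrReplica(boolean isRwr, long reportedGenStamp,
      long storedGenStamp) {
    return isRwr && reportedGenStamp < storedGenStamp;
  }
}
{code}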



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13840) RBW Blocks which are having less GS shouldn't added on DN restart

2018-09-03 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13840:

Attachment: HDFS-13840-004.patch

> RBW Blocks which are having less GS shouldn't added on DN restart
> -
>
> Key: HDFS-13840
> URL: https://issues.apache.org/jira/browse/HDFS-13840
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the 
> locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a 
> false positive (anyway the read can fail since the genstamp is lower).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13806) EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602140#comment-16602140
 ] 

Hadoop QA commented on HDFS-13806:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestFSImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13806 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938118/HDFS-13806-06.patch |
| Optional Tests |  dupname 

[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602136#comment-16602136
 ] 

Hadoop QA commented on HDDS-369:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938128/HDDS-369.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 67dddbd29ffd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 211034a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/948/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/948/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: 

[jira] [Updated] (HDFS-13889) The hadoop3.x client have compatible problem with hadoop2.x cluster

2018-09-03 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13889:
-
Component/s: hdfs

> The hadoop3.x client have compatible problem with hadoop2.x cluster
> ---
>
> Key: HDFS-13889
> URL: https://issues.apache.org/jira/browse/HDFS-13889
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: luhuachao
>Priority: Critical
>
> When using a hadoop 3.1.0 client to submit a mapreduce job to a hadoop 2.8.2 
> cluster, the appmaster will fail with 'java.lang.NumberFormatException: For 
> input string: "30s"' on the config dfs.client.datanode-restart.timeout. In 
> hadoop 3.x hdfs-default.xml, "dfs.client.datanode-restart.timeout" is set to 
> the value "30s", while in hadoop 2.x, DfsClientConf.java uses the getLong 
> method to read this value. Is it necessary to fix this problem?
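The difference can be reproduced with a few lines against org.apache.hadoop.conf.Configuration (run against a Hadoop 3.x client; the second read mirrors how the 2.x DfsClientConf path reads the property):
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class RestartTimeoutDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("dfs.client.datanode-restart.timeout", "30s");

    // Hadoop 3.x style read: time suffixes are understood, returns 30
    long seconds = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);
    System.out.println("getTimeDuration -> " + seconds);

    // Hadoop 2.x DfsClientConf style read: plain getLong cannot parse "30s"
    // and throws java.lang.NumberFormatException: For input string: "30s"
    long raw = conf.getLong("dfs.client.datanode-restart.timeout", 30);
    System.out.println("getLong -> " + raw);
  }
}
{code}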



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-03 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602095#comment-16602095
 ] 

Ewan Higgs commented on HDFS-13713:
---

002
* Explanation of how the API should work.
* Use some backticks to format class names.

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file:
> * add an FS model with the notion of a function mapping (uploadID -> Upload) 
> and the operations (list, commit, abort). The [TLA+ 
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests for all FSs which support the new API.
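A hand-written sketch of the model the spec needs to pin down; this is not the committed Hadoop multipart uploader API, just the uploadID -> Upload mapping and its operations expressed as an interface:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

/** Sketch of the multipart upload contract: names are illustrative. */
interface MultipartUploadModel {

  /** Start an upload for a destination path; returns the upload id. */
  String initialize(String path) throws IOException;

  /** Upload one part; returns an opaque handle to pass to complete(). */
  byte[] putPart(String uploadId, int partNumber, InputStream data,
      long length) throws IOException;

  /** Commit: combine the parts, keyed by part number, into the file. */
  void complete(String uploadId, Map<Integer, byte[]> partHandles)
      throws IOException;

  /** Abort: discard all parts; the destination must not be created. */
  void abort(String uploadId) throws IOException;
}
{code}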



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-03 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13713:
--
Attachment: HDFS-13713.002.patch

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> mode|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]l
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13890) Allow Delimited PB OIV tool to print out INodeReferences

2018-09-03 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602077#comment-16602077
 ] 

Adam Antal commented on HDFS-13890:
---

My first idea is to extend the Delimited output with an additional column, "Is 
snapshot?", holding the values Y or N; rows marked "Y" would represent the 
snapshotted entries. The regular pieces of information could be acquired in the 
same way as for a regular inode, while the "Is snapshot?" flag and the path 
columns can be derived in the PBImageTextWriter by:
 # not throwing an exception when getParentPath() is called on a snapshotted 
inode
 # keeping track of the ids by passing the list of INodeReferences (refIdList) 
from the visit function of PBImageTextWriter (a minimal sketch follows below)

[~xiaochen], I saw that you worked on the original issue. Could you please 
advise on this?
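
A minimal, hypothetical sketch of the second point, assuming the visit step can 
hand over the collected INodeReference ids; it is not taken from the actual 
PBImageTextWriter code, and the class, field and method names are made up for 
illustration.

{code:java}
// Illustrative sketch only (not the real PBImageTextWriter code): remember the
// inode ids referenced from snapshots and use them to fill the proposed
// "Is snapshot?" column of each Delimited row.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class SnapshotColumnHelper {
  private final Set<Long> referredIds = new HashSet<>();

  /** Hypothetical hook: record the ids collected from the refIdList. */
  void recordReferences(List<Long> refIdList) {
    referredIds.addAll(refIdList);
  }

  /** Value of the proposed "Is snapshot?" column for a given inode id. */
  String isSnapshotColumn(long inodeId) {
    return referredIds.contains(inodeId) ? "Y" : "N";
  }
}
{code}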

> Allow Delimited PB OIV tool to print out INodeReferences
> 
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this is shown within the Delimited output (in the 
> Delimited format).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-369:
--
Attachment: HDDS-369.006.patch

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers on that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start the replication.
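
As an illustration only (not the actual SCM code), the dead-node handling 
described above might take roughly the following shape; the class, method and 
map names are assumptions for the sketch.

{code:java}
// Illustrative only: a hypothetical dead-node handler that removes the dead
// datanode's replicas from an in-memory container -> replica-set map, which
// stands in for the real containerStateMap.
import java.util.Map;
import java.util.Set;
import java.util.UUID;

class DeadNodeReplicaCleaner {
  private final Map<Long, Set<UUID>> containerReplicas;

  DeadNodeReplicaCleaner(Map<Long, Set<UUID>> containerReplicas) {
    this.containerReplicas = containerReplicas;
  }

  /** Drop the dead node from the replica set of each of its containers. */
  void onDeadNode(UUID deadNodeId, Set<Long> containersOfNode) {
    for (Long containerId : containersOfNode) {
      Set<UUID> replicas = containerReplicas.get(containerId);
      if (replicas != null) {
        replicas.remove(deadNodeId);
        // With the replica gone, a replication monitor can see the container
        // as under-replicated and schedule re-replication.
      }
    }
  }
}
{code}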



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-09-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602071#comment-16602071
 ] 

Elek, Marton commented on HDDS-75:
--

Thanks [~nandakumar131] for the feedback. To be honest, I am also not sure how 
it could be done without depending on the KeyValueContainer. It would be easier 
to see after the second container implementation.

In fact only three lines of 
ReplicateContainerCommandHandler.importContainer depend on the 
KeyValueContainer (L144-150). All of the others are generic (read the 
descriptor, add the container to the containerSet, ...).

To make it more generic I would add a new createContainer method to the Handler 
and add a line to the container descriptor to define the current container 
type. With this we could use Handler.getHandlerForContainerType (in reality we 
need all the handlers in a cached map) to get the actual handler and let it 
create the containerdata/container. A rough sketch of this idea follows below.

But this modification is a bigger improvement and I would separate it from the 
current jira.
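
A rough, hypothetical sketch of the refactoring idea above, not a patch: 
resolve the handler by the container type recorded in the descriptor and let 
that handler create the container, so the import path no longer hard-codes 
KeyValueContainer. The names below are illustrative and do not match the real 
HDDS classes one-to-one.

{code:java}
// Illustrative sketch only: dispatch container creation by container type so
// that the import path does not hard-code KeyValueContainer. All names are
// made up for this sketch.
import java.util.EnumMap;
import java.util.Map;

enum ContainerType { KEY_VALUE }

interface ContainerHandler {
  /** Create an empty container of this handler's type from a descriptor. */
  Object createContainer(long containerId, String descriptor);
}

class HandlerRegistry {
  // Cached map with one handler per container type.
  private final Map<ContainerType, ContainerHandler> handlers =
      new EnumMap<>(ContainerType.class);

  void register(ContainerType type, ContainerHandler handler) {
    handlers.put(type, handler);
  }

  /** Resolve the handler for the type recorded in the container descriptor. */
  ContainerHandler getHandlerForContainerType(ContainerType type) {
    ContainerHandler handler = handlers.get(type);
    if (handler == null) {
      throw new IllegalStateException("No handler for container type " + type);
    }
    return handler;
  }
}
{code}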

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch, 
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch, 
> HDDS-75.013.patch, HDDS-75.014.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> can read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term, the HTTP endpoint should support HTTP Range requests: one 
> container could be copied from multiple sources by the destination.
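
A small sketch of the source-side contract described in the issue, following 
the two methods named there; the interface name and the tarball packaging 
detail are assumptions for illustration, not the actual HDDS implementation.

{code:java}
// Illustrative sketch of the two-method source-side interface described in
// the issue; the interface name and the on-demand tarball idea are assumptions.
import java.io.IOException;
import java.io.OutputStream;

interface ContainerCopySource {

  /**
   * Called right after the container is closed. An implementation may
   * pre-create a compressed tar file of the container data here, or do
   * nothing and build the archive lazily in copyData().
   */
  void prepare(String containerName) throws IOException;

  /**
   * Stream the raw (e.g. tar.gz) representation of the container to the
   * destination, typically the body of an HTTP response.
   */
  void copyData(String containerName, OutputStream destination)
      throws IOException;
}
{code}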



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13890) Allow Delimited PB OIV tool to print out INodeReferences

2018-09-03 Thread Adam Antal (JIRA)
Adam Antal created HDFS-13890:
-

 Summary: Allow Delimited PB OIV tool to print out INodeReferences
 Key: HDFS-13890
 URL: https://issues.apache.org/jira/browse/HDFS-13890
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Adam Antal
Assignee: Adam Antal


HDFS-9721 added the possibility to process PB-based FSImages containing 
snapshots by simply ignoring them. 

Although the XML tool can provide information about the snapshots, the user may 
find it helpful if this is shown within the Delimited output (in the Delimited 
format).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13889) The hadoop3.x client has a compatibility problem with hadoop2.x clusters

2018-09-03 Thread luhuachao (JIRA)
luhuachao created HDFS-13889:


 Summary: The hadoop3.x client has a compatibility problem with 
hadoop2.x clusters
 Key: HDFS-13889
 URL: https://issues.apache.org/jira/browse/HDFS-13889
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: luhuachao


When a hadoop3.1.0 client submits a mapreduce job to a hadoop2.8.2 cluster, the 
appmaster will fail with 'java.lang.NumberFormatException: For input string: 
"30s"' on the config dfs.client.datanode-restart.timeout. In hadoop3.x, 
hdfs-default.xml sets "dfs.client.datanode-restart.timeout" to the value "30s", 
while in hadoop2.x DfsClientConf.java uses getLong to read this value, which 
cannot parse the time-unit suffix. Is it necessary to fix this problem?
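
For context, the parsing difference behind the reported failure can be 
demonstrated with a short snippet: Configuration.getLong cannot parse the "30s" 
time-unit suffix shipped in the 3.x hdfs-default.xml, while 
Configuration.getTimeDuration accepts it. This only illustrates the symptom and 
is not a proposed fix.

{code:java}
// Demonstrates the parsing difference behind the reported NumberFormatException.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class RestartTimeoutParsing {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("dfs.client.datanode-restart.timeout", "30s");

    // getTimeDuration understands the "s" suffix and returns 30.
    long seconds = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30L, TimeUnit.SECONDS);
    System.out.println("getTimeDuration: " + seconds);

    // getLong ends up in Long.parseLong("30s"), which cannot parse the unit
    // suffix and throws NumberFormatException: For input string: "30s".
    try {
      conf.getLong("dfs.client.datanode-restart.timeout", 30L);
    } catch (NumberFormatException e) {
      System.out.println("getLong failed: " + e.getMessage());
    }
  }
}
{code}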



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-09-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-336:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1


Just committed to the trunk. Thank you very much [~GeLiXin] for the contribution.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid (=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone CLI.
> It requires improving the REST and RPC interfaces with additional 
> OzoneKeyLocation information.
> It would be a very big help during testing of the current SCM behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13816) dfs.getQuotaUsage() throws NPE on non-existent dir instead of FileNotFoundException

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602054#comment-16602054
 ] 

Hadoop QA commented on HDFS-13816:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937649/HDFS-13816-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 830782d66dab 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 780df90 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24945/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24945/testReport/ |
| Max. process+thread count | 3089 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24945/console |
| Powered by 

[jira] [Commented] (HDFS-13885) Improve debugging experience of dfsclient decrypts

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602038#comment-16602038
 ] 

Hadoop QA commented on HDFS-13885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938116/HDFS-13885.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb3712aac58b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3801436 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24946/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24946/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Improve debugging experience of 

[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602010#comment-16602010
 ] 

Hadoop QA commented on HDDS-369:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdds/server-scm: The patch generated 7 
new + 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937910/HDDS-369.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 07408156827f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3801436 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/945/artifact/out/diff-checkstyle-hadoop-hdds_server-scm.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/945/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/945/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (HDFS-13818) Extend OIV to detect FSImage corruption

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601980#comment-16601980
 ] 

Hadoop QA commented on HDFS-13818:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 47 unchanged - 6 fixed = 48 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938098/HDFS-13818.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8a93c2e06236 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 873ef8a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24941/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24941/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Resolved] (HDFS-13832) EC: No administrative command provided to delete a user-defined erasure coding policy

2018-09-03 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-13832.
-
Resolution: Duplicate

> EC: No administrative command provided to delete a user-defined erasure 
> coding policy
> --
>
> Key: HDFS-13832
> URL: https://issues.apache.org/jira/browse/HDFS-13832
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 node SUSE linux cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: Delete_ec_policy.PNG
>
>
> No administrative command is provided to delete a user-defined erasure coding 
> policy.
> Steps:
> ---
>  * Create a directory.
>  - Add 64 user-defined EC policies in the ID range [64 to 127]. Beyond that 
> the system will not allow adding any more policies.
>  - Enable an EC policy and then set it on the directory.
>  - Disable the policy and check the state of the policy in -listPolicies.
>  - If the EC policy is in the disabled state, the system will not allow you 
> to set it on any directory.
>  - Remove the EC policy and check the state of the policy in -listPolicies. 
> It just sets the state to removed, but the policy is still present in the 
> list.
>  - If the EC policy is in the removed state, the system will not allow you to 
> set it on any directory.
>  - There is no difference between the disabled and removed states.
>  - After adding 64 user-defined EC policies, if a user wants to delete a 
> policy which is no longer usable or was not correctly added, in order to add 
> a new desired user-defined EC policy, this is not possible because no delete 
> option is provided. Only the remove policy option is given, which does not 
> remove a user-defined policy but only sets the policy state to removed.
> Actual output:
>  
>  No administrative command is provided to delete a user-defined erasure 
> coding policy. With "-removePolicy" we can set a policy's state to removed; 
> we cannot delete the user-defined EC policy. After adding 64 user-defined EC 
> policies, if a user wants to delete a policy and add a new desired policy, 
> there is no administrative provision to perform this operation.
>  
>  Expected output:
>  
>  Either "-removePolicy" should remove the user-defined EC policy instead of 
> only changing the policy state to removed, or an administrative command 
> should be provided to delete a user-defined EC policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


