[jira] [Created] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13893:


 Summary: DiskBalancer: no validations for Disk balancer commands 
 Key: HDFS-13893
 URL: https://issues.apache.org/jira/browse/HDFS-13893
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

 
 1. Run a Disk Balancer command, passing extra arguments:

{noformat} 
hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
*sgfsdgfs*
2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
hostname:50077
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
hostname:50077 took 23 ms
2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
2018-08-31 14:57:35,457 INFO command.Command: 
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
Writing plan to:
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
{noformat} 

Expected Output:-

Disk Balancer commands should fail if any invalid or extra arguments are 
passed.
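The expected behavior amounts to a strict argument check before the command runs: every token must be a recognized option or the value that follows one. A minimal sketch of that idea (the class name, method name, and option set below are illustrative, not the actual DiskBalancerCLI code):

```java
import java.util.Set;

public class ArgCheck {
    // Hypothetical strict validation: reject the command line as soon as a
    // token is neither a recognized option nor the value of the preceding one.
    static boolean validateNoExtraArgs(Set<String> optionsTakingValue,
                                       String[] argv) {
        for (int i = 0; i < argv.length; i++) {
            if (optionsTakingValue.contains(argv[i])) {
                i++; // skip the option's value
            } else {
                return false; // unrecognized extra argument -> fail
            }
        }
        return true;
    }
}
```

With such a check, "-plan hostname --thresholdPercentage 2 sgfsdgfs" would be rejected instead of silently producing a plan.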



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13892) Disk Balancer : Invalid exit code for disk balancer execute command

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13892:


 Summary: Disk Balancer : Invalid exit code for disk balancer 
execute command
 Key: HDFS-13892
 URL: https://issues.apache.org/jira/browse/HDFS-13892
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Write about 5 GB of data to a DataNode with one DISK volume.
2. Add one more non-empty disk to the above DataNode.
3. Run the plan command for that DataNode.
4. Run the execute command with the above plan file.
The execute command did not succeed, as shown in the DataNode log:
{noformat}
ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination volume: 
file:/Test_Disk/DISK2/ does not have enough space to accommodate a block. Block 
Size: 268435456 Exiting from copyBlocks.
{noformat}
5. Check the exit code of the execute command: it is 0.

{{Expected Result :-}}

1. The exit code should be 1, since the execute command did not succeed.
2. In this scenario, the error message should also be printed to the console 
so that the customer/user knows the execute command failed.
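The requested fix amounts to propagating the copyBlocks failure into the process exit status and onto the console instead of swallowing it. A rough sketch of the intent (the method and constant names are illustrative, not the actual Hadoop code):

```java
public class ExecuteExitCode {
    static final int SUCCESS_EXIT_CODE = 0;
    static final int FAILURE_EXIT_CODE = 1;

    // Illustrative: the execute step surfaces its error and returns a
    // nonzero code rather than logging it only in the DataNode and
    // returning 0 to the shell.
    static int runExecute(boolean copyBlocksSucceeded, String errorMessage) {
        if (!copyBlocksSucceeded) {
            // Print to the console so the user sees why execute failed.
            System.err.println("Execute failed: " + errorMessage);
            return FAILURE_EXIT_CODE;
        }
        return SUCCESS_EXIT_CODE;
    }
}
```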







[jira] [Created] (HDDS-399) Handle pipeline discovery on SCM restart.

2018-09-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-399:
--

 Summary: Handle pipeline discovery on SCM restart.
 Key: HDDS-399
 URL: https://issues.apache.org/jira/browse/HDDS-399
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


On SCM restart, as part of node registration, SCM should find out the list of 
open pipelines on each node. Once all the nodes of a pipeline have reported 
back, the pipeline should be added back as an active pipeline for further 
allocations.
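The rediscovery logic described above can be pictured as a per-pipeline record of which members have reported; once every node of a pipeline has registered, the pipeline becomes active again. (A toy model with hypothetical names, not the actual SCM code.)

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PipelineTracker {
    // pipelineId -> full membership, and pipelineId -> nodes seen so far
    private final Map<String, Set<String>> members = new HashMap<>();
    private final Map<String, Set<String>> reported = new HashMap<>();
    private final Set<String> active = new HashSet<>();

    void registerPipeline(String pipelineId, Set<String> nodeIds) {
        members.put(pipelineId, new HashSet<>(nodeIds));
        reported.put(pipelineId, new HashSet<>());
    }

    // Called during node registration: record the node for each open
    // pipeline it reports; activate the pipeline once all members are in.
    void nodeReported(String pipelineId, String nodeId) {
        Set<String> seen = reported.get(pipelineId);
        if (seen == null) {
            return; // unknown pipeline (illustrative: ignore it)
        }
        seen.add(nodeId);
        if (seen.containsAll(members.get(pipelineId))) {
            active.add(pipelineId);
        }
    }

    boolean isActive(String pipelineId) {
        return active.contains(pipelineId);
    }
}
```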






[jira] [Created] (HDFS-13891) Über-jira: RBF stabilisation phase I

2018-09-03 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-13891:
---

 Summary: Über-jira: RBF stabilisation phase I  
 Key: HDFS-13891
 URL: https://issues.apache.org/jira/browse/HDFS-13891
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Brahma Reddy Battula


RBF shipped in 3.0 and 2.9. Now that it is out in the wild, various 
corner-case, scale, and error-handling issues are surfacing. This umbrella is 
to fix all those issues before the next (3.3) release.






[jira] [Created] (HDDS-398) Support multiple tests in freon

2018-09-03 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-398:
-

 Summary: Support multiple tests in freon
 Key: HDDS-398
 URL: https://issues.apache.org/jira/browse/HDDS-398
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Elek, Marton
Assignee: Elek, Marton


Currently freon supports only one kind of test (it creates volumes/buckets and 
generates random keys).

To ensure the correctness of Ozone we need multiple, different kinds of tests 
(for example: testing only the Ozone Manager, or just a DataNode).

In this patch I propose to use the picocli-based simplified command line 
introduced by HDDS-379 to make it easier to add more freon tests.

This patch is just the CLI cleanup; more freon tests can be added in follow-up 
JIRAs, where the progress calculation and metrics handling can also be 
unified.
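The shape of the change can be illustrated with a simple subcommand registry that dispatches to different test kinds (the actual patch uses picocli per HDDS-379; the registry class and test names below are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class FreonCli {
    // Registry of freon test kinds, keyed by subcommand name, so new tests
    // can be added without touching the dispatch logic.
    private final Map<String, Supplier<String>> tests = new HashMap<>();

    void register(String name, Supplier<String> test) {
        tests.put(name, test);
    }

    // Run the requested test; unknown names produce an error message.
    String run(String subcommand) {
        Supplier<String> test = tests.get(subcommand);
        return test == null ? "unknown test: " + subcommand : test.get();
    }
}
```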






[jira] [Created] (HDDS-397) Handle deletion for keys with no blocks

2018-09-03 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-397:


 Summary: Handle deletion for keys with no blocks
 Key: HDDS-397
 URL: https://issues.apache.org/jira/browse/HDDS-397
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


Keys which do not contain blocks can be deleted directly from OzoneManager.
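The optimization described above can be sketched as a check on the key's block list before taking the normal block-deletion path (the method and return values here are illustrative, not the actual OzoneManager API):

```java
import java.util.List;

public class KeyDeleter {
    // Illustrative: a key with no blocks needs no block-deletion round trip
    // to SCM/datanodes; it can be dropped from OM metadata directly.
    static String deleteKey(String keyName, List<String> blockIds) {
        if (blockIds.isEmpty()) {
            return "deleted-from-om: " + keyName;
        }
        return "scheduled-block-deletion: " + keyName;
    }
}
```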






[jira] [Created] (HDFS-13890) Allow Delimited PB OIV tool to print out INodeReferences

2018-09-03 Thread Adam Antal (JIRA)
Adam Antal created HDFS-13890:
-

 Summary: Allow Delimited PB OIV tool to print out INodeReferences
 Key: HDFS-13890
 URL: https://issues.apache.org/jira/browse/HDFS-13890
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Adam Antal
Assignee: Adam Antal


HDFS-9721 added the ability to process PB-based FSImages containing snapshots 
by simply ignoring them.

Although the XML tool can provide information about the snapshots, the user 
may find it helpful if this is also shown in the Delimited output (in the 
Delimited format).






[jira] [Created] (HDFS-13889) The hadoop3.x client have compatible problem with hadoop2.x cluster

2018-09-03 Thread luhuachao (JIRA)
luhuachao created HDFS-13889:


 Summary: The hadoop3.x client have compatible problem with 
hadoop2.x cluster
 Key: HDFS-13889
 URL: https://issues.apache.org/jira/browse/HDFS-13889
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: luhuachao


When a hadoop 3.1.0 client submits a MapReduce job to a hadoop 2.8.2 cluster, 
the ApplicationMaster fails with 'java.lang.NumberFormatException: For input 
string: "30s"' on the config dfs.client.datanode-restart.timeout. In hadoop 
3.x hdfs-default.xml, "dfs.client.datanode-restart.timeout" is set to the 
value "30s", while in hadoop 2.x DfsClientConf.java uses the method getLong to 
read this value. Is it necessary to fix this problem?
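The incompatibility is easy to reproduce in isolation: a plain long parse (what a getLong-style lookup effectively does on the raw value) rejects the hadoop 3.x default of "30s", while a duration-aware parse strips the unit first. The helper below is a minimal reproduction sketch, not the actual DfsClientConf code:

```java
public class DurationParse {
    // Returns the parsed value, or -1 where hadoop 2.x's getLong-style
    // parsing would throw NumberFormatException.
    static long parseAsPlainLong(String value) {
        try {
            return Long.parseLong(value);
        } catch (NumberFormatException e) {
            return -1L;
        }
    }

    // A duration-aware parse (sketch): strip a trailing "s" (seconds)
    // before parsing, as a unit-aware config getter would.
    static long parseSeconds(String value) {
        String digits = value.endsWith("s")
            ? value.substring(0, value.length() - 1)
            : value;
        return Long.parseLong(digits);
    }
}
```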






[jira] [Resolved] (HDFS-13832) EC: No administrative command provided to delete an user-defined erasure coding policy

2018-09-03 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-13832.
-
Resolution: Duplicate

> EC: No administrative command provided to delete an user-defined erasure 
> coding policy
> --
>
> Key: HDFS-13832
> URL: https://issues.apache.org/jira/browse/HDFS-13832
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 node SUSE linux cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: Delete_ec_policy.PNG
>
>
> No administrative command is provided to delete a user-defined erasure 
> coding policy.
> Steps:-
> ---
>  - Create a directory.
>  - Add 64 user-defined EC policies in the ID range [64, 127]. Beyond that, 
> the system will not allow any more policies to be added.
>  - Enable an EC policy and then set it on the directory.
>  - Disable the policy and check its state in -listPolicies.
>  - If the EC policy is in the disabled state, the system will not allow you 
> to set it on any directory.
>  - Remove the EC policy and check its state in -listPolicies. This just sets 
> the state to removed, but the policy is still present in the list.
>  - If the EC policy is in the removed state, the system will not allow you 
> to set it on any directory.
>  - There is no difference between the disabled and removed states.
>  - After adding 64 user-defined EC policies, if a user wants to delete a 
> policy that is no longer usable or was added incorrectly, and add a new 
> desired user-defined EC policy in its place, this is not possible, as no 
> delete option is provided. Only the remove-policy option is given, which 
> does not remove a user-defined policy but only sets its state to removed.
> Actual output:-
>  
> No administrative command is provided to delete a user-defined erasure 
> coding policy. With "-removePolicy" we can set a policy's state to removed, 
> but we cannot delete the user-defined EC policy. After adding 64 
> user-defined EC policies, if a user wants to delete a policy and add a new 
> desired policy, there is no administrative provision to perform this 
> operation.
>  
> Expected output:-
>  
> Either "-removePolicy" should actually remove the user-defined EC policy, 
> instead of only changing the policy's state to removed, or an administrative 
> command should be provided to delete a user-defined EC policy.
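The distinction the reporter is asking for can be pictured as one extra transition in the policy lifecycle: a true delete frees the policy's slot in the list, while "removed" is merely a state the policy stays listed in. (A toy model, not the actual ErasureCodingPolicyManager.)

```java
import java.util.HashMap;
import java.util.Map;

public class EcPolicyRegistry {
    enum State { ENABLED, DISABLED, REMOVED }

    private final Map<String, State> policies = new HashMap<>();

    void add(String name) { policies.put(name, State.DISABLED); }

    // Current behavior: -removePolicy only flips the state; the policy
    // still occupies one of the 64 user-defined slots.
    void markRemoved(String name) { policies.put(name, State.REMOVED); }

    // Requested behavior: actually delete the policy, freeing its slot.
    void delete(String name) { policies.remove(name); }

    boolean isListed(String name) { return policies.containsKey(name); }
}
```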


