[jira] [Created] (HDFS-14166) HDFS : ls -e -R command is not giving the result in proper format
Soumyapn created HDFS-14166:
---

Summary: HDFS : ls -e -R command is not giving the result in proper format
Key: HDFS-14166
URL: https://issues.apache.org/jira/browse/HDFS-14166
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Soumyapn
Attachments: image-2018-12-21-15-51-10-505.png

Test Scenario:
1. Write a few files to one directory.
2. Write a few more files to a directory for which an erasure coding policy is set.
3. Execute the hdfs dfs -ls -e -R command on the parent folder containing both of the above folders.

Expected Result: The ls command should give properly formatted output.

Actual Result: The output is not formatted.

!image-2018-12-21-15-51-10-505.png|thumbnail!

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDFS-13966) HDFS DFSADMIN : Direct exception is coming for the refreshServiceAcl command if we disable the property hadoop.security.authorization
Soumyapn created HDFS-13966:
---

Summary: HDFS DFSADMIN : Direct exception is coming for the refreshServiceAcl command if we disable the property hadoop.security.authorization
Key: HDFS-13966
URL: https://issues.apache.org/jira/browse/HDFS-13966
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Soumyapn

Scenario: Execute the hdfs dfsadmin -refreshServiceAcl command when the hadoop.security.authorization property is disabled on both NameNodes in an HA cluster.

Expected Result: The raw exception should not reach the console; the full exception should only be logged in the log file. The console message should be something like "Service level authorization not enabled".

Actual Result: The raw exception is printed directly:

refreshServiceAcl: 2 exceptions [org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Service Level Authorization not enabled!, org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Service Level Authorization not enabled!]
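The report asks that a known, expected condition be reported as a plain console message rather than a raw stack of remote exceptions. A minimal sketch of that idea, with a hypothetical AuthorizationFailure class standing in for Hadoop's RemoteException (the class and message-matching logic here are illustrative assumptions, not the actual dfsadmin code):

```java
// Hypothetical sketch: turn a known authorization failure into a friendly
// one-line console message, leaving the full exception for the log file.
public class RefreshAclMessage {

    // Stand-in for the remote exception the command receives (hypothetical).
    static class AuthorizationFailure extends Exception {
        AuthorizationFailure(String msg) { super(msg); }
    }

    // Returns the line that should be printed on the console.
    static String consoleMessage(Exception e) {
        String msg = e.getMessage();
        // Known, expected condition: say it in plain words.
        if (msg != null && msg.contains("Service Level Authorization not enabled")) {
            return "Service level authorization is not enabled.";
        }
        // Unknown failure: fall back to the raw message.
        return "refreshServiceAcl failed: " + msg;
    }

    public static void main(String[] args) {
        System.out.println(consoleMessage(
            new AuthorizationFailure("Service Level Authorization not enabled!")));
    }
}
```

The key design point is that the console path and the log path are separated: the command can still log the full RemoteException for diagnosis while the user sees a single readable line.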
[jira] [Created] (HDFS-13917) RBF: Successfully updated mount point message is coming if we update the mount entry by passing the nameservice id for which mount entry is not present
Soumyapn created HDFS-13917:
---

Summary: RBF: Successfully updated mount point message is coming if we update the mount entry by passing the nameservice id for which mount entry is not present
Key: HDFS-13917
URL: https://issues.apache.org/jira/browse/HDFS-13917
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Soumyapn

*Test Steps:*
1. Add a mount entry: hdfs dfsrouteradmin -add /apps hacluster /opt
2. Update the mount entry passing a different nameservice, hacluster2: hdfs dfsrouteradmin -update /apps hacluster2 /opt -readonly

*Actual Result:*
The console message says "Successfully updated mount entry for /apps".

*Expected Result:*
This console message is confusing: the user gets the impression that the mount entry was updated as read-only, but the nameservice passed has no mount entry. The console message could be something like *"There are no entries for the hacluster2 nameservice"* so that the user gets proper feedback on the update command executed.
[jira] [Created] (HDFS-13906) RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands
Soumyapn created HDFS-13906:
---

Summary: RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands
Key: HDFS-13906
URL: https://issues.apache.org/jira/browse/HDFS-13906
Project: Hadoop HDFS
Issue Type: Improvement
Components: federation
Reporter: Soumyapn

Currently there is an option to delete only one mount entry at a time. If there are multiple mount entries, the user has to execute the command N times. If the "rm" and "clrQuota" commands supported multiple entries, the user could provide all the required entries in one single command. The NameNode is already supporting "rm" and "clrQuota" with multiple destinations.
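The requested behaviour can be sketched as a loop that removes each given path and reports per-path status, so one missing entry does not abort the whole command. This is a hypothetical illustration against a plain in-memory mount table, not the router's real state store API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of "rm" accepting several mount paths at once,
// e.g. hdfs dfsrouteradmin -rm /apps1 /apps2 /apps3
public class MultiPathRemove {

    // Illustrative in-memory stand-in for the router mount table.
    static class MountTable {
        private final Set<String> entries = new HashSet<>();
        void add(String path) { entries.add(path); }
        boolean remove(String path) { return entries.remove(path); }
    }

    // Remove every given path; collect a per-path message so a missing
    // entry is reported without stopping the remaining removals.
    static List<String> removeAll(MountTable table, String... paths) {
        List<String> messages = new ArrayList<>();
        for (String path : paths) {
            if (table.remove(path)) {
                messages.add("Successfully removed mount point " + path);
            } else {
                messages.add("Cannot remove mount point " + path + ": no such entry");
            }
        }
        return messages;
    }

    public static void main(String[] args) {
        MountTable table = new MountTable();
        table.add("/apps1");
        table.add("/apps2");
        removeAll(table, "/apps1", "/apps2").forEach(System.out::println);
    }
}
```

The same pattern would apply to "clrQuota": iterate over the path arguments and apply the single-path operation to each, accumulating the results for the console.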
[jira] [Created] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands
Soumyapn created HDFS-13862:
---

Summary: RBF: Router logs are not capturing few of the dfsrouteradmin commands
Key: HDFS-13862
URL: https://issues.apache.org/jira/browse/HDFS-13862
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Soumyapn

Test Steps: The items below are not getting captured in the Router logs.
1. The destination entry name in the add command. The log only says "Added new mount point /apps9 to resolver".
2. The safemode enter|leave|get commands.
3. The nameservice enable command.
[jira] [Created] (HDFS-13858) RBF: dfsrouteradmin safemode command is accepting any valid/invalid second argument. Add check to have single valid argument to safemode command
Soumyapn created HDFS-13858:
---

Summary: RBF: dfsrouteradmin safemode command is accepting any valid/invalid second argument. Add check to have single valid argument to safemode command
Key: HDFS-13858
URL: https://issues.apache.org/jira/browse/HDFS-13858
Project: Hadoop HDFS
Issue Type: Bug
Components: federation
Reporter: Soumyapn

*Scenario:*
The current behaviour of the dfsrouteradmin safemode command is that only the first argument needs to be valid. Whatever value is given as the second argument, the command is still successful.

*Examples:*
hdfs dfsrouteradmin -safemode enter leave
hdfs dfsrouteradmin -safemode leave enter
hdfs dfsrouteradmin -safemode get jashfuesfhsk
hdfs dfsrouteradmin -safemode leave leave

With the above examples, the command succeeds based on the first argument alone.

*Expected:*
Add a check so that the safemode command accepts a single valid argument.
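The requested check can be sketched as a small validation step run before the subcommand executes: exactly one argument, drawn from the enter/leave/get set. This is a hypothetical illustration, not the actual RouterAdmin parsing code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of strict argument validation for the safemode
// subcommand: exactly one argument, and it must be enter, leave, or get.
public class SafemodeArgCheck {

    private static final List<String> VALID = Arrays.asList("enter", "leave", "get");

    // Returns null when the arguments are valid, otherwise an error message
    // suitable for printing on the console.
    static String validate(String[] args) {
        if (args.length != 1) {
            return "safemode expects exactly one argument: enter | leave | get";
        }
        if (!VALID.contains(args[0])) {
            return "Invalid safemode argument: " + args[0];
        }
        return null; // valid
    }

    public static void main(String[] args) {
        // "leave enter" must now be rejected instead of silently succeeding.
        System.out.println(validate(new String[] {"leave", "enter"}));
        System.out.println(validate(new String[] {"leave"}));
    }
}
```

Rejecting extra arguments up front also covers the "leave leave" and "get jashfuesfhsk" cases from the examples above, since the length check fails before the value check is reached.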
[jira] [Created] (HDFS-13843) RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI
Soumyapn created HDFS-13843:
---

Summary: RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI
Key: HDFS-13843
URL: https://issues.apache.org/jira/browse/HDFS-13843
Project: Hadoop HDFS
Issue Type: Bug
Components: federation
Reporter: Soumyapn

*Scenario:*
Execute the below add/update commands for a single mount entry on a single nameservice pointing to multiple destinations:
1. hdfs dfsrouteradmin -add /apps1 hacluster /tmp1
2. hdfs dfsrouteradmin -add /apps1 hacluster /tmp1,/tmp2,/tmp3
3. hdfs dfsrouteradmin -update /apps1 hacluster /tmp1,/tmp2,/tmp3 -order RANDOM

*Actual Result:*
With the above commands, the mount entry is successfully updated, but the order information (HASH, RANDOM, ...) is not displayed in the mount entries and also not displayed in the federation router UI. The order information is updated properly when there are multiple nameservices; this issue occurs with a single nameservice having multiple destinations.

*Expected Result:*
*The order information should be updated in the mount entries so that the user knows which order has been set.*
[jira] [Created] (HDFS-13842) RBF : Exceptions are conflicting If we try to create the same mount entry once again
Soumyapn created HDFS-13842:
---

Summary: RBF : Exceptions are conflicting If we try to create the same mount entry once again
Key: HDFS-13842
URL: https://issues.apache.org/jira/browse/HDFS-13842
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Soumyapn

Test Steps:
1. Execute the command: hdfs dfsrouteradmin -add /apps7 hacluster /tmp7
2. Execute the same command once again.

Expected Result: The user should get a message saying the mount entry is already present.

Actual Result: The console displays two conflicting messages:
"Cannot add destination at hacluster /tmp7
Successfully added mount point /apps7"
[jira] [Created] (HDFS-13839) Add order information in dfsrouteradmin "-ls" command
Soumyapn created HDFS-13839:
---

Summary: Add order information in dfsrouteradmin "-ls" command
Key: HDFS-13839
URL: https://issues.apache.org/jira/browse/HDFS-13839
Project: Hadoop HDFS
Issue Type: Bug
Components: federation
Reporter: Soumyapn

Scenario: If we execute the hdfs dfsrouteradmin -ls command, the order information is not present.

Example: ./hdfs dfsrouteradmin -ls /apps1

With the above command, the Source, Destinations, Owner, Group, Mode and Quota/Usage information is displayed, but there is no "order" information in the "ls" output.

Expected: The order information should be displayed with the -ls command so that the user knows which order has been set.
[jira] [Created] (HDFS-13824) Number of Dead nodes is not showing in the Overview and Subclusters pages. However Live nodes are reflecting properly
Soumyapn created HDFS-13824:
---

Summary: Number of Dead nodes is not showing in the Overview and Subclusters pages. However Live nodes are reflecting properly
Key: HDFS-13824
URL: https://issues.apache.org/jira/browse/HDFS-13824
Project: Hadoop HDFS
Issue Type: Bug
Components: federation
Affects Versions: 3.1.0
Reporter: Soumyapn
Attachments: image-2018-08-14-11-47-05-025.png

Scenario: Suppose we have 2 nameservices with 3 Datanodes each. If we bring 2 DNs down, then the Datanodes page, the Live nodes field in the Overview page and the Live field in the Subclusters page are correctly updated to 4. But the Dead nodes field in the Overview and Subclusters pages still shows 0; it is not updated.

!image-2018-08-14-11-47-05-025.png!
[jira] [Created] (HDFS-13815) No check being done on order command. It says successfully updated mount table if we don't specify the order command, and it is not updated in the mount table
Soumyapn created HDFS-13815:
---

Summary: No check being done on order command. It says successfully updated mount table if we don't specify the order command, and it is not updated in the mount table
Key: HDFS-13815
URL: https://issues.apache.org/jira/browse/HDFS-13815
Project: Hadoop HDFS
Issue Type: Bug
Components: federation
Affects Versions: 3.0.0
Reporter: Soumyapn

Execute the dfsrouteradmin update command with the below scenarios:
1. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 RANDOM
2. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -or RANDOM
3. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -ord RANDOM
4. ./hdfs dfsrouteradmin -update /apps3 hacluster,ns2 /tmp6 -orde RANDOM

In each case the console message says "Successfully updated mount point", but the order is not updated in the mount table.

Expected Result: An exception on the console, since the -order flag is missing or not written properly.
[jira] [Created] (HDFS-13732) Erasure Coding policy name is not coming when the new policy is set
Soumyapn created HDFS-13732:
---

Summary: Erasure Coding policy name is not coming when the new policy is set
Key: HDFS-13732
URL: https://issues.apache.org/jira/browse/HDFS-13732
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs
Affects Versions: 3.0.0
Reporter: Soumyapn
Fix For: 3.1.0
Attachments: EC_Policy.PNG

Scenario: If a policy other than the default EC policy is set on an HDFS directory, the console message still reads "Set default erasure coding policy on ".

Expected output: It would be good if the EC policy name were displayed when the policy is set.

Actual output: Set default erasure coding policy on 