[jira] [Assigned] (HDFS-13154) Webhdfs : update the Document for allow/disallow snapshots
[ https://issues.apache.org/jira/browse/HDFS-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani reassigned HDFS-13154: --- Assignee: usharani > Webhdfs : update the Document for allow/disallow snapshots > -- > > Key: HDFS-13154 > URL: https://issues.apache.org/jira/browse/HDFS-13154 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, webhdfs > Affects Versions: 2.8.2 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > > There is no documentation for the allow/disallow snapshot operations. > http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-hdfs/WebHDFS.html -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
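The missing documentation concerns the snapshot-administration REST calls. As a hedged sketch (the op names ALLOWSNAPSHOT/DISALLOWSNAPSHOT follow the later Hadoop 3 WebHDFS REST API; host, port, and path below are placeholders, not values from this issue), the requests the document should cover look like:

```python
# Sketch of the WebHDFS snapshot-administration requests that the
# documentation should describe. Both are HTTP PUT requests with no body,
# e.g. via: curl -i -X PUT "<url>"
# Host/port/path are illustrative placeholders.

def webhdfs_url(host, port, path, op):
    """Build a WebHDFS v1 request URL for the given operation."""
    return "http://%s:%d/webhdfs/v1%s?op=%s" % (host, port, path, op)

allow = webhdfs_url("namenode.example.com", 9870, "/dir", "ALLOWSNAPSHOT")
disallow = webhdfs_url("namenode.example.com", 9870, "/dir", "DISALLOWSNAPSHOT")
```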
[jira] [Updated] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12716: Status: Patch Available (was: Open) > 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes > to be available > - > > Key: HDFS-12716 > URL: https://issues.apache.org/jira/browse/HDFS-12716 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Reporter: usharani > Assignee: usharani > Attachments: HDFS-12716.patch > > > Currently 'dfs.datanode.failed.volumes.tolerated' specifies the number of failed volumes to tolerate, and changing this configuration requires a datanode restart. Since datanode volumes can be changed dynamically, keeping this configuration the same for all datanodes may not be a good idea. > Support 'dfs.datanode.failed.volumes.tolerated' to accept a special negative value 'x' to tolerate failures of up to "n-x" volumes, i.e. to require that at least 'x' volumes remain available. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16316199#comment-16316199 ] usharani edited comment on HDFS-12716 at 1/8/18 12:23 PM: -- Uploaded patch. Please review. was (Author: peruguusha): Uploaded Patch... > 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes > to be available > - > > Key: HDFS-12716 > URL: https://issues.apache.org/jira/browse/HDFS-12716 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Reporter: usharani > Assignee: usharani > Attachments: HDFS-12716.patch > > > Currently 'dfs.datanode.failed.volumes.tolerated' specifies the number of failed volumes to tolerate, and changing this configuration requires a datanode restart. Since datanode volumes can be changed dynamically, keeping this configuration the same for all datanodes may not be a good idea. > Support 'dfs.datanode.failed.volumes.tolerated' to accept a special negative value 'x' to tolerate failures of up to "n-x" volumes, i.e. to require that at least 'x' volumes remain available. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12716: Attachment: HDFS-12716.patch Uploaded patch. Please review. > 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes > to be available > - > > Key: HDFS-12716 > URL: https://issues.apache.org/jira/browse/HDFS-12716 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Reporter: usharani > Assignee: usharani > Attachments: HDFS-12716.patch > > > Currently 'dfs.datanode.failed.volumes.tolerated' specifies the number of failed volumes to tolerate, and changing this configuration requires a datanode restart. Since datanode volumes can be changed dynamically, keeping this configuration the same for all datanodes may not be a good idea. > Support 'dfs.datanode.failed.volumes.tolerated' to accept a special negative value 'x' to tolerate failures of up to "n-x" volumes, i.e. to require that at least 'x' volumes remain available. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
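The proposed negative-value semantics can be sketched as a standalone function. This is a minimal model of the behaviour described in the issue, not the datanode implementation; the function name and the min/max clamping are illustrative assumptions.

```python
# Hedged sketch of the proposal for dfs.datanode.failed.volumes.tolerated:
# a non-negative value is the number of failed volumes to tolerate (existing
# behaviour), while a negative value -x means "keep at least x volumes
# available", i.e. tolerate up to n - x failures on a datanode with n volumes.

def tolerated_failures(configured, total_volumes):
    if configured >= 0:
        return min(configured, total_volumes)   # existing fixed-count form
    required_available = -configured            # proposed negative form
    return max(total_volumes - required_available, 0)

# A datanode with 8 volumes and a configured value of -2 would tolerate
# up to 6 failed volumes, since 2 must stay available.
```

The advantage over a fixed count is that the same configuration value stays meaningful when volumes are added or removed dynamically.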
[jira] [Comment Edited] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287141#comment-16287141 ] usharani edited comment on HDFS-12833 at 12/12/17 5:31 AM: --- [~surendrasingh] Attached patch for branch-2. was (Author: peruguusha): [~surendrasingh] Thanks for reporting.Uploaded patch in branch2. > Distcp : Update the usage of delete option for dependency with update and > overwrite option > -- > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, > HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287141#comment-16287141 ] usharani commented on HDFS-12833: - [~surendrasingh] Thanks for reporting. Uploaded patch for branch-2. > Distcp : Update the usage of delete option for dependency with update and > overwrite option > -- > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, > HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
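The rule behind the reported exception can be modelled in a few lines. This is a minimal standalone sketch of the check that the stack trace attributes to DistCpOptions$Builder.validate(), not the actual DistCp code; the function name and the set-based option representation are illustrative assumptions.

```python
# Minimal model of the DistCp validation rule under discussion:
# -delete is rejected unless -update or -overwrite is also given.

def validate(options):
    """Raise ValueError for the option combination DistCp rejects.

    `options` is a set of option names such as {"delete", "update"}.
    """
    if "delete" in options and not ({"update", "overwrite"} & set(options)):
        raise ValueError(
            "Delete missing is applicable only with update or overwrite options")
    return True
```

Under this model, validate({"delete", "update"}) succeeds while validate({"delete"}) raises, which matches the behaviour shown in the {noformat} block; the usage message and documentation should state the same dependency.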
[jira] [Updated] (HDFS-12833) Distcp : Update the usage of delete option for dependency with update and overwrite option
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12833: Attachment: HDFS-12833-branch-2.001.patch > Distcp : Update the usage of delete option for dependency with update and > overwrite option > -- > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833-branch-2.001.patch, HDFS-12833.001.patch, > HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12833) In Distcp, Delete option not having the proper usage message.
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12833: Attachment: HDFS-12833.001.patch Thanks [~surendrasingh] for the review. Attached updated patch; please review. > In Distcp, Delete option not having the proper usage message. > - > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833.001.patch, HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12833) In Distcp, Delete option not having the proper usage message.
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16264939#comment-16264939 ] usharani edited comment on HDFS-12833 at 11/24/17 8:40 AM: --- Harshakiran Reddy, thanks for reporting. It makes sense to fix this. Uploaded the patch; kindly review. was (Author: peruguusha): Harshakiran Reddy thanks for reporting... It make sense fix this issueplease review.. > In Distcp, Delete option not having the proper usage message. > - > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12833) In Distcp, Delete option not having the proper usage message.
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12833: Status: Patch Available (was: Open) > In Distcp, Delete option not having the proper usage message. > - > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12833) In Distcp, Delete option not having the proper usage message.
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12833: Attachment: HDFS-12833.patch Harshakiran Reddy, thanks for reporting. It makes sense to fix this issue; please review. > In Distcp, Delete option not having the proper usage message. > - > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs > Affects Versions: 3.0.0-alpha1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12833.patch > > > Basically, the delete option is applicable only with the update or overwrite options. When I tried it as per the usage message, I got the below exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and append new data to them if possible > -async Should distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts bandwidth as a fraction. > -blocksperchunk If set to a positive value, files with more blocks than this value will be split into chunks of blocks to be transferred in parallel, and reassembled on the destination. By default, is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method > -copybuffersize Size of the copy buffer to use. By default is 8192B. > -delete Delete from target, files missing in source > -diff Use snapshot diff report to identify the difference between source and target > {noformat} > The proper usage is not described in the document either. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262252#comment-16262252 ] usharani commented on HDFS-12826: - [~vagarychen] thanks for taking a look. bq. changing ipc to rpc instead? Currently it is documented as {{rpc}}, which can be confusing since there is no such configuration for the DN. So changing it to {{ipc}} will be clearer. > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation > Affects Versions: 3.0.0-beta1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes command requires the IPC port, but the documentation says the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
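The rpc-vs-ipc confusion comes down to which datanode port refreshNamenodes actually talks to. As an illustrative sketch (the port numbers below are the Hadoop 3.x defaults and are assumptions for illustration, not values from this issue):

```python
# Sketch of the datanode port roles under discussion. refreshNamenodes speaks
# ClientDatanodeProtocol over the datanode's IPC server, so it needs the port
# from dfs.datanode.ipc.address, not the data-transfer port.
# Port values are the Hadoop 3.x defaults, used here only for illustration.

DATANODE_PORTS = {
    "dfs.datanode.address": 9866,       # data transfer; wrong port for this command
    "dfs.datanode.http.address": 9864,  # web UI
    "dfs.datanode.ipc.address": 9867,   # ClientDatanodeProtocol (what refreshNamenodes needs)
}

def refresh_namenodes_command(host):
    """Build the dfsadmin invocation using the IPC port."""
    port = DATANODE_PORTS["dfs.datanode.ipc.address"]
    return "hdfs dfsadmin -refreshNamenodes %s:%d" % (host, port)
```

Passing any other port yields the "Unknown protocol: ...ClientDatanodeProtocol" error shown in the {noformat} block, which is why the documentation should say IPC port explicitly.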
[jira] [Assigned] (HDFS-12833) In Distcp, Delete option not having the proper usage message.
[ https://issues.apache.org/jira/browse/HDFS-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani reassigned HDFS-12833: --- Assignee: usharani > In Distcp, Delete option not having the proper usage message. > - > > Key: HDFS-12833 > URL: https://issues.apache.org/jira/browse/HDFS-12833 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp, hdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Harshakiran Reddy >Assignee: usharani >Priority: Minor > > Basically Delete option applicable only with update or overwrite options. I > tried as per usage message am getting the bellow exception. > {noformat} > bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5 > 2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments: > java.lang.IllegalArgumentException: Delete missing is applicable only with > update or overwrite options > at > org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528) > at > org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487) > at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:141) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:432) > Invalid arguments: Delete missing is applicable only with update or overwrite > options > usage: distcp OPTIONS [source_path...] > OPTIONS > -append Reuse existing data in target files and >append new data to them if possible > -asyncShould distcp execution be blocking > -atomic Commit all changes or none > -bandwidth Specify bandwidth per map in MB, accepts >bandwidth as a fraction. > -blocksperchunk If set to a positive value, fileswith more >blocks than this value will be split into >chunks of blocks to be >transferred in parallel, and reassembled on >the destination. By default, > is 0 and the files will be >transmitted in their entirety without >splitting. 
This switch is only applicable >when the source file system implements >getBlockLocations method and the target >file system implements concat method > -copybuffersize Size of the copy buffer to use. By default > is 8192B. > -delete Delete from target, files missing in source > -diffUse snapshot diff report to identify the >difference between source and target > {noformat} > Even in Document also it's not updated proper usage. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12826: Attachment: HDFS-12826.patch [~Harsha1206] thanks for reporting. It makes sense to fix this. Uploaded the patch; kindly review. > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation > Affects Versions: 3.0.0-beta1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes command requires the IPC port, but the documentation says the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] usharani updated HDFS-12826: Status: Patch Available (was: Open) > Document Saying the RPC port, But it's required IPC port in Balancer Document. > -- > > Key: HDFS-12826 > URL: https://issues.apache.org/jira/browse/HDFS-12826 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, documentation > Affects Versions: 3.0.0-beta1 > Reporter: Harshakiran Reddy > Assignee: usharani > Priority: Minor > Attachments: HDFS-12826.patch > > > In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes command requires the IPC port, but the documentation says the RPC port. > http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer > {noformat} > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:65110 > refreshNamenodes: Unknown protocol: > org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol > bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes > Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port] > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin > -refreshNamenodes host-name:50077 > bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.
[ https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

usharani reassigned HDFS-12826:
    Assignee: usharani

> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-12826
>                 URL: https://issues.apache.org/jira/browse/HDFS-12826
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: balancer & mover, documentation
>    Affects Versions: 3.0.0-beta1
>            Reporter: Harshakiran Reddy
>            Assignee: usharani
>            Priority: Minor
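As the usage string in the reporter's log shows, the fix is documentation-only: {{-refreshNamenodes}} already expects the DataNode's IPC port. A minimal sketch of the corrected invocation (the host name is a placeholder, and 50077 is the IPC port from the reporter's cluster, i.e. whatever {{dfs.datanode.ipc.address}} is configured to):

```shell
# -refreshNamenodes takes the DataNode IPC port (dfs.datanode.ipc.address),
# not the RPC port currently mentioned in the Federation documentation.
DATANODE_HOST="host-name"   # placeholder DataNode host
IPC_PORT="50077"            # IPC port from the reporter's log; site-specific
echo "hdfs dfsadmin -refreshNamenodes ${DATANODE_HOST}:${IPC_PORT}"
```

Running this against the wrong port fails with "Unknown protocol: org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol", exactly as in the log above.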
[jira] [Commented] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.
[ https://issues.apache.org/jira/browse/HDFS-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255134#comment-16255134 ]

usharani commented on HDFS-12825:
---------------------------------

[~Harsha1206] thanks for reporting. [~gabor.bota] Could you please assign this to me? I already have a patch.

> After Block Corrupted, FSCK Report printing the Direct configuration.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-12825
>                 URL: https://issues.apache.org/jira/browse/HDFS-12825
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Harshakiran Reddy
>            Assignee: Gabor Bota
>            Priority: Minor
>              Labels: newbie
>         Attachments: error.JPG
>
> Scenario:
> 1. Corrupt the block in any datanode.
> 2. Take the FSCK report for that file.
>
> Actual output:
> The FSCK report prints the raw configuration key {{dfs.namenode.replication.min}}.
>
> Expected output:
> It should print {{MINIMAL BLOCK REPLICATION}}.
[jira] [Assigned] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

usharani reassigned HDFS-12716:
    Assignee: usharani

> 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes
> to be available
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-12716
>                 URL: https://issues.apache.org/jira/browse/HDFS-12716
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: usharani
>            Assignee: usharani
>
> Currently, 'dfs.datanode.failed.volumes.tolerated' specifies the number of
> failed volumes to tolerate, and changing it requires a datanode restart.
> Since datanode volumes can be changed dynamically, keeping the same fixed
> value for all datanodes may not be a good idea.
> Support a special negative value -x for 'dfs.datanode.failed.volumes.tolerated':
> with n volumes configured, tolerate up to n-x failures, i.e. require at least
> x volumes to remain available.
[jira] [Created] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
usharani created HDFS-12716:
--------------------------------

             Summary: 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
                 Key: HDFS-12716
                 URL: https://issues.apache.org/jira/browse/HDFS-12716
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
            Reporter: usharani

Currently, 'dfs.datanode.failed.volumes.tolerated' specifies the number of failed volumes to tolerate, and changing it requires a datanode restart. Since datanode volumes can be changed dynamically, keeping the same fixed value for all datanodes may not be a good idea.

Support a special negative value -x for 'dfs.datanode.failed.volumes.tolerated': with n volumes configured, tolerate up to n-x failures, i.e. require at least x volumes to remain available.
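The proposed semantics can be sketched as follows. This is only an illustration of the negative-value interpretation described in the issue, not the actual DataNode code; the function name is hypothetical:

```python
def max_tolerated_failures(configured: int, total_volumes: int) -> int:
    """Interpret dfs.datanode.failed.volumes.tolerated.

    configured >= 0: current behavior, tolerate exactly that many
    failed volumes.
    configured == -x (proposed): require at least x volumes to stay
    available, i.e. tolerate up to total_volumes - x failures.
    """
    if configured >= 0:
        return configured
    required_available = -configured
    # Clamp at zero so over-strict settings (x > n) tolerate no failures.
    return max(total_volumes - required_available, 0)

# With 8 configured volumes and a value of -2, up to 6 failures are
# tolerated; add or remove volumes and the limit adjusts automatically.
```

The advantage over the current absolute count is that the same configured value stays meaningful when volumes are added or removed dynamically.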