[jira] [Updated] (HDFS-15346) RBF: Balance data across federation namespaces with DistCp and snapshot diff / Step 2: The DistCpFedBalance.

2020-05-26 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15346:
---
Attachment: HDFS-15346.004.patch

> RBF: Balance data across federation namespaces with DistCp and snapshot diff 
> / Step 2: The DistCpFedBalance.
> 
>
> Key: HDFS-15346
> URL: https://issues.apache.org/jira/browse/HDFS-15346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15346.001.patch, HDFS-15346.002.patch, 
> HDFS-15346.003.patch, HDFS-15346.004.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into two 
> patches. This is the second one. Details can be found at HDFS-15294.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15346) RBF: Balance data across federation namespaces with DistCp and snapshot diff / Step 2: The DistCpFedBalance.

2020-05-26 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117386#comment-17117386
 ] 

Jinglun commented on HDFS-15346:


Refactored the command line options to use CommandLineParser. Uploaded v04.

> RBF: Balance data across federation namespaces with DistCp and snapshot diff 
> / Step 2: The DistCpFedBalance.
> 
>
> Key: HDFS-15346
> URL: https://issues.apache.org/jira/browse/HDFS-15346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15346.001.patch, HDFS-15346.002.patch, 
> HDFS-15346.003.patch, HDFS-15346.004.patch
>
>
> The patch in HDFS-15294 is too big to review, so we split it into two 
> patches. This is the second one. Details can be found at HDFS-15294.






[jira] [Updated] (HDFS-15377) BlockScanner scans one part per round, expect full scans after several rounds

2020-05-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15377:

Attachment: HDFS-15377.001.path
Status: Patch Available  (was: Open)

> BlockScanner scans one part per round, expect full scans after several rounds
> -
>
> Key: HDFS-15377
> URL: https://issues.apache.org/jira/browse/HDFS-15377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15377.001.path
>
>
> For reducing disk IO, one block is separated into multiple parts and the 
> BlockScanner scans only one part per round; after several rounds, the full 
> block is expected to have been scanned.
> Add a new option "dfs.block.scanner.part.size": the maximum data size per 
> scan by the block scanner. This value should be a multiple of the chunk size, 
> for example 512, 1024, or 4096.
>  The default value is -1, which disables partial scan.
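The partial-scan bookkeeping can be sketched as follows (a minimal sketch with illustrative names, not the actual BlockScanner implementation; it assumes a saved per-block resume offset):

```java
// Hypothetical sketch of the proposed partial-scan bookkeeping: each round
// scans at most partSize bytes and resumes from the saved offset, so a block
// of length L is fully covered after ceil(L / partSize) rounds.
public class PartialScanSketch {
    // Offset at which the next round should resume; wraps to 0 once the
    // whole block has been covered. partSize <= 0 disables partial scan.
    static long nextOffset(long currentOffset, long partSize, long blockLength) {
        if (partSize <= 0) {
            return 0;                  // whole block scanned each round
        }
        long next = currentOffset + partSize;
        return next >= blockLength ? 0 : next;
    }

    // Number of rounds needed to cover the whole block once.
    static long roundsToCover(long blockLength, long partSize) {
        if (partSize <= 0) {
            return 1;
        }
        return (blockLength + partSize - 1) / partSize;
    }

    public static void main(String[] args) {
        // A 128 MB block scanned 1 MB at a time is covered in 128 rounds.
        System.out.println(roundsToCover(128L << 20, 1L << 20)); // prints 128
    }
}
```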






[jira] [Updated] (HDFS-15377) BlockScanner scans one part per round, expect full scans after several rounds

2020-05-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15377:

Description: 
For reducing disk IO, one block is separated into multiple parts and the 
BlockScanner scans only one part per round; after several rounds, the full 
block is expected to have been scanned.

Add a new option "dfs.block.scanner.part.size": the maximum data size per scan 
by the block scanner. This value should be a multiple of the chunk size, for 
example 512, 1024, or 4096.
 The default value is -1, which disables partial scan.

  was:For reducing disk IO, one block is separated to multiple parts, 
BlockScanner scans only one part per round. Expect that after several rounds, 
the full block should be scanned. 


> BlockScanner scans one part per round, expect full scans after several rounds
> -
>
> Key: HDFS-15377
> URL: https://issues.apache.org/jira/browse/HDFS-15377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
>
> For reducing disk IO, one block is separated into multiple parts and the 
> BlockScanner scans only one part per round; after several rounds, the full 
> block is expected to have been scanned.
> Add a new option "dfs.block.scanner.part.size": the maximum data size per 
> scan by the block scanner. This value should be a multiple of the chunk size, 
> for example 512, 1024, or 4096.
>  The default value is -1, which disables partial scan.






[jira] [Created] (HDFS-15377) BlockScanner scans one part per round, expect full scans after several rounds

2020-05-26 Thread Yang Yun (Jira)
Yang Yun created HDFS-15377:
---

 Summary: BlockScanner scans one part per round, expect full scans 
after several rounds
 Key: HDFS-15377
 URL: https://issues.apache.org/jira/browse/HDFS-15377
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yang Yun
Assignee: Yang Yun


For reducing disk IO, one block is separated into multiple parts and the 
BlockScanner scans only one part per round; after several rounds, the full 
block is expected to have been scanned.






[jira] [Commented] (HDFS-13274) RBF: Extend RouterRpcClient to use multiple sockets

2020-05-26 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117248#comment-17117248
 ] 

Janus Chow commented on HDFS-13274:
---

The stats are as follows:
{code:java}
"GetBlockLocationsNumOps" : 233328385,
"GetBlockLocationsAvgTime" : 875.613523270563,
"FileAlreadyExistsExceptionNumOps" : 129,
"FileAlreadyExistsExceptionAvgTime" : 155.0,
"FileNotFoundExceptionNumOps" : 5259,
"FileNotFoundExceptionAvgTime" : 245.0,
"DeleteNumOps" : 749223,
"DeleteAvgTime" : 159.27272727272728,
"GetServerDefaultsNumOps" : 816182,
"GetServerDefaultsAvgTime" : 948.9306930693069,
"SetOwnerNumOps" : 13119,
"SetOwnerAvgTime" : 11.0,
"ReportBadBlocksNumOps" : 8,
"ReportBadBlocksAvgTime" : 1.0,
"FsyncNumOps" : 76351,
"FsyncAvgTime" : 1.0,
"GetAdditionalDatanodeNumOps" : 6,
"GetAdditionalDatanodeAvgTime" : 1.0,
"AddBlockNumOps" : 1472370,
"AddBlockAvgTime" : 521.6969696969697,
"CreateNumOps" : 1490114,
"CreateAvgTime" : 698.925925925926,
"SetPermissionNumOps" : 94423,
"SetPermissionAvgTime" : 140.66,
"UpdateBlockForPipelineNumOps" : 19,
"UpdateBlockForPipelineAvgTime" : 41.0,
"GetEZForPathNumOps" : 25486,
"GetEZForPathAvgTime" : 93.0,
"AlreadyBeingCreatedExceptionNumOps" : 20,
"AlreadyBeingCreatedExceptionAvgTime" : 1731.0,
"GetContentSummaryNumOps" : 20793,
"GetContentSummaryAvgTime" : 1020.0,
"AccessControlExceptionNumOps" : 107,
"AccessControlExceptionAvgTime" : 23.0,
"GetListingNumOps" : 24382314,
"GetListingAvgTime" : 502.0066225165563,
"IOExceptionNumOps" : 6,
"IOExceptionAvgTime" : 446.0,
"RecoveryInProgressExceptionNumOps" : 3,
"RecoveryInProgressExceptionAvgTime" : 6.0,
"SetReplicationNumOps" : 70334,
"SetReplicationAvgTime" : 97.5,
"CheckAccessNumOps" : 3862975,
"CheckAccessAvgTime" : 37.300970873786405,
"Rename2NumOps" : 7422,
"Rename2AvgTime" : 226.0,
"AbandonBlockNumOps" : 1242,
"AbandonBlockAvgTime" : 143.0,
"StandbyExceptionNumOps" : 25,
"StandbyExceptionAvgTime" : 0.0,
"RenewLeaseNumOps" : 314775,
"RenewLeaseAvgTime" : 657.375,
"RenameNumOps" : 1055746,
"RenameAvgTime" : 191.19230769230768,
"MkdirsNumOps" : 382334,
"MkdirsAvgTime" : 124.33,
"SetTimesNumOps" : 166,
"SetTimesAvgTime" : 0.0,
"UpdatePipelineNumOps" : 19,
"UpdatePipelineAvgTime" : 49.0,
"CompleteNumOps" : 1498564,
"CompleteAvgTime" : 272.2,
"GetFileInfoNumOps" : 89481205,
"GetFileInfoAvgTime" : 265.0645347162201
{code}
Today's metrics don't seem that high for renewLease.

I have always found the metrics provided by RBF to fluctuate a lot.
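As a rough cross-check of the counters above (back-of-the-envelope arithmetic, not RBF code; only the largest NumOps values are summed):

```java
// Quick arithmetic on the pasted counters: renewLease's share of the
// sampled operations is well under a percent.
public class RenewLeaseShare {
    static double sharePercent(long renewLease, long otherOps) {
        return 100.0 * renewLease / (renewLease + otherOps);
    }

    public static void main(String[] args) {
        long renewLease = 314_775L;
        long otherOps = 233_328_385L   // getBlockLocations
                      + 89_481_205L    // getFileInfo
                      + 24_382_314L    // getListing
                      + 3_862_975L     // checkAccess
                      + 1_498_564L     // complete
                      + 1_490_114L     // create
                      + 1_472_370L;    // addBlock
        // roughly 0.09% of the sampled ops
        System.out.printf("renewLease share: %.2f%%%n",
            sharePercent(renewLease, otherOps));
    }
}
```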

> RBF: Extend RouterRpcClient to use multiple sockets
> ---
>
> Key: HDFS-13274
> URL: https://issues.apache.org/jira/browse/HDFS-13274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>
> HADOOP-13144 introduces the ability to create multiple connections for the 
> same user and use different sockets. The RouterRpcClient should use this 
> approach to get better throughput.






[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117135#comment-17117135
 ] 

Íñigo Goiri commented on HDFS-15376:


+1 on  [^HDFS-15376.001.patch].

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, there is an exception when executing 
> the following command:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
>      
> I checked the source code and found that creating the directory should use 
> PUT to submit the request.
>     Executing the command in PUT mode gives the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {"boolean":true}
> and the directory is created successfully.






[jira] [Commented] (HDFS-15375) Reconstruction Work should not happen for Corrupt Block

2020-05-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117035#comment-17117035
 ] 

Hadoop QA commented on HDFS-15375:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
0s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestStripedFileAppend |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29370/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13004052/HDFS-15375.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 2b66a89f40a6 4.15.0-101-generic 

[jira] [Commented] (HDFS-15370) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-05-26 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116927#comment-17116927
 ] 

Uma Maheswara Rao G commented on HDFS-15370:


Let me put out a summary of the original issue here first.
Take an example of a mount link: /testme --> /a/b/c
  Current behaviors: 
 *       listStatus("/") would return FileStatus(path="/testme", 
*isDir=false*, ...)
 *       getFileStatus("/testme") would return FileStatus(path="/testme", 
*isDir=true*, ...)
 
 From the implementation perspective, when doing the above operations on the 
link directory directly, they execute on the targetFileSystem. Since 
getFileStatus runs on the link, i.e. "/testme", it executes on the target fs, 
which was initialized with the directory "/a/b/c". The listStatus call above 
runs on "/", which is not a link but an internal directory. So, when executing 
ls on the internal directory, we have the following check.
 
{code:java}
listStatus(Path path) {
  if (inode.isLink()) {
    return FileStatus(path, isDir=false, ...);
  } else {
    return FileStatus(path, isDir=true, ...);
  }
}
{code}
 
I think the correct behavior should be: when the inode is a link, get the 
fileStatus from the target fs and return that directory status, instead of 
always hardcoding isDir=false.
*Proposed pseudo code:*
{code:java}
class InternalDirViewFS {
  FileStatus[] listStatus(Path path) {
    if (inode.isLink()) {
      // gets the status from the target link
      FileStatus fileStatus = link.targetFileSystem
          .getFileStatus(new Path(link.targetFileSystem.getUri()));
      return FileStatus(path, isDir=fileStatus.isDirectory(), ...);
    } else {
      // this is an InternalDir; it is always isDir=true
      return FileStatus(path, isDir=true, ...);
    }
  }
}
{code}

Let me know if this makes sense to you.

*Coming to your questions:*
{quote} Question 1: 
Do we support symlink creation only in this usecase of viewfs to actual 
namespace  ( which is actually targeting only directories really in the target 
namespace ) through core-site.xml ? Or we can do it for files also ? Dont 
remember if there is any other way to do it . {quote}
There are two things getting mixed up here: one is the mount link, the other 
is an actual filesystem symlink. From the ViewFS perspective, we show the 
mount links as symlinks in FileStatus. When the target FS supports fs-level 
symlinks, ViewFS just honors that symlink behavior; once the target is 
resolved from ViewFS, the target fs behavior does not change.

For ViewFS, yes, we have support only via the xml config file. For the target 
fs, if it supports FileSystem-level symlink APIs, we will continue to support 
them. For ViewFS itself, you cannot create regular symlinks via the API.
  
{code:java}
  /**
   * See {@link FileContext#createSymlink(Path, Path, boolean)}.
   */
  public void createSymlink(final Path target, final Path link,
      final boolean createParent) throws AccessControlException,
      FileAlreadyExistsException, FileNotFoundException,
      ParentNotDirectoryException, UnsupportedFileSystemException,
      IOException {
    // Supporting filesystems should override this method
    throw new UnsupportedOperationException(
        "Filesystem does not support symlinks!");
  }
{code}
We support mount links via configuration and represent them as symlinks in 
FileStatus.
To know whether a particular fs supports actual symlinks, you can check with 
this API:

{code:java}
public boolean supportsSymlinks()
{code}
  
{quote}Question 2: 
So we get isSymlink() as true only for symlink_name (as configured in 
fs.viewfs.mounttable.fsname.link./symlink_name=hdfs://namespace/target_dir) 
though core-site customisation and no other hdfs cli or feature does it right 
?{quote}

Yes, from the ViewFS perspective. Coming to the target fs: HDFS supports 
symlinks, while many other filesystems do not, e.g. S3AFileSystem does not 
support them.
DistributedFileSystem.java#supportsSymlinks():
{code:java}
  @Override
  public boolean supportsSymlinks() {
    return true;
  }
{code}
Hope these answers help. Thank you.
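The contract described above can be illustrated with a self-contained sketch (sketch classes, not the real Hadoop FileSystem/DistributedFileSystem): createSymlink throws unless a subclass both overrides it and reports supportsSymlinks() == true.

```java
// Sketch of the symlink contract: the base class rejects symlinks, an
// HDFS-like subclass opts in, and callers check supportsSymlinks() first.
abstract class SketchFileSystem {
    public boolean supportsSymlinks() { return false; }

    public void createSymlink(String target, String link, boolean createParent) {
        // Supporting filesystems should override this method
        throw new UnsupportedOperationException(
            "Filesystem does not support symlinks!");
    }
}

class SketchDistributedFileSystem extends SketchFileSystem {
    @Override public boolean supportsSymlinks() { return true; }

    @Override
    public void createSymlink(String target, String link, boolean createParent) {
        System.out.println("symlink " + link + " -> " + target);
    }
}

public class SymlinkContractDemo {
    // Returns true if the symlink was created, false if unsupported.
    static boolean tryCreate(SketchFileSystem fs, String target, String link) {
        if (!fs.supportsSymlinks()) {
            return false;   // e.g. S3A-style filesystems land here
        }
        fs.createSymlink(target, link, true);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(tryCreate(
            new SketchDistributedFileSystem(), "/a/b/c", "/testme"));
    }
}
```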
 

> listStatus and getFileStatus behave inconsistent in the case of ViewFs 
> implementation for isDirectory
> -
>
> Key: HDFS-15370
> URL: https://issues.apache.org/jira/browse/HDFS-15370
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Srinivasu Majeti
>Priority: Major
>  Labels: viewfs
>
> listStatus implementation in ViewFs and getFileStatus does not return 
> consistent values for an element on isDirectory value. listStatus returns 
> isDirectory of all softlinks as false and getFileStatus 

[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-26 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116921#comment-17116921
 ] 

hemanthboyina commented on HDFS-15376:
--

No, this is fine [~elgoiri].

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, there is an exception when executing 
> the following command:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
>      
> I checked the source code and found that creating the directory should use 
> PUT to submit the request.
>     Executing the command in PUT mode gives the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {"boolean":true}
> and the directory is created successfully.






[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116905#comment-17116905
 ] 

Íñigo Goiri commented on HDFS-15376:


[~hemanthboyina], as we are doing this, do you think we should extend anything 
else in the doc?

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, there is an exception when executing 
> the following command:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
>      
> I checked the source code and found that creating the directory should use 
> PUT to submit the request.
>     Executing the command in PUT mode gives the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {"boolean":true}
> and the directory is created successfully.






[jira] [Commented] (HDFS-13274) RBF: Extend RouterRpcClient to use multiple sockets

2020-05-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116900#comment-17116900
 ] 

Íñigo Goiri commented on HDFS-13274:


Would it be possible to post the rest of the ops stats for comparison?
If this is only 0.1% of the ops (as this has been running for 3 months), then 
it is not that large.
It would be pretty impressive if I found the root cause right away but it is 
unlikely :)

Assuming the issue is renewLease(), I don't have a way to reduce the load of 
this... maybe cache it?
What workload is generating so many renews? Hive?

> RBF: Extend RouterRpcClient to use multiple sockets
> ---
>
> Key: HDFS-13274
> URL: https://issues.apache.org/jira/browse/HDFS-13274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>
> HADOOP-13144 introduces the ability to create multiple connections for the 
> same user and use different sockets. The RouterRpcClient should use this 
> approach to get better throughput.






[jira] [Updated] (HDFS-15375) Reconstruction Work should not happen for Corrupt Block

2020-05-26 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15375:
-
Attachment: HDFS-15375.001.patch
Status: Patch Available  (was: Open)

> Reconstruction Work should not happen for Corrupt Block
> ---
>
> Key: HDFS-15375
> URL: https://issues.apache.org/jira/browse/HDFS-15375
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15375-testrepro.patch, HDFS-15375.001.patch
>
>
> In BlockManager#updateNeededReconstructions, while updating 
> NeededReconstruction we add pending-reconstruction blocks to the live 
> replicas:
> {code:java}
>  int pendingNum = pendingReconstruction.getNumReplicas(block);
>  int curExpectedReplicas = getExpectedRedundancyNum(block);
>  if (!hasEnoughEffectiveReplicas(block, repl, pendingNum)) {
>    neededReconstruction.update(block, repl.liveReplicas() + pendingNum,
> {code}
> But if two replicas were in pending reconstruction (due to corruption) and 
> the third replica is corrupted, the block should be in 
> QUEUE_WITH_CORRUPT_BLOCKS; because of the above logic it gets added to 
> QUEUE_LOW_REDUNDANCY instead, which makes the RedundancyMonitor reconstruct 
> a corrupted block, which is wrong.
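The queue-selection issue can be sketched with a deliberately simplified, self-contained model (illustrative constants and logic, not the real BlockManager):

```java
// Simplified sketch: counting pending-reconstruction replicas as effective
// replicas routes a block with zero live replicas into the low-redundancy
// queue instead of the corrupt queue.
public class QueueSelectionSketch {
    static final int QUEUE_LOW_REDUNDANCY = 0;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 1;

    // Simplified priority rule: a block with no effective replicas left is
    // queued as corrupt, anything else as low-redundancy.
    static int chooseQueue(int effectiveReplicas) {
        return effectiveReplicas == 0 ? QUEUE_WITH_CORRUPT_BLOCKS
                                      : QUEUE_LOW_REDUNDANCY;
    }

    public static void main(String[] args) {
        int liveReplicas = 0;  // third replica just got corrupted
        int pendingNum = 2;    // two replicas still in pending reconstruction

        // Current logic passes live + pending, so the block lands in
        // QUEUE_LOW_REDUNDANCY and the RedundancyMonitor tries to
        // reconstruct a corrupt block.
        System.out.println(chooseQueue(liveReplicas + pendingNum));

        // Counting only live replicas routes it to QUEUE_WITH_CORRUPT_BLOCKS.
        System.out.println(chooseQueue(liveReplicas));
    }
}
```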






[jira] [Commented] (HDFS-15376) Update the error about command line POST in httpfs documentation

2020-05-26 Thread bianqi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116736#comment-17116736
 ] 

bianqi commented on HDFS-15376:
---

In HDFS-11561, the test creates the directory with HTTP PUT:
{code:java}
@@ -227,6 +227,24 @@ public void testHdfsAccess() throws Exception {
 @TestDir
 @TestJetty
 @TestHdfs
+  public void testMkdirs() throws Exception {
+    createHttpFSServer(false);
+    String user = HadoopUsersConfTestHelper.getHadoopUsers()[0];
+    URL url = new URL(TestJettyHelper.getJettyURL(), MessageFormat.format(
+        "/webhdfs/v1/tmp/sub-tmp?user.name={0}&op=MKDIRS", user));
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod("PUT");
+    conn.connect();
+    Assert.assertEquals(conn.getResponseCode(), HttpURLConnection.HTTP_OK);
+
+    getStatus("/tmp/sub-tmp", "LISTSTATUS");
+  }
{code}
But the documentation uses HTTP POST in the curl example:
{quote}-* `$ curl -X POST 
[http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=mkdirs]` creates the HDFS 
`/user/foo.bar` directory.
{quote}
{quote}+* `$ curl -X POST 
'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'` 
creates the HDFS `/user/foo/bar` directory.
{quote}
 

> Update the error about command line POST in httpfs documentation
> 
>
> Key: HDFS-15376
> URL: https://issues.apache.org/jira/browse/HDFS-15376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HDFS-15376.001.patch
>
>
>    In the official Hadoop documentation, there is an exception when executing 
> the following command:
> {quote} {{curl -X POST 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {quote}     *{"RemoteException":{"message":"Invalid HTTP POST operation 
> [MKDIRS]","exception":"IOException","javaClassName":"java.io.IOException"}}*
> {quote}
>      
> I checked the source code and found that creating the directory should use 
> PUT to submit the request.
>     Executing the command in PUT mode gives the following result:
> {quote}     {{curl -X PUT 
> 'http://httpfs-host:14000/webhdfs/v1/user/foo/bar?op=MKDIRS&user.name=foo'}} 
> creates the HDFS {{/user/foo/bar}} directory.
> {quote}
>      The command line returns:
> {"boolean":true}
> and the directory is created successfully.






[jira] [Commented] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116640#comment-17116640
 ] 

Hudson commented on HDFS-15373:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18295 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18295/])
HDFS-15373. Fix number of threads in (ayushsaxena: rev 
6c9f75cf16b4a321a3b6965b76c53033843ce178)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java


> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now; make it fixed.
> Presently {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
> {{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
> never gets full and the pool is always confined to 1 thread irrespective of 
> {{numThreads}}.
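The behaviour the description refers to is a standard java.util.concurrent.ThreadPoolExecutor gotcha and can be reproduced in isolation. The sketch below is illustrative, not Hadoop code: it builds a pool the pre-patch way (corePoolSize of 1, maximumPoolSize of numThreads, unbounded queue), floods it with blocked tasks, and observes that only one worker thread is ever created.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticPoolDemo {

    /**
     * Builds a pool with corePoolSize = 1, maximumPoolSize = numThreads and
     * an unbounded queue, floods it with blocked tasks, and returns how many
     * worker threads actually exist at that point.
     */
    static int observedPoolSize(int numThreads) throws InterruptedException {
        ThreadPoolExecutor elastic = new ThreadPoolExecutor(
                1, numThreads, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded queue

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 4 * numThreads; i++) {
            elastic.submit(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        // Extra workers are only created when the queue rejects a task,
        // which an unbounded LinkedBlockingQueue never does.
        int size = elastic.getPoolSize();
        release.countDown();
        elastic.shutdown();
        elastic.awaitTermination(10, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        // Stays at 1 regardless of numThreads: the pool never grows.
        System.out.println(ElasticPoolDemo.observedPoolSize(4));
    }
}
```

Making the pool fixed-size, as the patch title describes, sidesteps this: with corePoolSize equal to maximumPoolSize, all workers are eligible to be created up front.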






[jira] [Commented] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116627#comment-17116627
 ] 

Ayush Saxena commented on HDFS-15373:
-

Committed to trunk.
Thanx [~elgoiri] for the review!!!

Have updated the description as well.

> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now; make it fixed.
> Presently {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
> {{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
> never gets full and the pool is always confined to 1 thread irrespective of 
> {{numThreads}}.






[jira] [Updated] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15373:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now; make it fixed.
> Presently {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
> {{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
> never gets full and the pool is always confined to 1 thread irrespective of 
> {{numThreads}}.






[jira] [Updated] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15373:

Description: 
The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
right now; make it fixed.
Presently {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
{{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
never gets full and the pool is always confined to 1 thread irrespective of 
{{numThreads}}.

  was:
The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
right now, make it fixed.
The 


> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now; make it fixed.
> Presently {{corePoolSize}} is set to 1 and {{maximumPoolSize}} is set to 
> {{numThreads}}, but since the queue size is {{Integer.MAX_VALUE}}, the queue 
> never gets full and the pool is always confined to 1 thread irrespective of 
> {{numThreads}}.






[jira] [Updated] (HDFS-15373) Fix number of threads in IPCLoggerChannel#createParallelExecutor

2020-05-26 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15373:

Description: 
The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
right now, make it fixed.
The 

  was:The number of threads in IPCLoggerChannel#createParallelExecutor is 
elastic right now, make it fixed.


> Fix number of threads in IPCLoggerChannel#createParallelExecutor
> 
>
> Key: HDFS-15373
> URL: https://issues.apache.org/jira/browse/HDFS-15373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15373-01.patch, HDFS-15373-02.patch
>
>
> The number of threads in IPCLoggerChannel#createParallelExecutor is elastic 
> right now, make it fixed.
> The 






[jira] [Commented] (HDFS-15180) DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.

2020-05-26 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116541#comment-17116541
 ] 

Xiaoqiao He commented on HDFS-15180:


[~sodonnell], [~weichiu], [~junping_du], are you interested in this 
improvement? In our production environment we have applied this feature based 
on branch-2.7 and it works well on our side. Looking forward to more feedback 
or suggestions to push this feature forward. Thanks.

>  DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
> ---
>
> Key: HDFS-15180
> URL: https://issues.apache.org/jira/browse/HDFS-15180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: zhuqi
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-15180.001.patch, HDFS-15180.002.patch, 
> HDFS-15180.003.patch, HDFS-15180.004.patch, 
> image-2020-03-10-17-22-57-391.png, image-2020-03-10-17-31-58-830.png, 
> image-2020-03-10-17-34-26-368.png, image-2020-04-09-11-20-36-459.png
>
>
> The FsDatasetImpl datasetLock is heavy when there are many namespaces in a 
> big cluster. We can split the FsDatasetImpl datasetLock by block pool.
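As a rough illustration of the direction proposed here (a purely hypothetical sketch with made-up names, not the actual HDFS-15180 patch), one lock per block pool ID lets operations on different namespaces proceed without contending on a single dataset-wide lock:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Hypothetical sketch of per-block-pool locking: instead of one
 * dataset-wide lock, each block pool ID maps to its own lock, so
 * operations on different namespaces no longer serialize on each other.
 */
public class BlockPoolLockManager {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
            new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String bpid) {
        // One fair lock per block pool, created lazily on first use.
        return locks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock(true));
    }

    /** Runs an operation while holding the write lock of one block pool only. */
    public void withWriteLock(String bpid, Runnable op) {
        ReentrantReadWriteLock.WriteLock wl = lockFor(bpid).writeLock();
        wl.lock();
        try {
            op.run();
        } finally {
            wl.unlock();
        }
    }

    /** True while some thread holds the write lock of this block pool. */
    public boolean isWriteLocked(String bpid) {
        return lockFor(bpid).isWriteLocked();
    }
}
```

With this shape, a heartbeat touching block pool BP-1 and a block report for BP-2 can hold their locks concurrently, which is the contention the issue aims to remove.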






[jira] [Commented] (HDFS-14984) HDFS setQuota: Error message should be added for invalid input max range value to hdfs dfsadmin -setQuota command

2020-05-26 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116519#comment-17116519
 ] 

Zhao Yi Ming commented on HDFS-14984:
-

Thanks, [~hemanthboyina]! Assigning it to myself to work on.

> HDFS setQuota: Error message should be added for invalid input max range 
> value to hdfs dfsadmin -setQuota command
> -
>
> Key: HDFS-14984
> URL: https://issues.apache.org/jira/browse/HDFS-14984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Souryakanta Dwivedy
>Assignee: Zhao Yi Ming
>Priority: Minor
> Attachments: image-2019-11-13-14-05-19-603.png, 
> image-2019-11-13-14-07-04-536.png
>
>
> An error message should be added for the invalid input max range value 
> "9223372036854775807" to the hdfs dfsadmin -setQuota command.
>  * Set quota for a directory with the invalid input value 
> "9223372036854775807": the command succeeds without displaying any result, 
> but the quota value is not actually set for the directory. From a usability 
> point of view it would be better to display an error message for the 
> invalid max range value "9223372036854775807", as is done when the input 
> value is "0". For example: "hdfs dfsadmin -setQuota 9223372036854775807 
> /quota"
>              !image-2019-11-13-14-05-19-603.png!
>  
>  * Try to set quota for a directory with the invalid input value "0": it 
> throws the error message "setQuota: Invalid values for quota : 0 and 
> 9223372036854775807". For example: "hdfs dfsadmin -setQuota 0 /quota"
>           !image-2019-11-13-14-07-04-536.png!
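For context: 9223372036854775807 is Long.MAX_VALUE, which HDFS treats internally as the "quota not set" sentinel (HdfsConstants.QUOTA_DONT_SET); that is presumably why the command silently does nothing. A hypothetical validation along the lines the report asks for (names illustrative, not the actual dfsadmin code) might look like:

```java
public class QuotaArgCheck {
    // Mirrors the HdfsConstants.QUOTA_DONT_SET / QUOTA_RESET sentinels.
    static final long QUOTA_DONT_SET = Long.MAX_VALUE;
    static final long QUOTA_RESET = -1L;

    /**
     * Hypothetical client-side validation: reject 0, other non-positive
     * values (except the reset sentinel), and the Long.MAX_VALUE sentinel,
     * instead of silently accepting the latter.
     */
    static String validate(long quota) {
        if (quota == QUOTA_DONT_SET || (quota <= 0 && quota != QUOTA_RESET)) {
            return "setQuota: Invalid values for quota : " + quota;
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(validate(9223372036854775807L)); // rejected
        System.out.println(validate(0L));                   // rejected
        System.out.println(validate(1024L));                // accepted
    }
}
```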






[jira] [Assigned] (HDFS-14984) HDFS setQuota: Error message should be added for invalid input max range value to hdfs dfsadmin -setQuota command

2020-05-26 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming reassigned HDFS-14984:
---

Assignee: Zhao Yi Ming

> HDFS setQuota: Error message should be added for invalid input max range 
> value to hdfs dfsadmin -setQuota command
> -
>
> Key: HDFS-14984
> URL: https://issues.apache.org/jira/browse/HDFS-14984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Souryakanta Dwivedy
>Assignee: Zhao Yi Ming
>Priority: Minor
> Attachments: image-2019-11-13-14-05-19-603.png, 
> image-2019-11-13-14-07-04-536.png
>
>
> An error message should be added for the invalid input max range value 
> "9223372036854775807" to the hdfs dfsadmin -setQuota command.
>  * Set quota for a directory with the invalid input value 
> "9223372036854775807": the command succeeds without displaying any result, 
> but the quota value is not actually set for the directory. From a usability 
> point of view it would be better to display an error message for the 
> invalid max range value "9223372036854775807", as is done when the input 
> value is "0". For example: "hdfs dfsadmin -setQuota 9223372036854775807 
> /quota"
>              !image-2019-11-13-14-05-19-603.png!
>  
>  * Try to set quota for a directory with the invalid input value "0": it 
> throws the error message "setQuota: Invalid values for quota : 0 and 
> 9223372036854775807". For example: "hdfs dfsadmin -setQuota 0 /quota"
>           !image-2019-11-13-14-07-04-536.png!






[jira] [Commented] (HDFS-14984) HDFS setQuota: Error message should be added for invalid input max range value to hdfs dfsadmin -setQuota command

2020-05-26 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116475#comment-17116475
 ] 

hemanthboyina commented on HDFS-14984:
--

Thanks for the interest, [~zhaoyim]. You can work on this issue and assign it 
to yourself.

> HDFS setQuota: Error message should be added for invalid input max range 
> value to hdfs dfsadmin -setQuota command
> -
>
> Key: HDFS-14984
> URL: https://issues.apache.org/jira/browse/HDFS-14984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: image-2019-11-13-14-05-19-603.png, 
> image-2019-11-13-14-07-04-536.png
>
>
> An error message should be added for the invalid input max range value 
> "9223372036854775807" to the hdfs dfsadmin -setQuota command.
>  * Set quota for a directory with the invalid input value 
> "9223372036854775807": the command succeeds without displaying any result, 
> but the quota value is not actually set for the directory. From a usability 
> point of view it would be better to display an error message for the 
> invalid max range value "9223372036854775807", as is done when the input 
> value is "0". For example: "hdfs dfsadmin -setQuota 9223372036854775807 
> /quota"
>              !image-2019-11-13-14-05-19-603.png!
>  
>  * Try to set quota for a directory with the invalid input value "0": it 
> throws the error message "setQuota: Invalid values for quota : 0 and 
> 9223372036854775807". For example: "hdfs dfsadmin -setQuota 0 /quota"
>           !image-2019-11-13-14-07-04-536.png!


