[jira] [Commented] (HDFS-10453) ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster.

2018-02-03 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351434#comment-16351434
 ] 

He Xiaoqiao commented on HDFS-10453:


hi [~ajayydv]
{quote}This check {{if(bc == null || (bc.isUnderConstruction() && 
block.equals(bc.getLastBlock())))}} in 
{{BlockManager#computeReplicationWorkForBlocks}} exists already. So, if it was 
handling this case we would not face it. I think this additional check for 
{{block.getNumBytes() == BlockCommand.NO_ACK}} is required.
{quote}
Since {{BlockManager#computeReplicationWorkForBlocks}} only evaluates {{(bc == null || 
(bc.isUnderConstruction() && block.equals(bc.getLastBlock())))}} after the 
{{target == null}} check, and that check is always true for a deleted block, the block 
is never removed from {{neededReplications}}. ReplicationMonitor#run therefore keeps 
calling chooseTargets for the same block on every iteration, and no node is ever 
selected even after traversing all nodes of the cluster. This case can be avoided by 
checking {{block == null}} before {{target == null}}.
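
To illustrate the ordering point above, here is a hedged, self-contained sketch (not the 
HDFS-10453 patch; the type and field names below are made-up stand-ins for the real 
BlockManager structures) showing why the deleted-file check has to run before the 
"no targets chosen" check:
{code}
import java.util.ArrayList;
import java.util.List;

public class CheckOrderSketch {
  // Hypothetical stand-in for a block queued in neededReplications.
  static class PendingBlock {
    boolean fileDeleted;   // models bc == null: the file was removed while the lock was released
    String[] targets;      // models the result of chooseTargets(); stays null forever for a deleted block
  }

  /** One pass of the (simplified) replication loop; returns blocks kept for the next pass. */
  static List<PendingBlock> computeWork(List<PendingBlock> neededReplications) {
    List<PendingBlock> remaining = new ArrayList<>();
    for (PendingBlock b : neededReplications) {
      // Proposed order: detect the deleted file FIRST and drop the block for good.
      if (b.fileDeleted) {
        continue;                      // removed from neededReplications
      }
      // If this check ran first instead, a deleted block (whose targets can never be
      // chosen) would be re-queued on every pass and the monitor would spin forever.
      if (b.targets == null || b.targets.length == 0) {
        remaining.add(b);              // legitimately under-replicated, retry next pass
        continue;
      }
      // ... schedule replication work for b (elided) ...
    }
    return remaining;
  }

  public static void main(String[] args) {
    PendingBlock deleted = new PendingBlock();
    deleted.fileDeleted = true;        // deleted between scheduling and target selection
    List<PendingBlock> next = computeWork(List.of(deleted));
    System.out.println("kept for next pass: " + next.size());  // 0 with the proposed ordering
  }
}
{code}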

> ReplicationMonitor thread could stuck for long time due to the race between 
> replication and delete of same file in a large cluster.
> ---
>
> Key: HDFS-10453
> URL: https://issues.apache.org/jira/browse/HDFS-10453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.1, 2.5.2, 2.7.1, 2.6.4
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.7.6
>
> Attachments: HDFS-10453-branch-2.001.patch, 
> HDFS-10453-branch-2.003.patch, HDFS-10453-branch-2.7.004.patch, 
> HDFS-10453-branch-2.7.005.patch, HDFS-10453-branch-2.7.006.patch, 
> HDFS-10453.001.patch
>
>
> The ReplicationMonitor thread can get stuck for a long time and, with low 
> probability, lose data. Consider the typical scenario:
> (1) create and close a file with the default replication (3);
> (2) increase the replication of the file to 10;
> (3) delete the file while the ReplicationMonitor is scheduling blocks belonging to 
> that file for replication.
> When the ReplicationMonitor gets stuck, the NameNode prints logs like:
> {code:xml}
> 2016-04-19 10:20:48,083 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> ..
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
> replicas: expected size is 7 but only 0 storage types can be selected 
> (replication=10, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, 
> DISK, DISK, DISK, DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2016-04-19 10:21:17,184 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 7 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) All required storage types are unavailable:  
> unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> This happens because two threads (#NameNodeRpcServer and #ReplicationMonitor) 
> process the same block at the same moment:
> (1) ReplicationMonitor#computeReplicationWorkForBlocks gets blocks to 
> replicate and leaves the global lock.
> (2) FSNamesystem#delete is invoked to delete the blocks and clears the references in 
> the blocksmap, neededReplications, etc.; the block's numBytes is set to 
> NO_ACK (Long.MAX_VALUE), which indicates that the block deletion does not need an 
> explicit ACK from the node. 
> (3) ReplicationMonitor#computeReplicationWorkForBlocks continues to 
> chooseTargets for the same blocks, and no node is selected even after traversing the 
> whole cluster because no node choice satisfies the goodness 

[jira] [Created] (HDFS-13105) Make hadoop proxy user changes reconfigurable in Datanode

2018-02-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-13105:


 Summary: Make hadoop proxy user changes reconfigurable in Datanode
 Key: HDFS-13105
 URL: https://issues.apache.org/jira/browse/HDFS-13105
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Currently, any change to add or delete a proxy user requires a DN restart, which 
incurs downtime. This jira proposes to make the proxy user configuration 
reconfigurable via the ReconfigurationProtocol so that such changes can take effect 
without a DN restart. For details please refer to 
https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/Superusers.html.
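
For reference, the proxy user settings involved are the standard ones from the 
Superusers document linked above; a typical core-site.xml entry looks roughly like the 
following (the superuser name "oozieuser" and the host/group values are placeholders; 
only the hadoop.proxyuser.<user>.* property-name pattern is standard):
{code:xml}
<!-- Placeholder values for illustration only. -->
<property>
  <name>hadoop.proxyuser.oozieuser.hosts</name>
  <value>host1.example.com,host2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.oozieuser.groups</name>
  <value>group1,group2</value>
</property>
{code}
Today, editing these values only takes effect on a DataNode after a restart; the idea 
here is that a reconfiguration pass (e.g. via {{hdfs dfsadmin -reconfig}}, which already 
drives the ReconfigurationProtocol) could pick them up live.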






[jira] [Commented] (HDFS-13104) Make hadoop proxy user changes reconfigurable in Namenode

2018-02-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351342#comment-16351342
 ] 

genericqa commented on HDFS-13104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-13104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909084/HDFS-13104.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d2af07636275 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2021f4b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22931/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22931/testReport/ |
| Max. process+thread count | 2970 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22931/console |

[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests and causes data copying to fail

2018-02-03 Thread caixiaofeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351338#comment-16351338
 ] 

caixiaofeng commented on HDFS-9092:
---

mark this.

> Nfs silently drops overlapping write requests and causes data copying to fail
> -
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-9092.001.patch, HDFS-9092.002.patch
>
>
> When NOT using the 'sync' option, NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of the data copied via NFS will stay at 1248752400.
> What happens is:
> 1. The write requests from the client are sent asynchronously. 
> 2. The NFS gateway has a handler that handles each incoming request by creating an 
> internal write request structure and putting it into a cache;
> 3. In parallel, a separate thread in the NFS gateway takes requests out of the 
> cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in step 3. 
> The detection of an overlapping write request happens in step 2, but it only 
> checks the write request against the current offset and trims the request if 
> necessary. Because the write requests are sent asynchronously, if two 
> requests are beyond the current offset and they overlap each other, the overlap is 
> not detected and both are put into the cache. This causes the symptom reported here 
> at step 3.
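
As a rough illustration of the gap described in the steps above (this is not the real 
OpenFileCtx code; the class, method names and numbers below are made up), each incoming 
request is compared only against the writer's current offset, so two future requests 
that overlap each other both slip into the cache:
{code}
// Hedged sketch, not the real OpenFileCtx logic.
public class OverlapCheckSketch {
  // Bytes already flushed by the asynchronous writer thread (step 3 above).
  static final long NEXT_OFFSET = 1248752400L;

  // Step 2 only detects overlap with data that has already been written.
  static boolean overlapsWrittenData(long offset, long count) {
    return offset < NEXT_OFFSET;
  }

  public static void main(String[] args) {
    // Two asynchronous requests, both ahead of NEXT_OFFSET, that overlap EACH OTHER.
    long offA = 1248760000L, lenA = 1_000_000L;   // covers [1248760000, 1249760000)
    long offB = 1249260000L, lenB = 1_000_000L;   // covers [1249260000, 1250260000), overlaps A
    System.out.println(overlapsWrittenData(offA, lenA)); // false -> cached without trimming
    System.out.println(overlapsWrittenData(offB, lenB)); // false -> cached as well
    // Neither overlap is flagged, so both land in the cache and the sequential writer
    // thread later hits the inconsistency described in step 3.
  }
}
{code}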






[jira] [Updated] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-03 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13100:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.4
   3.0.1
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in 
> webhdfs.
> -
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-13100.001.patch, HDFS-13100.002.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs, and expects clusters that 
> don't support it to throw UnsupportedOperationException. However, we are 
> seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
> -update -skipcrccheck webhdfs://:/fileX 
> hdfs://:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>   at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward 
> compatibility and easier operation in the field, the latter is preferred.
> But we'd better understand why IllegalArgumentException is thrown instead of 
> UnsupportedOperationException. 
> The correct approach is: check whether the operation is supported, and throw 
> UnsupportedOperationException if not; then check whether the parameter is legal, and 
> throw IllegalArgumentException if it's not. We can do that fix as a follow-up 
> of this jira.
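
A hedged sketch of the client-side direction preferred above (illustrative only, not 
the committed HDFS-13100 patch; the method names are made up): treat an 
IllegalArgumentException from the old server the same way as 
UnsupportedOperationException and fall back instead of failing the whole distcp job.
{code}
// Illustrative sketch only; not the actual WebHdfsFileSystem change.
public class ServerDefaultsFallbackSketch {
  /** Stand-in for the value normally obtained via the GETSERVERDEFAULTS call. */
  static String fetchKeyProviderUriRemotely() {
    // An old cluster rejects the unknown "op" value before recognizing the operation,
    // so the client sees IllegalArgumentException rather than UnsupportedOperationException.
    throw new IllegalArgumentException(
        "Invalid value for webhdfs parameter \"op\": No enum constant ... GETSERVERDEFAULTS");
  }

  static String getKeyProviderUri() {
    try {
      return fetchKeyProviderUriRemotely();
    } catch (UnsupportedOperationException | IllegalArgumentException e) {
      // Backward-compatible path: fall back (e.g. to local configuration, elided here)
      // instead of propagating the failure to the caller.
      return null;
    }
  }

  public static void main(String[] args) {
    System.out.println("key provider uri: " + getKeyProviderUri()); // null -> fallback used
  }
}
{code}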




[jira] [Commented] (HDFS-13100) Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs.

2018-02-03 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351325#comment-16351325
 ] 

Yongjun Zhang commented on HDFS-13100:
--

Committed to trunk, branch-3.0.1, branch-2, branch-2.9, branch-2.8.

Thanks very much [~kihwal] for the review, and [~atm] for the contribution!


> Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in 
> webhdfs.
> -
>
> Key: HDFS-13100
> URL: https://issues.apache.org/jira/browse/HDFS-13100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HDFS-13100.001.patch, HDFS-13100.002.patch
>
>
> HDFS-12386 added a getserverdefaults call to webhdfs, and expects clusters that 
> don't support it to throw UnsupportedOperationException. However, we are 
> seeing:
> {code}
> hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb 
> -update -skipcrccheck webhdfs://:/fileX 
> hdfs://:8020/scale1/fileY
> ...
> 18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered 
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
>   at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> {code}
> We either need to make the server throw UnsupportedOperationException, or 
> make the client handle IllegalArgumentException. For backward 
> compatibility and easier operation in the field, the latter is preferred.
> But we'd better understand why IllegalArgumentException is thrown instead of 
> UnsupportedOperationException. 
> The correct approach is: check whether the operation is supported, and throw 
> UnsupportedOperationException if not; then check whether the parameter is legal, and 
> throw IllegalArgumentException if it's not. We can do that fix as a follow-up 
> of this jira.




[jira] [Commented] (HDFS-11801) Extra argument for header in dfs -count output

2018-02-03 Thread caixiaofeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351320#comment-16351320
 ] 

caixiaofeng commented on HDFS-11801:


hdfs dfs -count -q -h -v / will be useful
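
For context, newer releases already print a header when the -v flag is passed; the 
column names below are from memory and worth double-checking against the 
FileSystemShell documentation, and the values shown are made up:
{code}
$ hdfs dfs -count -q -h -v /
       QUOTA       REM_QUOTA     SPACE_QUOTA REM_SPACE_QUOTA    DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
        none             inf            none             inf         1024        12345          1.2 T /
{code}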

> Extra argument for header in dfs -count output
> --
>
> Key: HDFS-11801
> URL: https://issues.apache.org/jira/browse/HDFS-11801
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: shell, webhdfs
>Affects Versions: 3.0.0-alpha2
>Reporter: Sachin Jose
>Priority: Minor
>
> Hi Team, 
>   
>   The output of hdfs dfs -count -q contains 8 fields. It would be really useful 
> for users if the output of the hdfs dfs -count utility contained a header. I always 
> refer to the -count apache documentation or internal docs to figure out the 
> different fields of hdfs dfs -count -q. 
> There is no need to add a header to the default count output. If a user wants to 
> print the header, they need to explicitly specify an argument, thereby keeping the 
> default behavior the same. 
> Since -h is already used for human-readable output, a -header option can be used 
> for the header. 
> Thanks,
> Sachin
>  


