[jira] [Commented] (HDFS-12876) Ozone: moving NodeType from OzoneConsts to Ozone.proto

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274072#comment-16274072
 ] 

genericqa commented on HDFS-12876:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900148/HDFS-12876-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 6c09fb2f565b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-12862) CacheDirective may become invalid when NN restarts or makes a transition to Active.

2017-11-30 Thread Wang XL (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-12862:
---
Summary: CacheDirective may become invalid when NN restarts or makes a 
transition to Active.  (was: When modifying cacheDirective, editLog may 
serialize relative expiryTime)

> CacheDirective may become invalid when NN restarts or makes a transition to Active.
> -
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying 
> a cacheDirective, the expiration in the directive may be a relative 
> expiryTime, and the EditLog will serialize that relative expiry time.
> {code:java}
>   static void modifyCacheDirective(
>       FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
>       EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
>     final FSPermissionChecker pc = getFsPermissionChecker(fsn);
>     cacheManager.modifyDirective(directive, pc, flags);
>     fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
>   }
> {code}
> But when the SBN replays the log, it will invoke 
> FSImageSerialization#readCacheDirectiveInfo, which reads it as an absolute 
> expiryTime. This results in an inconsistency.
> {code:java}
>   public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>       throws IOException {
>     CacheDirectiveInfo.Builder builder =
>         new CacheDirectiveInfo.Builder();
>     builder.setId(readLong(in));
>     int flags = in.readInt();
>     if ((flags & 0x1) != 0) {
>       builder.setPath(new Path(readString(in)));
>     }
>     if ((flags & 0x2) != 0) {
>       builder.setReplication(readShort(in));
>     }
>     if ((flags & 0x4) != 0) {
>       builder.setPool(readString(in));
>     }
>     if ((flags & 0x8) != 0) {
>       builder.setExpiration(
>           CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
>     }
>     if ((flags & ~0xF) != 0) {
>       throw new IOException("unknown flags set in " +
>           "ModifyCacheDirectiveInfoOp: " + flags);
>     }
>     return builder.build();
>   }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in))) 
> reads it back as an absolute expiryTime.
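
A hedged illustration of one direction a fix could take (not the committed 
patch; the helper name is hypothetical, and the {{Builder}} copy constructor 
and {{Expiration#isRelative()}} accessor are assumed from the public 
{{CacheDirectiveInfo}} API): normalize a relative expiration to an absolute one 
before the directive reaches the edit log, so the 
{{newAbsolute(readLong(in))}} read path round-trips.

{code:java}
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

public final class ExpirationUtil {
  private ExpirationUtil() {}

  /** Sketch only: return a copy of the directive with an absolute expiration. */
  static CacheDirectiveInfo withAbsoluteExpiration(CacheDirectiveInfo directive,
      long nowMs) {
    CacheDirectiveInfo.Expiration expiry = directive.getExpiration();
    if (expiry == null || !expiry.isRelative()) {
      return directive;  // already absolute (or unset); nothing to normalize
    }
    return new CacheDirectiveInfo.Builder(directive)
        .setExpiration(
            CacheDirectiveInfo.Expiration.newAbsolute(nowMs + expiry.getMillis()))
        .build();
  }
}
{code}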



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274006#comment-16274006
 ] 

genericqa commented on HDFS-12713:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 13 new + 581 unchanged 
- 6 fixed = 594 total (was 587) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12713 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900134/HDFS-12713-HDFS-9806.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-11-30 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273971#comment-16273971
 ] 

Surendra Singh Lilhore commented on HDFS-10285:
---

Thanks [~umamaheswararao], [~rakeshr] and [~anu] for the new design. Starting 
SPS as an independent service is a good proposal; it will really reduce the 
namenode workload.

I have one question about the new design. How will other services (like HBase) 
communicate with SPS? Do they need to create a new client for the SPS service, 
or will the {{DistributedFileSystem}} API internally redirect the call to the 
SPS service?

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory or file to specify the user's 
> preference for where the physical blocks should be stored. When the user sets 
> the storage policy before writing data, the blocks can take advantage of the 
> policy preferences and the physical blocks are stored accordingly.
> If the user sets the storage policy after the file has been written and 
> completed, the blocks will already have been written with the default storage 
> policy (namely DISK). The user then has to run the ‘Mover tool’ explicitly, 
> specifying all such file names as a list. In some distributed-system 
> scenarios (e.g. HBase) it is difficult to collect all the files and run the 
> tool, because different nodes can write files independently and the files can 
> have different paths.
> Another scenario is renaming a file from a directory with an effective 
> storage policy (inherited from its parent) into a directory with a different 
> effective policy: the rename does not copy the inherited storage policy from 
> the source, so the file takes the policy of the destination's parent 
> directory. This rename is only a metadata change in the Namenode; the 
> physical blocks still remain placed per the source storage policy.
> Tracking all such files from distributed nodes (e.g. region servers) and 
> running the Mover tool is therefore difficult for admins. The proposal here 
> is to provide an API in the Namenode itself to trigger storage policy 
> satisfaction. A daemon thread inside the Namenode would track such calls and 
> issue movement commands to the DNs.
> Will post a detailed design document soon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273963#comment-16273963
 ] 

genericqa commented on HDFS-12591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
23s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 428 unchanged - 0 fixed = 435 total (was 428) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Should 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap$LevelDBReader$FRIterator
 be a _static_ inner class?  At LevelDBFileRegionAliasMap.java:inner class?  At 
LevelDBFileRegionAliasMap.java:[lines 168-198] |
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900139/HDFS-12591-HDFS-9806.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 10967c474e3c 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / 64be409 |
| maven | 

[jira] [Updated] (HDFS-12876) Ozone: moving NodeType from OzoneConsts to Ozone.proto

2017-11-30 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12876:
---
Attachment: HDFS-12876-HDFS-7240.000.patch

Re-attaching the patch to trigger Jenkins.

> Ozone: moving NodeType from OzoneConsts to Ozone.proto
> --
>
> Key: HDFS-12876
> URL: https://issues.apache.org/jira/browse/HDFS-12876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12876-HDFS-7240.000.patch, 
> HDFS-12876-HDFS-7240.000.patch
>
>
> Since we will be using {{NodeType}} in Service Discovery API - HDFS-12868, 
> it's better to have the enum in Ozone.proto than OzoneConsts. We need 
> {{NodeType}} in protobuf messages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273913#comment-16273913
 ] 

genericqa commented on HDFS-11576:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestParallelUnixDomainRead |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11576 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900119/HDFS-11576.014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 427f8f7a73d0 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273887#comment-16273887
 ] 

Hudson commented on HDFS-12638:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13302 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13302/])
HDFS-12638. Delete copy-on-truncate block along with the original block, (shv: 
rev 60fd0d7fd73198fd610e59d1a4cd007c5fcc7205)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java


> Delete copy-on-truncate block along with the original block, when deleting a 
> file being truncated
> -
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Fix For: 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-12638.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.4
   3.0.1
   2.9.1
   2.10.0
   3.1.0
   2.7.5

Just committed this into the following branches:
{code}
   3c57def..7998077  branch-2 -> branch-2
   7252e18..85eb32b  branch-2.7 -> branch-2.7
   eacccf1..19c18f7  branch-2.8 -> branch-2.8
   5a8a1e6..0f5ec01  branch-2.9 -> branch-2.9
   58d849b..def87db  branch-3.0 -> branch-3.0
   a63d19d..60fd0d7  trunk -> trunk
{code}
Thank you everybody for contributing.

> Delete copy-on-truncate block along with the original block, when deleting a 
> file being truncated
> -
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Fix For: 2.7.5, 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4
>
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12591:
--
Attachment: HDFS-12591-HDFS-9806.004.patch

Posting a new patch (v4) that rebases v3 on the latest feature branch and 
removes the shared reference to the leveldb between {{LevelDBReader}} and 
{{LevelDBWriter}}. The reader and writer are not intended to be used 
simultaneously, as provided storage only supports read-only access for now 
(hence, {{testSimultaneousWriteAndRead}} was removed). [~ehiggs], can you take 
a look at this?

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch, 
> HDFS-12591-HDFS-9806.002.patch, HDFS-12591-HDFS-9806.003.patch, 
> HDFS-12591-HDFS-9806.004.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.
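
For orientation, a minimal sketch of what reading such a LevelDB-backed map can 
look like with the leveldbjni API that Hadoop already ships; the key layout (an 
8-byte block ID) and the opaque value encoding are assumptions for 
illustration, not the format the patch actually defines.

{code:java}
import java.io.File;
import java.nio.ByteBuffer;
import java.util.Map;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBIterator;
import org.iq80.leveldb.Options;

/** Sketch only: scan a LevelDB alias map, assuming 8-byte block-ID keys. */
public class LevelDbAliasMapScan {
  public static void main(String[] args) throws Exception {
    Options options = new Options().createIfMissing(false);
    try (DB db = JniDBFactory.factory.open(new File(args[0]), options);
         DBIterator it = db.iterator()) {
      it.seekToFirst();
      while (it.hasNext()) {
        Map.Entry<byte[], byte[]> entry = it.next();
        long blockId = ByteBuffer.wrap(entry.getKey()).getLong();
        // The value would hold a serialized FileRegion (path, offset, length, ...);
        // the real encoding is defined by the patch, so only its size is shown here.
        System.out.println("block " + blockId + " -> "
            + entry.getValue().length + " bytes");
      }
    }
  }
}
{code}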



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Status: Patch Available  (was: Open)

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Status: Open  (was: Patch Available)

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Attachment: HDFS-12713-HDFS-9806.003.patch

Posting patch v3, which builds on v2, rebases it on the current branch, and 
fixes {{TextFileRegionAliasMap}} and {{InMemoryLevelDBAliasMapClient}} to 
ensure that the block pool id parameter in {{getReader}} and {{getWriter}} is 
handled correctly.

[~ehiggs], one issue that came up when running 
{{TestNameNodeProvidedImplementation.testInMemoryAliasMap}} with v2 is that 
when the Datanodes try to load blocks (in 
{{ProvidedBlockPoolSlice.fetchVolumeMap}}) using 
{{InMemoryLevelDBAliasMapClient}}, the request to the Namenode might fail. As a 
result, the datanodes do not start up. Retrying helps. As a temporary fix, I 
added simple retry logic in {{ProvidedBlockPoolSlice.fetchVolumeMap}} in v3. 
We need a more permanent fix for this issue; I think that should be done in a 
separate JIRA.
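
Since the retry is described only at a high level, here is a hedged, generic 
sketch of bounded retry logic of that kind (the helper class and its parameters 
are hypothetical, not the code in the v3 patch):

{code:java}
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

/** Sketch only: retry a call a bounded number of times with a fixed wait. */
public final class SimpleRetry {
  private SimpleRetry() {}

  public static <T> T withRetries(Callable<T> call, int maxAttempts, long waitMs)
      throws IOException {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.call();                    // e.g. ask the alias map for the volume map
      } catch (Exception e) {
        last = (e instanceof IOException) ? (IOException) e : new IOException(e);
        try {
          TimeUnit.MILLISECONDS.sleep(waitMs); // back off before the next attempt
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw last;
        }
      }
    }
    throw last;
  }
}
{code}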

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Status: Open  (was: Patch Available)

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Status: Patch Available  (was: Open)

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated

2017-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273824#comment-16273824
 ] 

Konstantin Shvachko edited comment on HDFS-12638 at 12/1/17 2:15 AM:
-

Patch 004 only adds braces to an 'if' statement per the checkstyle warning. 
Won't bother waiting for another Jenkins run.
Also verified that all failed tests pass locally, except 
{{TestDataNodeVolumeFailureReporting}}, which is tracked in a series of jiras.


was (Author: shv):
Patch 004 only adds braces to an 'if' statement per the checkstyle warning. 
Won't bother waiting for another Jenkins run.

> Delete copy-on-truncate block along with the original block, when deleting a 
> file being truncated
> -
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) Delete copy-on-truncate block along with the original block, when deleting a file being truncated

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Summary: Delete copy-on-truncate block along with the original block, when 
deleting a file being truncated  (was: NameNode exits due to ReplicationMonitor 
thread received Runtime exception in ReplicationWork#chooseTargets)

> Delete copy-on-truncate block along with the original block, when deleting a 
> file being truncated
> -
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Attachment: HDFS-12638.004.patch

Patch 004 only adds braces to an 'if' statement per the checkstyle warning. 
Won't bother waiting for another Jenkins run.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, HDFS-12638.004.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Assignee: Konstantin Shvachko
  Status: Open  (was: Patch Available)

Thanks for the review, [~zhz]. I totally agree that truncateBlock should be a 
BlockInfo rather than a Block, as it is currently. I intend to file a jira to 
change that.
Will update the title.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether the BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12845) JournalNode Test failures

2017-11-30 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273804#comment-16273804
 ] 

Ajay Kumar commented on HDFS-12845:
---

LGTM

> JournalNode Test failures
> -
>
> Key: HDFS-12845
> URL: https://issues.apache.org/jira/browse/HDFS-12845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node
>Affects Versions: 3.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12845-branch-3.0.00.patch
>
>
> testJournalNodeSyncerNotStartWhenSyncEnabled(org.apache.hadoop.hdfs.qjournal.server.TestJournalNode)
>   Time elapsed: 0.258 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.testJournalNodeSyncerNotStartWhenSyncEnabled(TestJournalNode.java:448)
> testJournalNodeSyncerNotStartWhenSyncEnabledIncorrectURI(org.apache.hadoop.hdfs.qjournal.server.TestJournalNode)
>   Time elapsed: 0.224 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.testJournalNodeSyncerNotStartWhenSyncEnabledIncorrectURI(TestJournalNode.java:427)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-11-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273744#comment-16273744
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Discussion summary:

[~rakeshr], [~anu] and I had a discussion on the SPS.

Anu proposed a couple of options for starting SPS as a separate service in HDFS.
   1. Run SPS as a tool, like the Balancer.
      Discussion conclusion: We already have this option. For Java applications 
it is difficult to control the movement via a tool; different Java applications 
would each need to run their own tool instance, and there would be no 
coordination on load etc. Conclusion: we will not adopt this approach.
   2. Run SPS on the client side itself.
      Discussion conclusion: This has a similar lack of coordination. It is 
difficult to maintain the requests from users without persisting them. When the 
process restarts, every such application would need to scan the complete 
namespace to find the SPS xattrs, and it is also difficult to distinguish the 
xattrs of different applications. Conclusion: we will not adopt this approach.
   3. Run SPS as an independent service.
      Discussion conclusion: This means starting SPS in a different JVM instead 
of as a thread inside the Namenode's JVM. This option seems reasonable, at the 
cost of maintaining one extra process.

  The plan is to start the SPS daemon as an independent service. It will have 
its own RPC server. SPS sends the move requests directly to the DNs instead of 
via NN-DN heartbeats.
  The NN will expose the required information to SPS, and SPS will pull that 
information and analyze it.
  We will make sure to keep SPS stateless. SPS state should be rebuildable by 
querying the inodes' xattrs from the Namenode on restart. This keeps the SPS HA 
implementation simple.

Advantages: 1. This avoids the Namenode burden due to SPS work.
            2. This could be a first step toward bringing other similar 
services into this process in the future.
Disadvantage: one extra process in the cluster. I think this is a reasonable 
compromise.

If there are any concerns about these changes, please comment early so that we 
can incorporate them; otherwise it will be painful to rework after the work is 
finished. We will update the design doc soon with the new changes to SPS.

Thanks to Anu for his help on reviews and for offering to help make SPS work as 
a separate process. Appreciate your contributions. Thank you.

[~rakeshr] [~anu] please add anything I missed from the discussion.
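
To make the stateless-rebuild point concrete, a hedged sketch follows; 
{{SpsNamenodeProtocol}} and its method are hypothetical stand-ins for whatever 
RPC the external SPS will actually use, not existing HDFS APIs.

{code:java}
import java.io.IOException;
import java.util.List;

/** Sketch only: a stateless SPS rebuilds its work queue from the NN on restart. */
public class SpsStateRebuild {

  /** Hypothetical view of what the NN could expose to an external SPS. */
  interface SpsNamenodeProtocol {
    /** Inode ids that currently carry the "satisfy storage policy" xattr. */
    List<Long> listPendingSatisfierInodes() throws IOException;
  }

  private final SpsNamenodeProtocol nn;

  SpsStateRebuild(SpsNamenodeProtocol nn) {
    this.nn = nn;
  }

  /** On (re)start or failover, pull the pending work back instead of persisting it. */
  List<Long> rebuildQueue() throws IOException {
    // The only durable state is the xattr in the NN namespace, so an SPS
    // restart simply re-reads it; no local persistence is required, which
    // keeps HA for the SPS service simple.
    return nn.listPendingSatisfierInodes();
  }
}
{code}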




> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. These 
> policies can be set on a directory/file to specify the user's preference for where 
> the physical blocks should be stored. When the user sets the storage policy before 
> writing data, the blocks can take advantage of the storage policy preferences and 
> the physical blocks are stored accordingly. 
> If the user sets the storage policy after writing and completing the file, then 
> the blocks will already have been written with the default storage policy (nothing 
> but DISK). The user then has to run the ‘Mover tool’ explicitly, specifying all such 
> file names as a list. In some distributed system scenarios (e.g. HBase) it 
> would be difficult to collect all the files and run the tool, as different 
> nodes can write files separately and files can have different paths.
> Another scenario: when the user renames a file from a directory with one effective 
> storage policy (inherited from the parent directory) to a directory with another 
> effective storage policy, the inherited storage policy is not copied from the 
> source, so the destination file/dir takes effect from its parent's storage 
> policy. This rename operation is just a metadata change in the Namenode. The 
> physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed nodes 
> (e.g. region servers) and running the Mover tool could be difficult for admins. 
> Here the proposal is to provide an API in the Namenode itself to trigger 
> storage policy satisfaction. A Daemon thread inside the Namenode should track 
> such calls and 
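As a small illustration of the "policy set after write" scenario described above, the sketch below (paths are made up) only changes NameNode metadata; the existing replicas stay on their current storage until the Mover, or the proposed satisfier, moves them.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetPolicyAfterWrite {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // The file already exists, so its blocks were placed with the default (DISK) policy.
    Path file = new Path("/data/region-0001");
    // Metadata-only change: no block is moved by this call.
    fs.setStoragePolicy(file, "COLD");
  }
}
{code}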

[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273742#comment-16273742
 ] 

genericqa commented on HDFS-12396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
57s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.8 has 
1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.8 
has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} root: The patch generated 2 new + 213 unchanged 
- 2 fixed = 215 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:5 |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.server.datanode.TestBatchIbr |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| Timed out junit tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12396 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273739#comment-16273739
 ] 

genericqa commented on HDFS-12051:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 634 unchanged - 17 fixed = 638 total (was 651) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900092/HDFS-12051.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fb27d2c05243 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a409425 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22239/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22239/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22239/testReport/ |
| Max. process+thread count | 3152 (vs. ulimit of 5000) |

[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273738#comment-16273738
 ] 

genericqa commented on HDFS-12396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 23s{color} 
| {color:red} root generated 1 new + 962 unchanged - 1 fixed = 963 total (was 
963) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 12s{color} | {color:orange} root: The patch generated 2 new + 212 unchanged 
- 2 fixed = 214 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:12 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade |
|   | org.apache.hadoop.hdfs.TestApplyingStoragePolicy |
|   | org.apache.hadoop.hdfs.TestDFSAddressConfig |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.TestDFSRollback |
|   | org.apache.hadoop.hdfs.TestDFSClientExcludedNodes |
|   | org.apache.hadoop.hdfs.TestClientReportBadBlock |
|   | org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication |
|   | org.apache.hadoop.hdfs.TestAbandonBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | HDFS-12396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900099/HDFS-12396-branch-2.8.patch
 |
| Optional Tests |  

[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-11-30 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Attachment: HDFS-11576.014.patch

Patch 014 to fix checkstyle

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, 
> HDFS-11576.012.patch, HDFS-11576.013.patch, HDFS-11576.014.patch, 
> HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization with the NN after succeeding with the first 
> recovery, which fails because X < X+1
> ... 
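A minimal, hypothetical sketch of the NN-side check that keeps rejecting the commit in this scenario; the names are simplified and this is not the actual commitBlockSynchronization code.
{code:java}
// Hypothetical simplification of the rejection in step 6: the DN reports the
// result of recovery X, but the NN has already issued recovery X+1.
final class RecoveryCommitCheck {
  static void commitBlockSynchronization(long reportedRecoveryId,
                                         long latestIssuedRecoveryId)
      throws java.io.IOException {
    if (reportedRecoveryId < latestIssuedRecoveryId) {
      // As long as each recovery attempt outlives the heartbeat interval, a newer
      // id is always outstanding, so the commit never succeeds.
      throw new java.io.IOException("Recovery id " + reportedRecoveryId
          + " is stale; current recovery id is " + latestIssuedRecoveryId);
    }
    // ...otherwise complete the recovery...
  }
}
{code}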






[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273717#comment-16273717
 ] 

genericqa commented on HDFS-12000:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}149m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}212m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.ozone.tools.TestCorona |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |

[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Attachment: (was: HDFS-12713-HDFS-9806.003.patch)

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch
>
>







[jira] [Updated] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12713:
--
Attachment: HDFS-12713-HDFS-9806.003.patch

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch, 
> HDFS-12713-HDFS-9806.002.patch, HDFS-12713-HDFS-9806.003.patch
>
>







[jira] [Assigned] (HDFS-12769) TestReadStripedFileWithDecodingCorruptData and TestReadStripedFileWithDecodingDeletedData timeout in trunk

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDFS-12769:
-

Assignee: (was: Ajay Kumar)

> TestReadStripedFileWithDecodingCorruptData and 
> TestReadStripedFileWithDecodingDeletedData timeout in trunk
> --
>
> Key: HDFS-12769
> URL: https://issues.apache.org/jira/browse/HDFS-12769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>
> Recently, TestReadStripedFileWithDecodingCorruptData and 
> TestReadStripedFileWithDecodingDeletedData fail frequently.
> For example, in HDFS-12725. 






[jira] [Assigned] (HDFS-12769) TestReadStripedFileWithDecodingCorruptData and TestReadStripedFileWithDecodingDeletedData timeout in trunk

2017-11-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDFS-12769:
-

Assignee: Ajay Kumar

> TestReadStripedFileWithDecodingCorruptData and 
> TestReadStripedFileWithDecodingDeletedData timeout in trunk
> --
>
> Key: HDFS-12769
> URL: https://issues.apache.org/jira/browse/HDFS-12769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Kumar
>
> Recently, TestReadStripedFileWithDecodingCorruptData and 
> TestReadStripedFileWithDecodingDeletedData fail frequently.
> For example, in HDFS-12725. 






[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273676#comment-16273676
 ] 

Hudson commented on HDFS-12877:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13300/])
HDFS-12877. Add open(PathHandle) with default buffersize (cdouglas: rev 
0780fdb1eb19744fbbca7fb05f8fe4bf4d28)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java


> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch, 
> HDFS-12877.02.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.
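A hedged usage sketch of the new convenience overload (it assumes a FileSystem that supports PathHandle, e.g. HDFS on 3.1.0+; the path is illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathHandle;

public class OpenByHandle {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus stat = fs.getFileStatus(new Path("/tmp/example.txt"));
    PathHandle handle = fs.getPathHandle(stat);      // handle API added by HDFS-7878
    // New overload: buffer size comes from io.file.buffer.size in the config.
    try (FSDataInputStream in = fs.open(handle)) {
      System.out.println(in.read());
    }
  }
}
{code}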






[jira] [Updated] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12877:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, [~vagarychen].

I committed this.

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch, 
> HDFS-12877.02.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.






[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273631#comment-16273631
 ] 

genericqa commented on HDFS-12638:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 64 unchanged - 0 fixed = 66 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}142m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12638 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900072/HDFS-12638.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ab241a5c43d4 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273621#comment-16273621
 ] 

Zhe Zhang commented on HDFS-12638:
--

Thanks [~shv]. +1 on the v3 patch (pretty clear fix to remove orphan 
copy-on-truncate block). Two minor comments:

# Shouldn't {{BlockUnderConstructionFeature#truncateBlock}} be a {{BlockInfo}}? 
All its value assignments are from a BI. If you agree, I'm OK with addressing 
this in a separate JIRA.
# The title doesn't really reflect the fix (I imagine ReplicationMonitor 
crashing is only one of the symptoms). Could you update it?

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> Active NameNode exits due to an NPE. I can confirm that the BlockCollection passed 
> in when creating ReplicationWork is null, but I do not know why 
> BlockCollection is null. By viewing the history I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check of 
> whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}






[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273618#comment-16273618
 ] 

Chen Liang commented on HDFS-12877:
---

Thanks [~chris.douglas] for the quick update! Makes sense to me. +1 on v02 
patch.

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch, 
> HDFS-12877.02.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.






[jira] [Resolved] (HDFS-9754) Avoid unnecessary getBlockCollection calls in BlockManager

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-9754.
---
Resolution: Fixed

Resolving this, based on the discussion in HDFS-12638.
Filed HDFS-12880 instead.

> Avoid unnecessary getBlockCollection calls in BlockManager
> --
>
> Key: HDFS-9754
> URL: https://issues.apache.org/jira/browse/HDFS-9754
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.8.2, 3.0.0-alpha1, 2.9.0
>
> Attachments: HDFS-9754.000.patch, HDFS-9754.001.patch, 
> HDFS-9754.002.patch
>
>
> Currently BlockManager calls {{Namesystem#getBlockCollection}} in order to:
> 1. check if the block has already been abandoned
> 2. identify the storage policy of the block
> 3. meta save
> For #1 we can use BlockInfo's internal state instead of checking if the 
> corresponding file still exists.






[jira] [Commented] (HDFS-12880) Disallow abandoned blocks in the BlocksMap

2017-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273612#comment-16273612
 ] 

Konstantin Shvachko commented on HDFS-12880:


File delete is implemented as a two-phase action, with each phase protected by 
a separate lock segment. In the first phase the NN collects the blocks for deletion, 
invalidates them, and deletes the file INode. Then it releases the lock. Later 
it acquires the lock again and in the second phase removes the collected blocks 
from the BlocksMap.
For a brief moment between the two phases, the BlocksMap contains abandoned blocks 
that do not belong to any file.
We can fix it by redistributing actions between the two phases:
# Collect the blocks and INodes to be deleted. Also remove the target INode from 
its parent via {{removeChild()}}.
# Delete the collected blocks from the BlocksMap and invalidate them, then delete the 
collected INodes from the INodeMap.

This should prevent tools like Fsck and ReplicationMonitor from accessing 
abandoned blocks.
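A hedged pseudocode sketch of the re-ordered phases; the types and method names are illustrative, not the actual FSNamesystem/BlockManager signatures.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Hedged sketch with hypothetical types; the real code lives in FSNamesystem and
// BlockManager and has different signatures.
final class TwoPhaseDeleteSketch {
  interface Block {}
  interface Inode {
    Inode parent();
    void removeChild(Inode child);
    void collectBlocksAndInodes(List<Block> bs, List<Inode> is);
  }
  interface BlocksMap { void removeBlock(Block b); void addToInvalidates(Block b); }
  interface InodeMap { void remove(Inode i); }

  static void delete(Inode target, BlocksMap blocksMap, InodeMap inodeMap) {
    List<Block> blocks = new ArrayList<>();
    List<Inode> inodes = new ArrayList<>();
    // Phase 1 (under the namesystem lock): detach from the namespace and collect.
    target.parent().removeChild(target);
    target.collectBlocksAndInodes(blocks, inodes);
    // ...lock released and re-acquired here...
    // Phase 2: remove from the BlocksMap first, then invalidate, then drop the
    // INodes, so there is no window in which the BlocksMap holds abandoned blocks.
    for (Block b : blocks) { blocksMap.removeBlock(b); blocksMap.addToInvalidates(b); }
    for (Inode i : inodes) { inodeMap.remove(i); }
  }
}
{code}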

> Disallow abandoned blocks in the BlocksMap
> --
>
> Key: HDFS-12880
> URL: https://issues.apache.org/jira/browse/HDFS-12880
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.4
>Reporter: Konstantin Shvachko
>
> BlocksMap used to contain only valid blocks, that is, blocks belonging to a file. 
> This issue is intended to restore that invariant. This was discussed in detail 
> while fixing HDFS-12638.






[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Rae Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273470#comment-16273470
 ] 

Rae Marks edited comment on HDFS-12873 at 11/30/17 10:53 PM:
-

Oops, not very descriptive of me. I meant if you try to get the file status of 
a file using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, getFileStatus on {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's info ({{x}}) - it just 
throws away {{I/don't/exist}}.


was (Author: raemarks):
Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, getFileStatus on {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's info ({{x}}) - it just 
throws away {{I/don't/exist}}.

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Rae Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.






[jira] [Commented] (HDFS-12879) Ozone : add scm init command to document.

2017-11-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273609#comment-16273609
 ] 

Chen Liang commented on HDFS-12879:
---

There is another typo in {{OzoneCommandShell.md}}, which I think can just be 
part of this fix: there is {{-listtBucket}}, which should be {{-listBucket}} 
instead.

> Ozone : add scm init command to document.
> -
>
> Key: HDFS-12879
> URL: https://issues.apache.org/jira/browse/HDFS-12879
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: newbie
>
> When an Ozone cluster is initialized, before starting SCM through {{hdfs 
> --daemon start scm}}, the command {{hdfs scm -init}} needs to be called 
> first. But it seems this command is not documented. We should add this 
> note to the documentation.






[jira] [Created] (HDFS-12880) Disallow abandoned blocks in the BlocksMap

2017-11-30 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-12880:
--

 Summary: Disallow abandoned blocks in the BlocksMap
 Key: HDFS-12880
 URL: https://issues.apache.org/jira/browse/HDFS-12880
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.4
Reporter: Konstantin Shvachko


BlocksMap used to contain only valid blocks, that is, blocks belonging to a file. 
This issue is intended to restore that invariant. This was discussed in detail 
while fixing HDFS-12638.






[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273589#comment-16273589
 ] 

Chris Douglas commented on HDFS-12877:
--

The checkstyle errors introduced by the patch are consistent with the 
surrounding context. I'd rather keep the history of changes visible (e.g., for 
git blame) than conform to the checkstyle rule, here. [~vagarychen]?

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch, 
> HDFS-12877.02.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.






[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Attachment: HDFS-12396-branch-2.8.patch
HDFS-12396-branch-2.patch

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396-branch-2.8.patch, HDFS-12396-branch-2.patch, 
> HDFS-12396.001.patch, HDFS-12396.002.patch, HDFS-12396.003.patch, 
> HDFS-12396.004.patch, HDFS-12396.005.patch, HDFS-12396.006.patch
>
>







[jira] [Updated] (HDFS-12879) Ozone : add scm init command to document.

2017-11-30 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12879:
--
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-7240

> Ozone : add scm init command to document.
> -
>
> Key: HDFS-12879
> URL: https://issues.apache.org/jira/browse/HDFS-12879
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: newbie
>
> When an Ozone cluster is initialized, before starting SCM through {{hdfs 
> --daemon start scm}}, the command {{hdfs scm -init}} needs to be called 
> first. But it seems this command is not documented. We should add this 
> note to the documentation.






[jira] [Updated] (HDFS-12879) Ozone : add scm init command to document.

2017-11-30 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12879:
--
Labels: newbie  (was: )

> Ozone : add scm init command to document.
> -
>
> Key: HDFS-12879
> URL: https://issues.apache.org/jira/browse/HDFS-12879
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: newbie
>
> When an Ozone cluster is initialized, before starting SCM through {{hdfs 
> --daemon start scm}}, the command {{hdfs scm -init}} needs to be called 
> first. But it seems this command is not documented. We should add this 
> note to the documentation.






[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Status: Patch Available  (was: Open)

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch, 
> HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, 
> HDFS-12396.006.patch
>
>







[jira] [Updated] (HDFS-12355) Webhdfs needs to support encryption zones.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12355:
--
Target Version/s: 3.1.0, 2.8.4  (was: 3.1.0)

> Webhdfs needs to support encryption zones.
> --
>
> Key: HDFS-12355
> URL: https://issues.apache.org/jira/browse/HDFS-12355
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> Will create sub-tasks.
> 1. Add fsserverdefaults to {{NamenodeWebhdfsMethods}}.
> 2. Return File encryption info in {{GETFILESTATUS}} call from 
> {{NamenodeWebhdfsMethods}}
> 3. Adding {{CryptoInputStream}} and {{CryptoOutputStream}} to InputStream and 
> OutputStream.
> 4. {{WebhdfsFilesystem}} needs to acquire kms delegation token from kms 
> servers.






[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Attachment: HDFS-12396.006.patch

Attaching trunk patch.
Will attach other patches soon.

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch, 
> HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch, 
> HDFS-12396.006.patch
>
>







[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Status: Open  (was: Patch Available)

Thanks [~daryn] for the review.
Will attach a new patch addressing the minor conflicts.

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch, 
> HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch
>
>







[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-11-30 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Attachment: HDFS-12051.03.patch

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: 

[jira] [Updated] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-11-30 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HDFS-12051:
--
Status: Patch Available  (was: In Progress)

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch, 
> HDFS-12051.03.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these arrays are referenced by 
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
>  and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>  Ovhd Num objs  Num unique objs   Class name
> 3,220,272K (6.5%)   104749528  25760871 byte[]
> 
>   1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 
> of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 
> 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 
> of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 
> 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 
> 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name 
> <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode 
> <--  {j.u.ArrayList} <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- 
> org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs 
> <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 
> elements) ... <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
>   409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of 
> byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- 
> org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java 
> Static: 

[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-11-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273514#comment-16273514
 ] 

Daryn Sharp commented on HDFS-12396:


+1.  Not super-excited about the APIs, but this just makes webhdfs behave as 
well (or as badly) as hdfs.  Everything looks suitably annotated as private, so 
we can improve it in the future.

The patch needs to be refreshed to fix some conflicts.

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch, 
> HDFS-12396.003.patch, HDFS-12396.004.patch, HDFS-12396.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273470#comment-16273470
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 9:38 PM:
--

Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, getFileStatus on {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's info ({{x}}) - it just 
throws away {{I/don't/exist}}.


was (Author: raemarks):
Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's inode number ({{x}}) - it 
just throws away {{I/don't/exist}}.

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273470#comment-16273470
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 9:29 PM:
--

Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's inode number ({{x}}) - it 
just throws away {{I/don't/exist}}.


was (Author: raemarks):
Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's inode number (x) - it just 
throws away {{I/don't/exist}}.

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273470#comment-16273470
 ] 

Raeanne J Marks commented on HDFS-12873:


Oops, not very descriptive. I meant if you try to get the file status of a file 
using a path containing {{..}} after the 4th index, but I was recalling 
something different and was incorrect. Please disregard that comment. 

What I mentioned about ignoring anything after the {{..}} in the 4th index is 
true though. For example, {{'/.reserved/.inodes//../I/don't/exist}} would return y's parent's inode number (x) - it just 
throws away {{I/don't/exist}}.

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273424#comment-16273424
 ] 

genericqa commented on HDFS-11576:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 1 new + 375 unchanged 
- 0 fixed = 376 total (was 375) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11576 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900061/HDFS-11576.013.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d57ffd4d300e 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273408#comment-16273408
 ] 

genericqa commented on HDFS-12877:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  9s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 262 unchanged - 0 fixed = 264 total (was 262) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
0m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
29s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900070/HDFS-12877.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 892f15e1d9d3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5cfaee2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22236/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22236/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22236/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273406#comment-16273406
 ] 

Raeanne J Marks commented on HDFS-12873:


I've been poking this code a lot lately and it seems to only support the {{..}} 
if it's directly after the inode number, so yes only at the 4th index. If there 
is {{..}} after the 4th index and the 4th index is not {{..}}, it fails. Also, 
the NameNode appears to silently throw away any path components following 
{{/.reserved/.inodes//..}} instead of failing, which could be 
misleading. 

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273394#comment-16273394
 ] 

Rushabh S Shah commented on HDFS-12873:
---

[~brandonli], [~sureshms]: is the patch from {{HDFS-5104}} just expected to 
handle {{..}} _only_ at the 4th index if we split the path name by {{/}}?

> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273384#comment-16273384
 ] 

Rushabh S Shah commented on HDFS-12873:
---

bq. By it worked as expected, do you mean it compressed the path to 
/user/rushabhs/benchmarks/test2, or threw an exception?
By "worked as expected", I meant that it created the 
{{/user/rushabhs/benchmarks/test2}} directory.
However, the normalization of {{..}} was done on the DFS client side.
Since you are not doing any client-side validation, the namenode handles the raw 
path.
HDFS-5104 added support for handling {{..}} in the path, but only at exactly the 4th 
index: {{/.reserved/.inodes//../}} (split by /).
Before HDFS-5104, if the path contained {{..}} when it reached the namenode, it threw 
{{InvalidPathException}}.
But that patch introduced a bug that lets {{..}} slip through.

{code:title=FSDirectory.java|borderStyle=solid}

byte[][] 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolveDotInodesPath(byte[][]
 pathComponents, FSDirectory fsd) throws FileNotFoundException
...
...
// Handle single ".." for NFS lookup support.
if ((pathComponents.length > 4)
&& Arrays.equals(pathComponents[4], DOT_DOT)) {
  INode parent = inode.getParent();
  if (parent == null || parent.getId() == INodeId.ROOT_INODE_ID) {
// inode is root, or its parent is root.
return new byte[][]{INodeDirectory.ROOT_NAME};
  }
  return parent.getPathComponents();
}
{code}

According to the above code snippet from HDFS-5104, it looks like that JIRA only 
expected {{..}} at the 4th index of the path.
So the fix should be in {{DFSUtilClient#isValidName}}: throw 
{{InvalidPathException}} if {{..}} is present anywhere other than at the 4th index.
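
As an illustration only (this is a self-contained sketch, not the actual 
{{DFSUtilClient#isValidName}} code; the class and method names below are made up), 
such a client-side check could look roughly like this:
{code}
/**
 * Hypothetical sketch only: reject ".." everywhere except directly after the
 * inode number in a /.reserved/.inodes/<inode-id>/.. path, i.e. at component
 * index 4 when the path is split by "/".
 */
public class InodeDotDotCheck {

  static boolean isValid(String path) {
    String[] components = path.split("/");
    // components[0] is "" because an absolute path starts with "/".
    boolean reservedInodePath = components.length > 3
        && ".reserved".equals(components[1])
        && ".inodes".equals(components[2]);
    for (int i = 1; i < components.length; i++) {
      if ("..".equals(components[i]) && !(reservedInodePath && i == 4)) {
        return false; // the real fix would throw InvalidPathException here
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isValid("/.reserved/.inodes/16587/.."));       // true: the NFS lookup case
    System.out.println(isValid("/.reserved/.inodes/16587/z/../foo")); // false: ".." past the 4th index
  }
}
{code}
With a check along these lines on the client side, paths like the one in step 3 of 
this report would be rejected before they ever reach the namenode.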


> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12594) snapshotDiff fails if the report exceeds the RPC response limit

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273383#comment-16273383
 ] 

Hudson commented on HDFS-12594:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13298/])
HDFS-12594. snapshotDiff fails if the report exceeds the RPC response 
(szetszwo: rev b1c7654ee40b372ed777525a42981c7cf55b5c72)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/SnapshotDiffReportGenerator.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> snapshotDiff fails if the report exceeds the RPC response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: 3.1.0
>
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> HDFS-12594.009.patch, HDFS-12594.010.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> In the worst case, with all rename ops in snapshots, each with source and target 
> names equal to MAX_PATH_LEN (8k characters), this would hit the limit at 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and when 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: 

[jira] [Commented] (HDFS-12838) Ozone: Optimize number of allocated block rpc by aggregating multiple block allocation requests

2017-11-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273354#comment-16273354
 ] 

Chen Liang commented on HDFS-12838:
---

Thanks for working on this [~msingh]! I think this is a very good improvement. 
Looks pretty good to me overall. Just one comment: it seems that when 
{{KeyManagerImpl#openKey}} passes in a {{requestedSize}} of 0, it ends up 
making an unnecessary call that does nothing. Maybe we should check for and skip 
this case, which would also make the code clearer.
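
For illustration, a minimal sketch of the guard I have in mind (the class, method, 
and field names here are made up, not the actual {{KeyManagerImpl}} code):
{code}
/** Hypothetical sketch of skipping block allocation when no space is requested. */
public class OpenKeySketch {

  private int allocationRpcCalls = 0;

  /** Stand-in for the block-allocation loop that talks to SCM over RPC. */
  private void allocateBlocks(long requestedSize) {
    allocationRpcCalls++;
  }

  void openKey(long requestedSize) {
    // Skip the RPC round trip entirely when requestedSize is 0,
    // instead of issuing an allocation call that does nothing.
    if (requestedSize > 0) {
      allocateBlocks(requestedSize);
    }
  }

  public static void main(String[] args) {
    OpenKeySketch sketch = new OpenKeySketch();
    sketch.openKey(0);                  // no RPC issued
    sketch.openKey(256L * 1024 * 1024); // one RPC issued
    System.out.println(sketch.allocationRpcCalls); // prints 1
  }
}
{code}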

> Ozone: Optimize number of allocated block rpc by aggregating multiple block 
> allocation requests
> ---
>
> Key: HDFS-12838
> URL: https://issues.apache.org/jira/browse/HDFS-12838
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12838-HDFS-7240.001.patch, 
> HDFS-12838-HDFS-7240.002.patch, HDFS-12838-HDFS-7240.003.patch
>
>
> Currently KeySpaceManager allocates multiple blocks by sending multiple block 
> allocation requests over the RPC. This can be optimized by aggregating multiple 
> block allocation requests into one RPC call.
> {code}
>   while (requestedSize > 0) {
> long allocateSize = Math.min(scmBlockSize, requestedSize);
> AllocatedBlock allocatedBlock =
> scmBlockClient.allocateBlock(allocateSize, type, factor);
> KsmKeyLocationInfo subKeyInfo = new KsmKeyLocationInfo.Builder()
> .setContainerName(allocatedBlock.getPipeline().getContainerName())
> .setBlockID(allocatedBlock.getKey())
> .setShouldCreateContainer(allocatedBlock.getCreateContainer())
> .setIndex(idx++)
> .setLength(allocateSize)
> .setOffset(0)
> .build();
> locations.add(subKeyInfo);
> requestedSize -= allocateSize;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273353#comment-16273353
 ] 

Raeanne J Marks commented on HDFS-12873:


Interestingly, this is not reproducible with WebHDFS if WebHDFS is used to make 
that bad path:

{code}
rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' -X PUT 
'http://172.18.0.2:50070/webhdfs/v1/x/y/z?op=MKDIRS=hdfs'
{"boolean":true}200⏎


 
 
rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y?op=GETFILESTATUS'
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16587,"group":"supergroup","length":0,"modificationTime":1512068214198,"owner":"hdfs","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200⏎
  

rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' -X PUT 
'http://172.18.0.2:50070/webhdfs/v1/.reserved/.inodes/16587/z/../foo?op=MKDIRS=hdfs'
{"boolean":true}200⏎



  
rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y/z?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[]}}200⏎  




rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16589,"group":"supergroup","length":0,"modificationTime":1512068246054,"owner":"hdfs","pathSuffix":"foo","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16588,"group":"supergroup","length":0,"modificationTime":1512068214198,"owner":"hdfs","pathSuffix":"z","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}200⏎
{code}

However, if I run the problematic {{Mkdirs}} with our snakebite-like client 
then try to list with WebHDFS, the problem is once again exposed:
{code}
rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16592,"group":"supergroup","length":0,"modificationTime":1512073040206,"owner":"hdfs","pathSuffix":"z","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}200⏎ 

rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y/z?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16593,"group":"supergroup","length":0,"modificationTime":1512073040207,"owner":"hdfs","pathSuffix":"..","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}200⏎

rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y/z/..?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16592,"group":"supergroup","length":0,"modificationTime":1512073040206,"owner":"hdfs","pathSuffix":"z","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}200⏎   

rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/x/y/z/../foo?op=GETFILESTATUS'
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
 does not exist: /x/y/foo"}}404⏎   

rmarks@rmarks-wkstn ~> curl -sS -L -w '%{http_code}' 
'http://172.18.0.2:50070/webhdfs/v1/.reserved/.inodes/16593?op=LISTSTATUS'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16594,"group":"supergroup","length":0,"modificationTime":1512073040207,"owner":"hdfs","pathSuffix":"foo","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}200⏎   
{code}

This shows there is no {{foo}} under {{y}}, {{..}} is visible under {{z}}, and 
{{foo}} is inaccessible except by using an inode path to access {{/x/y/z/..}}.


> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: 

[jira] [Updated] (HDFS-12594) snapshotDiff fails if the report exceeds the RPC response limit

2017-11-30 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-12594:
---
   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks a lot for your hard work, Shashikant!

> snapshotDiff fails if the report exceeds the RPC response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: 3.1.0
>
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> HDFS-12594.009.patch, HDFS-12594.010.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> In the worst case, with all rename ops in snapshots, each with source and target 
> names equal to MAX_PATH_LEN (8k characters), this would hit the limit at 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and when 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-30 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-12594:
---
Hadoop Flags: Reviewed

+1 the 010 patch looks good.

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, HDFS-12594.008.patch, 
> HDFS-12594.009.patch, HDFS-12594.010.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> In the worst case, with all rename ops in snapshots, each with source and target 
> names equal to MAX_PATH_LEN (8k characters), this would hit the limit at 
> 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and when 
> the diff report exceeds the limit, it fails with the below 
> exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12051) Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory

2017-11-30 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273322#comment-16273322
 ] 

Misha Dmitriev commented on HDFS-12051:
---

Thank you for the review [~yzhangal] Here are the answers to your questions:

{quote}1. The original NameCache works like this: when loading the fsimage, it puts 
names into a transient cache and remembers the count of each name; if the count 
of a name reaches a threshold (configurable, default 10), it promotes the 
name to the permanent cache. After the fsimage is loaded, it cleans up the 
transient cache and freezes the final cache. The problem described here is about 
calculating a snapshot diff, which happens after fsimage loading. Thus any new 
name, even if it appears many times, would not benefit from the NameCache. 
Let's call this solution1; your change is to always allow the cache to be 
updated, and let's call it solution2.{quote}

I would say that this solution may (or does) have problems even during fsimage 
loading. What if fsimage contains a very high number of different names? Then 
NameCache may grow to several million entries, and a java.util.HashMap of this 
size is very costly to operate, because, for one thing, a separate 
HashMap$Entry object is created for each key-value pair. This would become a 
big burden on the GC and a big overhead in terms of CPU cycles.

{quote}2. If we modify solution1 to keep updating the cache instead of freezing 
it, we have a chance of addressing the problem here; however, depending on the 
threshold, the number of entries in the final cache of solution1 can be very 
different, and thus the memory footprint can be very different.{quote}

See above, and also see my initial explanation in this ticket. If we allow the 
cache in solution1 to grow, its memory footprint may grow to be comparable to the 
memory savings that it achieves, while creating a lot of additional GC pressure, 
as explained above.

{quote}3. The cache size configured in solution2 would impact the final 
memory footprint too. If it's configured too small, we might end up with many 
duplicates too. So having a reasonable default configuration would be 
important. It's so internal that we may not easily be able to make good 
recommendations to users on when to adjust it.{quote}

Yes. Ideally, more measurements need to be done and a better algorithm for 
selecting its size should be devised. But let's do it incrementally. Right now, 
this cache consumes very little extra memory yet saves quite a lot of it. This 
is much better than what we had before.

{quote}4. How much memory are we saving when we say "8.5% reduction"?{quote}

In this particular case, here are the two lines showing the overhead of 
duplicate byte[] arrays from the jxray memory reports before and after the change:

Types of duplicate objects:
 Ovhd Num objs  Num unique objs   Class name

Before:
346,198K (12.6%)   12097893  3714559 byte[]
After
100,440K (3.9%)   6208877  3855398 byte[]

So, the overhead went down from ~333MB to ~100MB in this synthetic workload 
example. Note that in the original, production heap dump that I started with, 
the heap size and the overhead of duplicate byte[] arrays are much higher (~3GB):

3,220,272K (6.5%)   104749528  25760871 byte[]

{quote}
6. Solution2 might benefit some cases, but make other cases worse. If we decide 
to proceed, I wonder if we can make both solution1 and solution2 available, and 
make it possible to switch between them when needed.
{quote}

For the use cases that I considered, it was a clear net benefit. I've also 
explained the very real problems with solution 1 in the answer to your question 
(1): it may create a lot of extra memory pressure, especially when the data is 
unskewed (i.e. the size of the resulting cache is comparable to the total 
number of names). And if a limit is put on the size of name cache in solution 
1, it will have the same drawback as solution 2, while still requiring more 
memory.

{quote}7. Suggest adding more comments in the code. For example, for (int 
colsnChainLen = 0; colsnChainLen < 5; colsnChainLen++) {, explain what this does 
and why "5".{quote}

Certainly, will do.
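
To make the idea behind solution2 concrete, here is a minimal, self-contained 
sketch of a bounded, always-updatable interning cache for name byte[] arrays. The 
capacity and the simple replace-on-collision policy are illustrative assumptions, 
not the actual patch:
{code}
import java.util.Arrays;

/** Minimal sketch of a small, fixed-size byte[] interning cache (in the spirit of solution2). */
public class NameInternerSketch {

  private final byte[][] slots;

  public NameInternerSketch(int capacity) {
    this.slots = new byte[capacity][];
  }

  /** Returns the cached instance for an equal name if present; otherwise caches this one. */
  public byte[] intern(byte[] name) {
    int idx = (Arrays.hashCode(name) & 0x7fffffff) % slots.length;
    byte[] cached = slots[idx];
    if (cached != null && Arrays.equals(cached, name)) {
      return cached;       // duplicate array collapses to one shared instance
    }
    slots[idx] = name;     // keep updating the cache, unlike the frozen NameCache
    return name;
  }

  public static void main(String[] args) {
    NameInternerSketch interner = new NameInternerSketch(1 << 16);
    byte[] a = "part-m-00000".getBytes();
    byte[] b = "part-m-00000".getBytes();
    System.out.println(interner.intern(a) == interner.intern(b)); // true: same instance reused
  }
}
{code}
The point is that such a cache consumes a small, fixed amount of extra memory 
regardless of how many distinct names exist, while still collapsing the frequent 
duplicates.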

> Intern INOdeFileAttributes$SnapshotCopy.name byte[] arrays to save memory
> -
>
> Key: HDFS-12051
> URL: https://issues.apache.org/jira/browse/HDFS-12051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages several 
> million HDFS files/directories, the NN needs a lot of memory. Analyzing one heap 
> dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays 
> result in 6.5% memory overhead, and most of these 

[jira] [Comment Edited] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273296#comment-16273296
 ] 

Konstantin Shvachko edited comment on HDFS-12638 at 11/30/17 7:57 PM:
--

Erik and I dug through a lot of code and history. I agree that the 002 patch 
fixes the problem without reverting HDFS-9754. The culprit that caused the NPE 
is this change from HDFS-9754:
{code}
-BlockCollection bc = getBlockCollection(block);
-if (bc == null
-|| (bc.isUnderConstruction() && block.equals(bc.getLastBlock()))) {
+// skip abandoned block or block reopened for append
+if (block.isDeleted() || !block.isCompleteOrCommitted()) {
{code}
Since truncateBlock is not marked as deleted, {{validateReconstructionWork()}} 
does not return inside the if, but rather proceeds. It then hits the NPE because 
the block collection is null, since the corresponding INode was deleted.

I am attaching a new patch, which combines 002 with a slightly modified test case 
from [~yangjiandan]'s 001 patch.
Please review.


was (Author: shv):
Erik and I dug through a lot of code and history. I agree that the 002 patch 
fixes the problem without reverting HDFS-9754. The culprit that caused the NPE 
is this change from HDFS-9754:
{code}
-BlockCollection bc = getBlockCollection(block);
-if (bc == null
-    || (bc.isUnderConstruction() && block.equals(bc.getLastBlock()))) {
+// skip abandoned block or block reopened for append
+if (block.isDeleted() || !block.isCompleteOrCommitted()) {
{code}
Since truncateBlock is not marked as deleted, {{validateReconstructionWork()}} 
does not return inside the if block, but rather proceeds. It then hits the NPE 
because the block collection is null, since the corresponding INode was deleted.

I am attaching a new patch, which combines 002 with a slightly modified test 
case from [~yangjiandan]'s 001 patch.
Please review.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the BlockCollection 
> passed in when creating ReplicationWork is null, but I do not know why 
> BlockCollection is null. By reviewing the history I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Status: Patch Available  (was: Open)

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the BlockCollection 
> passed in when creating ReplicationWork is null, but I do not know why 
> BlockCollection is null. By reviewing the history I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-30 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12638:
---
Attachment: HDFS-12638.003.patch

Erik and I dug through a lot of code and history. I agree that the 002 patch 
fixes the problem without reverting HDFS-9754. The culprit that caused the NPE 
is this change from HDFS-9754:
{code}
-BlockCollection bc = getBlockCollection(block);
-if (bc == null
-    || (bc.isUnderConstruction() && block.equals(bc.getLastBlock()))) {
+// skip abandoned block or block reopened for append
+if (block.isDeleted() || !block.isCompleteOrCommitted()) {
{code}
Since truncateBlock is not marked as deleted, {{validateReconstructionWork()}} 
does not return inside the if block, but rather proceeds. It then hits the NPE 
because the block collection is null, since the corresponding INode was deleted.

I am attaching a new patch, which combines 002 with a slightly modified test 
case from [~yangjiandan]'s 001 patch.
Please review.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Blocker
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> HDFS-12638.003.patch, OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the BlockCollection 
> passed in when creating ReplicationWork is null, but I do not know why 
> BlockCollection is null. By reviewing the history I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> of whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12867) Ozone: TestOzoneConfigurationFields fails consistently

2017-11-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12867:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the contribution and all for the reviews. I've committed 
the patch to the feature branch. 

> Ozone: TestOzoneConfigurationFields fails consistently
> --
>
> Key: HDFS-12867
> URL: https://issues.apache.org/jira/browse/HDFS-12867
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12867-HDFS-7240.001.patch, 
> HDFS-12867-HDFS-7240.002.patch
>
>
> The unit test TestOzoneConfigurationFields fails consistently because of two 
> config entries missing from the ozone-default.xml file. The stack trace:
> {noformat}
> java.lang.AssertionError: class org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.ksm.KSMConfigKeys class 
> org.apache.hadoop.cblock.CBlockConfigKeys has 2 variables missing in 
> ozone-default.xml Entries:   ozone.rest.client.port  ozone.rest.servers 
> expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml(TestConfigurationFieldsBase.java:493)
> {noformat}
> The configs {{ozone.rest.client.port}} and {{ozone.rest.servers}} were 
> introduced in HDFS-12549 but were never documented in ozone-default.xml. This 
> leads to the failure.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12660) Enable async edit logging test cases in TestFailureToReadEdits

2017-11-30 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273253#comment-16273253
 ] 

Ajay Kumar commented on HDFS-12660:
---

Not able to reproduce the failure. I ran it multiple times in a loop.

> Enable async edit logging test cases in TestFailureToReadEdits
> --
>
> Key: HDFS-12660
> URL: https://issues.apache.org/jira/browse/HDFS-12660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Andrew Wang
>
> Per discussion in HDFS-12603, this test is failing occasionally due to 
> mysterious mocking issues. Let's try and fix them in this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-30 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273248#comment-16273248
 ] 

Chris Douglas commented on HDFS-11640:
--

+1 on the approach, though. It looks like resource exhaustion is responsible 
for the Jenkins failures.

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch, HDFS-11640-HDFS-9806.003.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12876) Ozone: moving NodeType from OzoneConsts to Ozone.proto

2017-11-30 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273229#comment-16273229
 ] 

Xiaoyu Yao commented on HDFS-12876:
---

Thanks [~nandakumar131] for reporting the issue and posting the fix. The patch 
looks good to me.
+1, pending Jenkins.

> Ozone: moving NodeType from OzoneConsts to Ozone.proto
> --
>
> Key: HDFS-12876
> URL: https://issues.apache.org/jira/browse/HDFS-12876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12876-HDFS-7240.000.patch
>
>
> Since we will be using {{NodeType}} in Service Discovery API - HDFS-12868, 
> it's better to have the enum in Ozone.proto than OzoneConsts. We need 
> {{NodeType}} in protobuf messages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12877:
-
Attachment: HDFS-12877.02.patch

Thanks for taking a look [~vagarychen]. The failures are related. I followed 
the precedent for {{open(Path)}} and added this overload to the ignore list of 
both tests.

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch, 
> HDFS-12877.02.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.
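
As an illustration of the proposal, a sketch of what the convenience overload 
could look like (shown as a standalone helper rather than the {{FileSystem}} 
method the patch would actually add; {{io.file.buffer.size}} with its 4096 
fallback is the standard config key, but the actual wiring may differ):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.PathHandle;

// Sketch only: open by PathHandle with the buffer size taken from
// io.file.buffer.size, mirroring what open(Path) already does.
public final class OpenByHandle {
  public static FSDataInputStream open(FileSystem fs, PathHandle handle)
      throws IOException {
    // 4096 is the usual fallback when io.file.buffer.size is unset.
    int bufferSize = fs.getConf().getInt("io.file.buffer.size", 4096);
    return fs.open(handle, bufferSize);
  }
}
{code}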



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273190#comment-16273190
 ] 

Chen Liang commented on HDFS-12877:
---

The changes look good to me, but the failed tests seem related. Could you 
please take a look?

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-11-30 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273065#comment-16273065
 ] 

Lukas Majercak edited comment on HDFS-11576 at 11/30/17 7:05 PM:
-

Added patch013 to fix find-bugs


was (Author: lukmajercak):
Fix find-bugs

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, 
> HDFS-11576.012.patch, HDFS-11576.013.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. DN calls commitBlockSynchronization on the NN after succeeding with the 
> first recovery, which fails because X < X+1
> ... 
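
To illustrate why step 6 fails: the NameNode-side acceptance check behaves 
roughly like the simplified sketch below (not the actual FSNamesystem code), so 
a result reported with the stale recovery ID X is rejected once the NN has 
already issued X+1.

{code:java}
import java.io.IOException;

// Simplified illustration: the NN only accepts a commitBlockSynchronization
// whose recovery ID matches the latest one it issued for that block.
public final class RecoveryIdCheck {
  static void checkRecoveryId(long reportedId, long currentId) throws IOException {
    if (reportedId != currentId) {  // X vs X+1 in the scenario above
      throw new IOException("The recovery id " + reportedId
          + " does not match current recovery id " + currentId);
    }
    // ... otherwise the synchronized block would be committed ...
  }
}
{code}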



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12665:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch, 
> HDFS-12665-HDFS-9806.006.patch, HDFS-12665-HDFS-9806.007.patch, 
> HDFS-12665-HDFS-9806.008.patch, HDFS-12665-HDFS-9806.009.patch, 
> HDFS-12665-HDFS-9806.010.patch, HDFS-12665-HDFS-9806.011.patch, 
> HDFS-12665-HDFS-9806.012.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
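
As a rough sketch of the storage model only (using the leveldbjni/iq80 LevelDB 
API that Hadoop already ships for the Timeline Service; the string encodings 
and paths below are placeholders, not the actual protobuf schema):

{code:java}
import java.io.File;
import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

// Illustration of the key/value shape: (blockpool, blockid, genstamp) maps to
// (url, offset, length, nonce). The real AliasMap serializes both sides with
// protobuf; plain strings are used here purely for readability.
public class AliasMapSketch {
  public static void main(String[] args) throws Exception {
    Options options = new Options().createIfMissing(true);
    try (DB db = JniDBFactory.factory.open(new File("/tmp/aliasmap-demo"), options)) {
      byte[] key = JniDBFactory.bytes("BP-1234:blk_1073741825:1001");        // blockpool:blockid:genstamp
      byte[] value = JniDBFactory.bytes("s3a://bucket/path,0,134217728,42");  // url,offset,length,nonce
      db.put(key, value);
      System.out.println(JniDBFactory.asString(db.get(key)));
    }
  }
}
{code}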



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-11-30 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273150#comment-16273150
 ] 

Virajith Jalaparti commented on HDFS-12665:
---

Thanks [~ehiggs]. Committed this to HDFS-9806 branch.

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch, 
> HDFS-12665-HDFS-9806.006.patch, HDFS-12665-HDFS-9806.007.patch, 
> HDFS-12665-HDFS-9806.008.patch, HDFS-12665-HDFS-9806.009.patch, 
> HDFS-12665-HDFS-9806.010.patch, HDFS-12665-HDFS-9806.011.patch, 
> HDFS-12665-HDFS-9806.012.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-30 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12685:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, 
> HDFS-12685-HDFS-9806.004.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at java.io.File.<init>(File.java:421)
>

[jira] [Comment Edited] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-30 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273149#comment-16273149
 ] 

Virajith Jalaparti edited comment on HDFS-12685 at 11/30/17 6:45 PM:
-

Thanks for taking a look [~ehiggs]. I committed v4 to the feature branch. The 
failed tests are unrelated.


was (Author: virajith):
Thanks for taking a look [~ehiggs]. I committed v4 to the feature branch.

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, 
> HDFS-12685-HDFS-9806.004.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
>

[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-30 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273149#comment-16273149
 ] 

Virajith Jalaparti commented on HDFS-12685:
---

Thanks for taking a look [~ehiggs]. I committed v4 to the feature branch.

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, 
> HDFS-12685-HDFS-9806.004.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling 
> report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: 
> URI scheme is not "file"
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
> at 

[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273122#comment-16273122
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 6:33 PM:
--

[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{code}
# hadoop version
Hadoop 2.8.0
{code}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{code}
 >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
 >> auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
>> bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}
{code}




was (Author: raemarks):
[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{code}
# hadoop version
Hadoop 2.8.0
{code}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{code}
 >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
 >> auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}
{code}



> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> 

[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273122#comment-16273122
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 6:31 PM:
--

[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{code}
# hadoop version
Hadoop 2.8.0
{code}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{{ >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
} }}




was (Author: raemarks):
[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{{# hadoop version
Hadoop 2.8.0}}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{{ >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
} }}



> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on 

[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273122#comment-16273122
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 6:31 PM:
--

[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{code}
# hadoop version
Hadoop 2.8.0
{code}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{code}
 >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
 >> auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}
{code}




was (Author: raemarks):
[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{code}
# hadoop version
Hadoop 2.8.0
{code}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{{ >> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
} }}



> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an 

[jira] [Commented] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273122#comment-16273122
 ] 

Raeanne J Marks commented on HDFS-12873:


[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:
{{# hadoop version
Hadoop 2.8.0}}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:
{{>> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}}}



> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on '/.reserved/.inodes/' 
> shows '..', while GetListing on '/x/y' does not.
> Mkdirs INotify events were reported with the following paths, in order:
> /x
> /x/y
> /x/y/z
> /x/y/z/..
> /x/y/z/../foo
> I can also chain these dotdot directories and make them as deep as I want. 
> Mkdirs works with the following paths appended to the inode path for 
> directory y: '/z/../../../foo', '/z/../../../../../', 
> '/z/../../../foo/bar/../..' etc, and it constructs all the '..' directories 
> as if they weren't special names.
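
For illustration, one of the two expected behaviors (rejecting the path) 
amounts to a simple scan over the resolved components. The sketch below is 
illustrative only, not the NameNode's actual resolver, and {{<inode-of-y>}} is 
a placeholder for the real inode number:

{code:java}
// Sketch only: reject "." and ".." components after an inode path has been
// resolved, instead of creating them as literal directory names.
public class DotDotCheck {
  static void validateComponents(String resolvedPath) {
    for (String component : resolvedPath.split("/")) {
      if (component.equals("..") || component.equals(".")) {
        throw new IllegalArgumentException(
            "Invalid path component \"" + component + "\" in " + resolvedPath);
      }
    }
  }

  public static void main(String[] args) {
    // e.g. the path from step 3 above; this call would throw
    validateComponents("/.reserved/.inodes/<inode-of-y>/z/../foo");
  }
}
{code}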



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12873) Creating a '..' directory is possible using inode paths

2017-11-30 Thread Raeanne J Marks (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273122#comment-16273122
 ] 

Raeanne J Marks edited comment on HDFS-12873 at 11/30/17 6:29 PM:
--

[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:

{{# hadoop version
Hadoop 2.8.0}}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:

{{>> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}}}




was (Author: raemarks):
[~shahrs87] By it worked as expected, do you mean it compressed the path to 
{{/user/rushabhs/benchmarks/test2}}, or threw an exception? I can confirm I'm 
seeing this in {{2.8.0}}:
{{# hadoop version
Hadoop 2.8.0}}

I am using a snakebite-like hadoop client. It does not perform any client-side 
path manipulation/validation - it relies on the NameNode to properly 
handle/report unsupported path formats. Here is my script:
{{>> client = pydoofus.namenode.v9.Client('172.18.0.2', 8020, 
auth={'effective_user': 'hdfs'})
>> print client.mkdirs('/x/y/z', 0777, create_parent=True)
True
>> file_status = client.get_file_info('/x/y')
>> print file_status
FileStatus {
file_type: DIRECTORY
path: 
length: 0
permission: 
Permission {
mode: 0777
}
owner: hdfs
group: supergroup
modification_time: 1512066397076
access_time: 0
symlink: 
replication: 0
block_size: 0
locations: None
file_id: 16580
num_children: 1
}
bad_path = '/.reserved/.inodes/' + str(file_status.file_id) + '/z/../foo'
>> print client.mkdirs(bad_path, 0777, create_parent=True)
True
>> print client.get_listing('/x/y/z')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'..', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16582L, num_children=1)]
remaining: 0
}
>> print client.get_listing('/x/y/z/..')
DirectoryListing {
entries: [FileStatus(file_type='DIRECTORY', path=u'foo', length=0L, 
permission=Permission(mode=0777), owner=u'hdfs', group=u'supergroup', 
modification_time=1512066397083, access_time=0, symlink='', replication=0, 
block_size=0L, locations=None, file_id=16583L, num_children=0)]
remaining: 0
}}}



> Creating a '..' directory is possible using inode paths
> ---
>
> Key: HDFS-12873
> URL: https://issues.apache.org/jira/browse/HDFS-12873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.8.0
> Environment: Apache NameNode running in a Docker container on a 
> Fedora 25 workstation.
>Reporter: Raeanne J Marks
>
> Start with a fresh deployment of HDFS.
> 1. Mkdirs '/x/y/z'
> 2. use GetFileInfo to get y's inode number
> 3. Mkdirs '/.reserved/.inodes//z/../foo'
> Expectation: The path in step 3 is rejected as invalid (exception thrown) OR 
> foo would be created under y.
> Observation: This created a directory called '..' under z and 'foo' under 
> that '..' directory instead of consolidating the path to '/x/y/foo' or 
> throwing an exception. GetListing on 

[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273115#comment-16273115
 ] 

genericqa commented on HDFS-12877:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.TestFilterFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900049/HDFS-12877.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 377a8dc43493 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 75a3ab8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22232/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22232/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |

[jira] [Commented] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273114#comment-16273114
 ] 

genericqa commented on HDFS-12877:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.TestFilterFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900051/HDFS-12877.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 23b9039b9534 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 75a3ab8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22233/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22233/testReport/ |
| Max. process+thread count | 1436 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |

[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval

2017-11-30 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-11576:
--
Attachment: HDFS-11576.013.patch

Fix find-bugs

> Block recovery will fail indefinitely if recovery time > heartbeat interval
> ---
>
> Key: HDFS-11576
> URL: https://issues.apache.org/jira/browse/HDFS-11576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs, namenode
>Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Critical
> Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, 
> HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.005.patch, 
> HDFS-11576.006.patch, HDFS-11576.007.patch, HDFS-11576.008.patch, 
> HDFS-11576.009.patch, HDFS-11576.010.patch, HDFS-11576.011.patch, 
> HDFS-11576.012.patch, HDFS-11576.013.patch, HDFS-11576.repro.patch
>
>
> Block recovery will fail indefinitely if the time to recover a block is 
> always longer than the heartbeat interval. Scenario:
> 1. DN sends heartbeat 
> 2. NN sends a recovery command to DN, recoveryID=X
> 3. DN starts recovery
> 4. DN sends another heartbeat
> 5. NN sends a recovery command to DN, recoveryID=X+1
> 6. After succeeding with the first recovery, DN calls commitBlockSynchronization 
> on the NN, which fails because X < X+1
> ... 
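
For reference, the failure in step 6 comes from the NameNode rejecting a 
commitBlockSynchronization call whose recovery ID is older than the latest one it 
issued. A simplified sketch of that kind of check (variable names assumed here, not 
the exact Hadoop source):
{code}
// Simplified illustration (names assumed): the NN compares the recovery ID
// reported by the DN against the newest recovery ID it has issued for the block.
// A stale ID (X) never matches once a newer one (X+1) exists, so the DN's
// recovery is rejected every time recovery outlasts the heartbeat interval.
long latestRecoveryId = blockUnderConstruction.getBlockRecoveryId(); // X+1
if (reportedRecoveryId != latestRecoveryId) {                        // X != X+1
  throw new IOException("The recovery id " + reportedRecoveryId
      + " does not match current recovery id " + latestRecoveryId);
}
{code}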



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12872) EC Checksum broken when BlockAccessToken is enabled

2017-11-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273066#comment-16273066
 ] 

Andrew Wang commented on HDFS-12872:


If we can get it in before the other blockers are resolved, I'm fine with 
including this.

> EC Checksum broken when BlockAccessToken is enabled
> ---
>
> Key: HDFS-12872
> URL: https://issues.apache.org/jira/browse/HDFS-12872
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12872.repro.patch
>
>
> It appears {{hdfs ec -checksum}} doesn't work when block access token is 
> enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12876) Ozone: moving NodeType from OzoneConsts to Ozone.proto

2017-11-30 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12876:
---
Status: Patch Available  (was: Open)

> Ozone: moving NodeType from OzoneConsts to Ozone.proto
> --
>
> Key: HDFS-12876
> URL: https://issues.apache.org/jira/browse/HDFS-12876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12876-HDFS-7240.000.patch
>
>
> Since we will be using {{NodeType}} in the Service Discovery API (HDFS-12868), 
> it's better to have the enum in Ozone.proto than in OzoneConsts. We need 
> {{NodeType}} in protobuf messages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-11-30 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273046#comment-16273046
 ] 

Nanda kumar commented on HDFS-12741:


Thanks [~shashikant] for working on this. The patch looks good to me; a few minor 
comments in addition to [~linyiqun]'s:

* The formatting of lines 144 - 149 in {{KeySpaceManager}} is changed, which is 
unnecessary.

* {{KeySpaceManager#parseArguments}} can be refactored to the code below:
{code}
private static StartupOption parseArguments(String[] args) {
  if (args == null || args.length == 0) {
    return StartupOption.REGULAR;
  } else if (args.length == 1) {
    try {
      return StartupOption.valueOf(args[0]);
    } catch (IllegalArgumentException iae) {
      return null;
    }
  }
  return null;
}
{code}

* In {{KeySpaceManager#ksmInit}}, when the StorageState is INITIALIZED but the 
stored clusterId and scmId don't match SCM's, we should not update/overwrite 
the version file. In this case, we can print an error message and exit.

* In {{KSMStorage#setScmId}} and {{KSMStorage#setKsmId}}, before setting the value 
make sure that the storage state is not INITIALIZED (a rough sketch of such a 
guard follows after this list).

* {{KSMStorage.SCM_ID}} - we don't have to define it again; we can use 
{{SCMStorage.SCM_ID}} directly in KSMStorage.
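
A rough sketch of the guard I have in mind for those setters (accessor and property 
names here are assumed for illustration, not taken from the patch):
{code}
// Illustrative only - accessor and property names are assumed.
public void setScmId(String scmId) throws IOException {
  if (getState() == StorageState.INITIALIZED) {
    throw new IOException(
        "KSM storage is already initialized; scmId cannot be changed.");
  }
  setProperty(SCM_ID, scmId);
}
{code}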

> ADD support for KSM --createObjectStore command
> ---
>
> Key: HDFS-12741
> URL: https://issues.apache.org/jira/browse/HDFS-12741
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12741-HDFS-7240.001.patch, 
> HDFS-12741-HDFS-7240.002.patch
>
>
> The KSM --createObjectStore command reads the ozone configuration information, 
> creates the KSM version file, and reads the SCM version file from the specified 
> SCM.
>   
> The SCM version file is stored in the KSM metadata directory, and before 
> communicating with an SCM, KSM verifies that it is talking to the SCM with which 
> the relationship was established via the createObjectStore command.
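
For illustration, the verification step could amount to a check like the following 
(field and accessor names are assumed, not the actual patch code):
{code}
// Illustrative only (names assumed): KSM compares the scmId it persisted during
// --createObjectStore with the SCM it is about to talk to, and refuses to
// proceed on a mismatch.
String expectedScmId = ksmStorage.getScmId();    // persisted by --createObjectStore
String actualScmId = scmVersionInfo.getScmId();  // reported by the SCM
if (!expectedScmId.equals(actualScmId)) {
  throw new IOException("SCM " + actualScmId
      + " does not match the object store created against SCM " + expectedScmId);
}
{code}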



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12872) EC Checksum broken when BlockAccessToken is enabled

2017-11-30 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273029#comment-16273029
 ] 

Xiao Chen commented on HDFS-12872:
--

Ping [~andrew.wang], I'm hoping to include this in 3.0.0 too. Perhaps not a 
blocker but still fundamental.

> EC Checksum broken when BlockAccessToken is enabled
> ---
>
> Key: HDFS-12872
> URL: https://issues.apache.org/jira/browse/HDFS-12872
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12872.repro.patch
>
>
> It appears {{hdfs ec -checksum}} doesn't work when block access token is 
> enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12878) Add CryptoOutputStream to WebHdfsFileSystem append call.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-12878:
-

Assignee: Rushabh S Shah

> Add CryptoOutputStream to WebHdfsFileSystem append call.
> 
>
> Key: HDFS-12878
> URL: https://issues.apache.org/jira/browse/HDFS-12878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12597) Add CryptoOutputStream to WebHdfsFileSystem create call.

2017-11-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12597:
--
Summary: Add CryptoOutputStream to WebHdfsFileSystem create call.  (was: 
Add CryptoOutputStream to WebHdfsFileSystem write calls.)

> Add CryptoOutputStream to WebHdfsFileSystem create call.
> 
>
> Key: HDFS-12597
> URL: https://issues.apache.org/jira/browse/HDFS-12597
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12878) Add CryptoOutputStream to WebHdfsFileSystem append call.

2017-11-30 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12878:
-

 Summary: Add CryptoOutputStream to WebHdfsFileSystem append call.
 Key: HDFS-12878
 URL: https://issues.apache.org/jira/browse/HDFS-12878
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rushabh S Shah






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12875) RBF: Complete logic for -readonly option of dfsrouteradmin add command

2017-11-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272980#comment-16272980
 ] 

Íñigo Goiri commented on HDFS-12875:


[~linyiqun], actually the logic we have internally for this is a mount table 
entry that can only be read, not written to.
Basically, in {{RouterRpcServer#getLocationsForPath}}, we have logic to control 
read-only and locked paths:
{code}
  // We may block some write operations
  if (opCategory.get() == OperationCategory.WRITE) {
    // Check if the path is in a read only mount point
    if (isPathReadOnly(path)) {
      if (this.rpcMonitor != null) {
        this.rpcMonitor.routerFailureReadOnly();
      }
      throw new IOException(path + " is in a read only mount point");
    }
    // Check if the path is locked
    if (isPathLocked(path)) {
      if (this.rpcMonitor != null) {
        this.rpcMonitor.routerFailureLocked();
      }
      throw new IOException(path + " is locked");
    }
  }
{code}

Which uses:
{code}
  /**
   * Check if a path is in a read only mount point.
   *
   * @param path Path to check.
   * @return If the path is in a read only mount point.
   */
  private boolean isPathReadOnly(final String path) {
    if (subclusterResolver instanceof MountTableResolver) {
      try {
        MountTableResolver mountTable = (MountTableResolver)subclusterResolver;
        MountTable entry = mountTable.getMountPoint(path);
        if (entry != null && entry.isReadOnly()) {
          return true;
        }
      } catch (IOException e) {
        LOG.error("Cannot get mount point: {}", e.getMessage());
      }
    }
    return false;
  }

  /**
   * Check if the path is locked.
   *
   * @param path Path to check.
   * @return If the path is locked.
   * @throws IOException If the State Store is not available.
   */
  private boolean isPathLocked(final String path) throws IOException {
    return this.pathLockStore != null && this.pathLockStore.isLocked(path);
  }
{code}

The use case you propose is more like ACLs in the Mount Table management.
I think that is valuable.
What about adding full ACLs to the Mount Table management and other 
dfsrouteradmin options in this JIRA, and I file another one for the read-only 
mount entries?

> RBF: Complete logic for -readonly option of dfsrouteradmin add command
> --
>
> Key: HDFS-12875
> URL: https://issues.apache.org/jira/browse/HDFS-12875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12875.001.patch
>
>
> Currently the -readonly option of the {{dfsrouteradmin -add}} command doesn't 
> make any sense. The desired behavior is that a read-only mount table entry set 
> by the add command cannot be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12868) Ozone: Service Discovery API

2017-11-30 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12868:

Description: 
Currently if a client wants to connect to an Ozone cluster we need multiple 
properties to be configured in the client.

For RPC based connection we need
{{ozone.ksm.address}}
{{ozone.scm.client.address}}
and the ports if something other than default is configured.

For REST based connection
{{ozone.rest.servers}}
and port if something other than default is configured.

With the introduction of Service Discovery API the client should be able to 
discover all the configurations needed for the connection. Service discovery 
calls will be handled by KSM, at the client side, we only need to configure 
{{ozone.ksm.address}}. The client should first connect to KSM and get all the 
required configurations.


  was:
Currently if a client wants to connect to Ozone cluster we need multiple 
properties to be configured in the client.

For RPC based connection we need
{{ozone.ksm.address}}
{{ozone.scm.client.address}}
and the ports if something other than default is configured.

For REST based connection
{{ozone.rest.servers}}
and port if something other than default is configured.

With the introduction of Service Discovery API the client should be able to 
discover all the configurations needed for the connection. Service discovery 
calls will be handled by KSM, at the client side, we only need to configure 
{{ozone.ksm.address}}. The client should first connect to KSM and get all the 
required configurations.



> Ozone: Service Discovery API
> 
>
> Key: HDFS-12868
> URL: https://issues.apache.org/jira/browse/HDFS-12868
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>
> Currently if a client wants to connect to an Ozone cluster we need multiple 
> properties to be configured in the client.
> For RPC based connection we need
> {{ozone.ksm.address}}
> {{ozone.scm.client.address}}
> and the ports if something other than default is configured.
> For REST based connection
> {{ozone.rest.servers}}
> and port if something other than default is configured.
> With the introduction of Service Discovery API the client should be able to 
> discover all the configurations needed for the connection. Service discovery 
> calls will be handled by KSM, at the client side, we only need to configure 
> {{ozone.ksm.address}}. The client should first connect to KSM and get all the 
> required configurations.
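
A hypothetical sketch of the client-side flow this implies (the discovery call and 
its return type below are placeholders; defining the real API is the point of this 
JIRA):
{code}
// Hypothetical sketch - the discovery call and its return type are placeholders,
// not an existing API. Only ozone.ksm.address is configured on the client side;
// everything else is discovered from KSM.
OzoneConfiguration conf = new OzoneConfiguration();           // has ozone.ksm.address only
KsmClient ksm = KsmClient.connect(conf);                      // assumed helper
ServiceDiscoveryResponse services = ksm.discoverServices();   // assumed call
String scmClientAddress = services.getScmClientAddress();     // replaces ozone.scm.client.address
String restServers = services.getRestServers();               // replaces ozone.rest.servers
{code}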



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12877:
-
Attachment: HDFS-12877.01.patch

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.
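
For illustration, a minimal sketch of such an overload, assuming it simply delegates 
to the existing {{open(PathHandle, int)}} from HDFS-7878 and reads 
{{io.file.buffer.size}}; the actual signature is whatever the patch settles on:
{code}
// Sketch only - delegates to the HDFS-7878 overload using the configured
// default buffer size (io.file.buffer.size) instead of a caller-supplied one.
public FSDataInputStream open(PathHandle fd) throws IOException {
  return open(fd, getConf().getInt(
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT));
}
{code}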



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12877:
-
Attachment: (was: HDFS-12877.01.patch)

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12877) Add open(PathHandle) with default buffersize

2017-11-30 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12877:
-
Attachment: HDFS-12877.01.patch

Fix javadoc reference

> Add open(PathHandle) with default buffersize
> 
>
> Key: HDFS-12877
> URL: https://issues.apache.org/jira/browse/HDFS-12877
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Trivial
> Attachments: HDFS-12877.00.patch, HDFS-12877.01.patch
>
>
> HDFS-7878 added an overload for {{FileSystem::open}} that requires the user 
> to provide a buffer size when opening by {{PathHandle}}. Similar to 
> {{open(Path)}}, it'd be convenient to have another overload that takes the 
> default from the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


