[jira] [Updated] (HDFS-13349) Unresolved merge conflict in ViewFs.md

2018-03-26 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-13349:
-
Description: 
A backport to 3.0.1 has an unresolved merge conflict in the ViewFs.md change:



{code}
commit 9264f10bb35dbe30c75c648bf759e8aeb715834a
 Author: Anu Engineer 
 Date: Tue Feb 6 13:43:45 2018 -0800

HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by Xiao 
Chen.

(cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f)

Conflicts:
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
{code}


  was:
A backport to 3.0.1 has an unresolved merge conflict in the ViewFs.md change:

```

commit 9264f10bb35dbe30c75c648bf759e8aeb715834a
Author: Anu Engineer 
Date: Tue Feb 6 13:43:45 2018 -0800

HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by Xiao 
Chen.

(cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f)

Conflicts:
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java

```


> Unresolved merge conflict in ViewFs.md 
> ---
>
> Key: HDFS-13349
> URL: https://issues.apache.org/jira/browse/HDFS-13349
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.1
>Reporter: Gera Shegalov
>Priority: Blocker
>
> A backport to 3.0.1 has an unresolved merge conflict in the ViewFs.md change:
> {code}
> commit 9264f10bb35dbe30c75c648bf759e8aeb715834a
>  Author: Anu Engineer 
>  Date: Tue Feb 6 13:43:45 2018 -0800
> HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by 
> Xiao Chen.
> (cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f)
> Conflicts:
>  hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
>  
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
> {code}






[jira] [Created] (HDFS-13349) Unresolved merge conflict in ViewFs.md

2018-03-26 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-13349:


 Summary: Unresolved merge conflict in ViewFs.md 
 Key: HDFS-13349
 URL: https://issues.apache.org/jira/browse/HDFS-13349
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.1
Reporter: Gera Shegalov


A backport to 3.0.1 has an unresolved merge conflict in the ViewFs.md change:

```

commit 9264f10bb35dbe30c75c648bf759e8aeb715834a
Author: Anu Engineer 
Date: Tue Feb 6 13:43:45 2018 -0800

HDFS-12990. Change default NameNode RPC port back to 8020. Contributed by Xiao 
Chen.

(cherry picked from commit 4304fcd5bdf9fb7aa9181e866eea383f89bf171f)

Conflicts:
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java

```
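
(For context, a hypothetical illustration of the residue: an unresolved conflict
means git's marker lines were committed verbatim, so readers of the shipped
ViewFs.md would see something like the following; the port text is made up to
match the HDFS-12990 change.)

{code}
<<<<<<< HEAD
...ViewFs.md text referring to NameNode port 9820...
=======
...ViewFs.md text referring to NameNode port 8020...
>>>>>>> HDFS-12990. Change default NameNode RPC port back to 8020.
{code}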






[jira] [Commented] (HDFS-6244) Make Trash Interval configurable for each of the namespaces

2015-08-18 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14701422#comment-14701422
 ] 

Gera Shegalov commented on HDFS-6244:
-

The point of this change is to make Trash namespace-aware. I'd rather do it 
explicitly. A few calls to DFSUtil#getNamenodeNameServiceId should be better 
than conf cloning. 
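
A minimal sketch of that approach (the helper class and the per-nameservice key
suffix are illustrative, not the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.hdfs.DFSUtil;

class TrashIntervalResolver {
  // Resolve the trash interval for this NN's nameservice, falling back to
  // the global fs.trash.interval value when no override is present.
  static long resolve(Configuration conf) {
    long globalVal = conf.getLong(
        CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY,
        CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT);
    String nsId = DFSUtil.getNamenodeNameServiceId(conf); // null if not federated
    if (nsId == null) {
      return globalVal;
    }
    // hypothetical per-nameservice key, e.g. fs.trash.interval.ns1
    return conf.getLong(
        CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY + "." + nsId,
        globalVal);
  }
}
{code}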

 Make Trash Interval configurable for each of the namespaces
 ---

 Key: HDFS-6244
 URL: https://issues.apache.org/jira/browse/HDFS-6244
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
  Labels: BB2015-05-TBR
 Attachments: HDFS-6244.v1.patch, HDFS-6244.v2.patch, 
 HDFS-6244.v3.patch, HDFS-6244.v4.patch, HDFS-6244.v5.patch


 Somehow we need to avoid the cluster filling up.
 One solution is to have a different trash policy per namespace. However, if 
 we can simply make the property configurable per namespace, then the same 
 config can be rolled everywhere and we'd be done. This seems simple enough.





[jira] [Commented] (HDFS-6244) Make Trash Interval configurable for each of the namespaces

2015-08-17 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14700029#comment-14700029
 ] 

Gera Shegalov commented on HDFS-6244:
-

Thanks for the patch, [~l201514]. My thinking is basically in line with 
[~mingma]'s. I suggest not cloning the conf and setting the global property in 
the NameNode. Instead, consistently read the namespace-specific property with 
the global property value as the default.

 Make Trash Interval configurable for each of the namespaces
 ---

 Key: HDFS-6244
 URL: https://issues.apache.org/jira/browse/HDFS-6244
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
  Labels: BB2015-05-TBR
 Attachments: HDFS-6244.v1.patch, HDFS-6244.v2.patch, 
 HDFS-6244.v3.patch, HDFS-6244.v4.patch, HDFS-6244.v5.patch


 Somehow we need to avoid the cluster filling up.
 One solution is to have a different trash policy per namespace. However, if 
 we can simply make the property configurable per namespace, then the same 
 config can be rolled everywhere and we'd be done. This seems simple enough.





[jira] [Updated] (HDFS-6244) Make Trash Interval configurable for each of the namespaces

2015-08-17 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6244:

Status: Open  (was: Patch Available)

 Make Trash Interval configurable for each of the namespaces
 ---

 Key: HDFS-6244
 URL: https://issues.apache.org/jira/browse/HDFS-6244
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Assignee: Siqi Li
  Labels: BB2015-05-TBR
 Attachments: HDFS-6244.v1.patch, HDFS-6244.v2.patch, 
 HDFS-6244.v3.patch, HDFS-6244.v4.patch, HDFS-6244.v5.patch


 Somehow we need to avoid the cluster filling up.
 One solution is to have a different trash policy per namespace. However, if 
 we can simply make the property configurable per namespace, then the same 
 config can be rolled everywhere and we'd be done. This seems simple enough.





[jira] [Commented] (HDFS-8182) Implement topology-aware CDN-style caching

2015-04-24 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14511936#comment-14511936
 ] 

Gera Shegalov commented on HDFS-8182:
-

Thanks for comments [~andrew.wang] and [~jlowe] 

bq. We might get some hotspotting, since the first reader on a rack will 
localize everything. Probably still random enough though?
I agree there is a hotspotting risk, hopefully indeed low. But if it turns out 
to be a big deal, we can introduce random node picking within the rack.

Agreed that we need to clarify the flag names and their semantics (re: 
DONTNEED).

The issue of quota/accounting is also something I haven't thought about yet. 
Thanks for bringing it up.

 Implement topology-aware CDN-style caching
 --

 Key: HDFS-8182
 URL: https://issues.apache.org/jira/browse/HDFS-8182
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov

 To scale reads of hot blocks in large clusters, it would be beneficial if we 
 could read a block across the ToR switches only once. Example scenarios are 
 localization of binaries, MR distributed cache files for map-side joins and 
 similar. There are multiple layers where this could be implemented (YARN 
 service or individual apps such as MR) but I believe it is best done in HDFS 
 or even common FileSystem to support as many use cases as possible. 
 The life cycle could look like this e.g. for the YARN localization scenario:
 1. inputStream = fs.open(path, ..., CACHE_IN_RACK)
 2. instead of reading from a remote DN directly, NN tells the client to read 
 via the local DN1 and the DN1 creates a replica of each block.
 When the next localizer on DN2 in the same rack starts it will learn from NN 
 about the replica in DN1 and the client will read from DN1 using the 
 conventional path.
 When the application ends, the AM or NMs can instruct the NN, in a fadvise-
 DONTNEED style, to start telling DNs to discard the extraneous replicas.





[jira] [Commented] (HDFS-8182) Implement topology-aware CDN-style caching

2015-04-21 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1450#comment-1450
 ] 

Gera Shegalov commented on HDFS-8182:
-

Hi Andrew,

I think the said block placement policy works fine for data whose usage we know 
a priori such as binaries in YARN-1492 Shared Cache (few terabytes in our 
case), MR/Spark staging directories, etc. For such cases we/frameworks already 
set a high replication factor. And the solution with rf=#racks is already good 
enough. Except for the replication speed vs YARN scheduling race, which would 
be eliminated with the approach proposed in this JIRA. 

In some cases we have no a priori knowledge. The most prominent ones are 
primary or temporary files used as the build input of a hash join in an ad-hoc 
manner. Having a solution that works transparently, irrespective of the 
specified replication factor, is a win.

Another drawback of a block-placement-based solution (besides currently being 
global, not per file) is that it's not elastic and is oblivious to the data 
temperature. I think this JIRA would cover both families of cases above well.

 Implement topology-aware CDN-style caching
 --

 Key: HDFS-8182
 URL: https://issues.apache.org/jira/browse/HDFS-8182
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov

 To scale reads of hot blocks in large clusters, it would be beneficial if we 
 could read a block across the ToR switches only once. Example scenarios are 
 localization of binaries, MR distributed cache files for map-side joins and 
 similar. There are multiple layers where this could be implemented (YARN 
 service or individual apps such as MR) but I believe it is best done in HDFS 
 or even common FileSystem to support as many use cases as possible. 
 The life cycle could look like this e.g. for the YARN localization scenario:
 1. inputStream = fs.open(path, ..., CACHE_IN_RACK)
 2. instead of reading from a remote DN directly, NN tells the client to read 
 via the local DN1 and the DN1 creates a replica of each block.
 When the next localizer on DN2 in the same rack starts it will learn from NN 
 about the replica in DN1 and the client will read from DN1 using the 
 conventional path.
 When the application ends, the AM or NMs can instruct the NN, in a fadvise-
 DONTNEED style, to start telling DNs to discard the extraneous replicas.





[jira] [Commented] (HDFS-8182) Implement topology-aware CDN-style caching

2015-04-20 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14503601#comment-14503601
 ] 

Gera Shegalov commented on HDFS-8182:
-

[~andrew.wang], [~zhz] thanks for your comments. 

bq. IIUC most apps also use the distributed cache, so there isn't too much code 
duplication that would be reduced by pushing this to HDFS.
This proposal is specifically motivated by the scalability issues with 
DistributedCache localization in a large cluster.

When we get to a per-path block placement policy, it will be more acceptable 
than the current per-block-manager approach. However, it's still not as 
flexible as needed to indicate a temporary demand. Thanks for pointing out 
these JIRAs.

 Implement topology-aware CDN-style caching
 --

 Key: HDFS-8182
 URL: https://issues.apache.org/jira/browse/HDFS-8182
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov

 To scale reads of hot blocks in large clusters, it would be beneficial if we 
 could read a block across the ToR switches only once. Example scenarios are 
 localization of binaries, MR distributed cache files for map-side joins and 
 similar. There are multiple layers where this could be implemented (YARN 
 service or individual apps such as MR) but I believe it is best done in HDFS 
 or even common FileSystem to support as many use cases as possible. 
 The life cycle could look like this e.g. for the YARN localization scenario:
 1. inputStream = fs.open(path, ..., CACHE_IN_RACK)
 2. instead of reading from a remote DN directly, NN tells the client to read 
 via the local DN1 and the DN1 creates a replica of each block.
 When the next localizer on DN2 in the same rack starts it will learn from NN 
 about the replica in DN1 and the client will read from DN1 using the 
 conventional path.
 When the application ends, the AM or NMs can instruct the NN, in a fadvise-
 DONTNEED style, to start telling DNs to discard the extraneous replicas.





[jira] [Created] (HDFS-8182) Implement topology-aware CDN-style caching

2015-04-19 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-8182:
---

 Summary: Implement topology-aware CDN-style caching
 Key: HDFS-8182
 URL: https://issues.apache.org/jira/browse/HDFS-8182
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov


To scale reads of hot blocks in large clusters, it would be beneficial if we 
could read a block across the ToR switches only once. Example scenarios are 
localization of binaries, MR distributed cache files for map-side joins and 
similar. There are multiple layers where this could be implemented (YARN 
service or individual apps such as MR) but I believe it is best done in HDFS or 
even common FileSystem to support as many use cases as possible. 

The life cycle could look like this e.g. for the YARN localization scenario:
1. inputStream = fs.open(path, ..., CACHE_IN_RACK)
2. instead of reading from a remote DN directly, NN tells the client to read 
via the local DN1 and the DN1 creates a replica of each block.

When the next localizer on DN2 in the same rack starts it will learn from NN 
about the replica in DN1 and the client will read from DN1 using the 
conventional path.

When the application ends, the AM or NMs can instruct the NN, in a fadvise-
DONTNEED style, to start telling DNs to discard the extraneous replicas.
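
A sketch of the intended client-side lifecycle; note that CACHE_IN_RACK and 
this open() overload are proposed in this JIRA and do not exist in the current 
FileSystem API:

{code}
// Proposed usage only -- names come from the lifecycle above.
FileSystem fs = FileSystem.get(conf);
// 1. ask the NN to route the read through the rack-local DN, which also
//    creates a rack-local replica of each block it serves
FSDataInputStream in = fs.open(path, bufferSize, CACHE_IN_RACK);
// 2. later readers in the same rack are pointed at the new replica and
//    read it over the conventional path
IOUtils.copyBytes(in, localCopy, conf, true);
// on application end, an fadvise-DONTNEED style call from the AM/NM lets
// the NN tell DNs to discard the extraneous replicas
{code}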







[jira] [Updated] (HDFS-7789) DFSck should resolve the path to support cross-FS symlinks

2015-03-02 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7789:

Summary: DFSck should resolve the path to support cross-FS symlinks  (was: 
DFsck should resolve the path to support cross-FS symlinks)

 DFSck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Updated] (HDFS-7789) DFSck should resolve the path to support cross-FS symlinks

2015-03-02 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7789:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for review, [~lohit]! Committed to trunk and branch-2.

 DFSck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Fix For: 2.7.0

 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Commented] (HDFS-7789) DFsck should resolve the path to support cross-FS symlinks

2015-02-24 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14335182#comment-14335182
 ] 

Gera Shegalov commented on HDFS-7789:
-

[~lohit], can you review this patch?

 DFsck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-02-19 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7439:

Assignee: Takanobu Asanuma

 Add BlockOpResponseProto's message to DFSClient's exception message
 ---

 Key: HDFS-7439
 URL: https://issues.apache.org/jira/browse/HDFS-7439
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Takanobu Asanuma
Priority: Minor

 When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
 if DFSClient can add BlockOpResponseProto's message to the exception message 
 applications will get. For example, instead of
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block "
 + block.getBlockId() + "_" + block.getGenerationStamp());
 {noformat}
 It could be,
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block "
 + block.getBlockId() + "_" + block.getGenerationStamp()
 + ", status message " + status.getMessage());
 {noformat}
 We might want to check out all the references to BlockOpResponseProto in 
 DFSClient.





[jira] [Created] (HDFS-7789) DFsck should resolve the path to support cross-FS symlinks

2015-02-12 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-7789:
---

 Summary: DFsck should resolve the path to support cross-FS symlinks
 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


DFsck should resolve the specified path such that it can be used with viewfs 
and other cross-filesystem symlinks.
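
A sketch of the idea, assuming the public FileSystem#resolvePath API (the 
actual patch may differ):

{code}
// Resolve viewfs/symlink indirection first, then run fsck against the
// filesystem that actually owns the path.
Path target = new Path(args[0]);
FileSystem fs = target.getFileSystem(getConf()); // getConf() from Configured
Path resolved = fs.resolvePath(target); // e.g. viewfs://... -> hdfs://nn1/...
System.out.println("fsck will run against " + resolved.toUri());
{code}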





[jira] [Updated] (HDFS-7789) DFsck should resolve the path to support cross-FS symlinks

2015-02-12 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7789:

Status: Patch Available  (was: Open)

 DFsck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Updated] (HDFS-7789) DFsck should resolve the path to support cross-FS symlinks

2015-02-12 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7789:

Attachment: HDFS-7789.001.patch

v1 for review.

 DFsck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Commented] (HDFS-7789) DFsck should resolve the path to support cross-FS symlinks

2015-02-12 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319121#comment-14319121
 ] 

Gera Shegalov commented on HDFS-7789:
-

TestPipelinesFailover failure is tracked by HDFS-7576

 DFsck should resolve the path to support cross-FS symlinks
 --

 Key: HDFS-7789
 URL: https://issues.apache.org/jira/browse/HDFS-7789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7789.001.patch


 DFsck should resolve the specified path such that it can be used with 
 viewfs and other cross-filesystem symlinks.





[jira] [Commented] (HDFS-7314) Aborted DFSClient's impact on long running service like YARN

2015-01-29 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14297266#comment-14297266
 ] 

Gera Shegalov commented on HDFS-7314:
-

Actually I need to take #1 back; I misspoke: DFS#close does call super.close().
{code}
  @Override
  public void close() throws IOException {
    try {
      dfs.closeOutputStreams(false);
      super.close(); // removes this instance from the FileSystem cache
    } finally {
      dfs.close();
    }
  }
{code}

So it's only about #2.

 Aborted DFSClient's impact on long running service like YARN
 

 Key: HDFS-7314
 URL: https://issues.apache.org/jira/browse/HDFS-7314
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-7314-2.patch, HDFS-7314-3.patch, HDFS-7314-4.patch, 
 HDFS-7314-5.patch, HDFS-7314-6.patch, HDFS-7314-7.patch, HDFS-7314.patch


 It happened in a YARN NodeManager scenario, but it could happen to any 
 long-running service that uses a cached instance of DistributedFileSystem.
 1. The active NN is under heavy load, so it became unavailable for 10 minutes; 
 any DFSClient request will get ConnectTimeoutException.
 2. The YARN NodeManager uses DFSClient for certain write operations, such as 
 log aggregation or the shared cache in YARN-1492. The DFSClient used by the 
 YARN NM's renewLease RPC got ConnectTimeoutException.
 {noformat}
 2014-10-29 01:36:19,559 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
 renew lease for [DFSClient_NONMAPREDUCE_-550838118_1] for 372 seconds.  
 Aborting ...
 {noformat}
 3. After DFSClient is in Aborted state, YARN NM can't use that cached 
 instance of DistributedFileSystem.
 {noformat}
 2014-10-29 20:26:23,991 INFO 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
  Failed to download rsrc...
 java.io.IOException: Filesystem closed
 at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:727)
 at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1780)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
 at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:237)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:340)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:57)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 We can make YARN or DFSClient more tolerant of temporary NN unavailability. 
 Given the call stack is YARN -> DistributedFileSystem -> DFSClient, this can 
 be addressed at different layers.
 * YARN closes the DistributedFileSystem object when it receives some 
 well-defined exception. Then the next HDFS call will create a new instance of 
 DistributedFileSystem. We would have to fix all the places in YARN, and other 
 HDFS applications would need to address this as well.
 * DistributedFileSystem detects an aborted DFSClient and creates a new 
 instance of DFSClient. We would need to fix all the places DistributedFileSystem 
 calls DFSClient.
 * After DFSClient gets into the Aborted state, it doesn't have to reject all 
 requests; instead it can retry. If the NN is available again it can transition 
 to a healthy state.
 Comments?





[jira] [Commented] (HDFS-7314) Aborted DFSClient's impact on long running service like YARN

2015-01-28 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296296#comment-14296296
 ] 

Gera Shegalov commented on HDFS-7314:
-

I think the real problem is that the {{FileSystem}}-level CACHE entry is not 
invalidated/evicted although the DFS Client is closed. 

# DistributedFileSystem#close does not call super.close(), which would achieve 
this.
# DFSClient#abort does not close the wrapping DFS object, nor does DFS try to 
intercept checkOpen to do this.

Solving these issues would solve the scenario described in the JIRA. What do 
you think, [~mingma]?
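
For reference, a sketch of what super.close() buys, per FileSystem's cache 
semantics:

{code}
// FileSystem#close() removes the instance from FileSystem.CACHE, so after
// a close the next get() returns a fresh, healthy client.
FileSystem fs = FileSystem.get(uri, conf);    // cached DistributedFileSystem
fs.close();                                   // evicts the cache entry
FileSystem fresh = FileSystem.get(uri, conf); // new instance, new DFSClient
assert fresh != fs;
{code}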

 Aborted DFSClient's impact on long running service like YARN
 

 Key: HDFS-7314
 URL: https://issues.apache.org/jira/browse/HDFS-7314
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-7314-2.patch, HDFS-7314-3.patch, HDFS-7314-4.patch, 
 HDFS-7314-5.patch, HDFS-7314-6.patch, HDFS-7314-7.patch, HDFS-7314.patch


 It happened in a YARN NodeManager scenario, but it could happen to any 
 long-running service that uses a cached instance of DistributedFileSystem.
 1. The active NN is under heavy load, so it became unavailable for 10 minutes; 
 any DFSClient request will get ConnectTimeoutException.
 2. The YARN NodeManager uses DFSClient for certain write operations, such as 
 log aggregation or the shared cache in YARN-1492. The DFSClient used by the 
 YARN NM's renewLease RPC got ConnectTimeoutException.
 {noformat}
 2014-10-29 01:36:19,559 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
 renew lease for [DFSClient_NONMAPREDUCE_-550838118_1] for 372 seconds.  
 Aborting ...
 {noformat}
 3. After DFSClient is in Aborted state, YARN NM can't use that cached 
 instance of DistributedFileSystem.
 {noformat}
 2014-10-29 20:26:23,991 INFO 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
  Failed to download rsrc...
 java.io.IOException: Filesystem closed
 at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:727)
 at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1780)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
 at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:237)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:340)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:57)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 We can make YARN or DFSClient more tolerant of temporary NN unavailability. 
 Given the call stack is YARN -> DistributedFileSystem -> DFSClient, this can 
 be addressed at different layers.
 * YARN closes the DistributedFileSystem object when it receives some 
 well-defined exception. Then the next HDFS call will create a new instance of 
 DistributedFileSystem. We would have to fix all the places in YARN, and other 
 HDFS applications would need to address this as well.
 * DistributedFileSystem detects an aborted DFSClient and creates a new 
 instance of DFSClient. We would need to fix all the places DistributedFileSystem 
 calls DFSClient.
 * After DFSClient gets into the Aborted state, it doesn't have to reject all 
 requests; instead it can retry. If the NN is available again it can transition 
 to a healthy state.
 Comments?





[jira] [Commented] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-17 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214820#comment-14214820
 ] 

Gera Shegalov commented on HDFS-7398:
-

Thanks for reviewing, Colin!

bq. But we can just have the derived class reset methods call baseReset, right? 
We shouldn't need to call two functions to do the reset.

I should have written that I considered that as well. However, my main 
motivation was to make it less likely that one forgets to call {{super#reset}} 
for a new op. That is why I chose the current design. What is your opinion 
looking at it from that angle?
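
In other words, a template-method shape (a sketch with illustrative names, not 
the attached patch), where the base class drives the reset so a subclass cannot 
forget it:

{code}
abstract class Op {
  long txid;               // base-class state
  final void reset() {     // final: invoked by logEdit, not overridable
    txid = 0;              // base fields are cleared exactly once, here
    resetSubFields();      // a subclass clears only its own fields
  }
  abstract void resetSubFields();
}
{code}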

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch


 This is a follow-up on HDFS-7385.





[jira] [Updated] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-17 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7398:

Attachment: HDFS-7398.v02.patch

[~cnauroth], this is indeed much more elegant! Here is the v02.

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch, HDFS-7398.v02.patch


 This is a follow-up on HDFS-7385.





[jira] [Commented] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-17 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215671#comment-14215671
 ] 

Gera Shegalov commented on HDFS-7398:
-

bq. This version looks great to me. Just one minor nitpick: let's mark 
FSEditLogOp#resetSubFields as protected. That will enforce that visibility is 
open only for subclasses to implement, and not for other classes within the 
same package to call. 

Access to {{protected}} is a superset of package scope according to [JLS 
Section 
6.6.1|https://docs.oracle.com/javase/specs/jls/se8/html/jls-6.html#jls-6.6.1]: 
not only can you call a protected member within the package, you can also do so 
outside the package from a subclass.
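
A two-file illustration with hypothetical classes:

{code}
// a/Base.java
package a;
public class Base { protected void resetSubFields() {} }

// b/Sub.java -- different package; still compiles per JLS 6.6.1
package b;
public class Sub extends a.Base {
  void f() { resetSubFields(); } // protected member visible to the subclass
}
{code}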

 

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch, HDFS-7398.v02.patch


 This is a follow-up on HDFS-7385.





[jira] [Commented] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-14 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14212525#comment-14212525
 ] 

Gera Shegalov commented on HDFS-7398:
-

Regarding the findbug warning:
{quote}
Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream$Packet.dataPos; locked 83% of time
{quote}
It's obviously unrelated. 

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch


 This is a follow-up on HDFS-7385.





[jira] [Commented] (HDFS-7385) ThreadLocal used in FSEditLog class causes FSImage permission mess up

2014-11-13 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211495#comment-14211495
 ] 

Gera Shegalov commented on HDFS-7385:
-

It may be cleaner to introduce an abstract {{FSEditLogOp#reset()}} that all ops 
need to override, and call it in {{FSEditLog#logEdit(FSEditLogOp)}}

{code}
  try {
editLogStream.write(op);
  } catch (IOException ex) {
// All journals failed, it is handled in logSync.
  } finally {
op.reset();
  }
{code}

to catch similar problems with garbage sitting in TLS in the future.

 ThreadLocal used in FSEditLog class causes FSImage permission mess up
 -

 Key: HDFS-7385
 URL: https://issues.apache.org/jira/browse/HDFS-7385
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0, 2.5.0
Reporter: jiangyu
Assignee: jiangyu
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HDFS-7385.2.patch, HDFS-7385.patch


   We migrated our NameNodes from low-configuration to high-configuration 
 machines last week. First, we imported the current directory, including the 
 fsimage and editlog files, from the original active NameNode to the new 
 active NameNode and started the new NameNode; then we changed the 
 configuration of all datanodes and restarted them, so they block-reported to 
 the new NameNodes at once and sent heartbeats after that.
   Everything seemed perfect, but after we restarted the ResourceManager, 
 most of the users complained that their jobs couldn't be executed because of 
 permission problems.
   We use ACLs in our clusters, and after the migration we found that most 
 of the directories and files which had no ACLs set before now had ACL 
 properties. That is the reason why users could not execute their jobs, so we 
 had to change most file permissions to a+r and directory permissions to a+rx 
 to make sure the jobs could be executed.
 After investigating this problem for some days, I found there is a bug in 
 FSEditLog.java. The ThreadLocal variable cache in FSEditLog doesn't set the 
 proper value in the logMkdir and logOpenFile functions. Here is the code of 
 logMkdir:
   public void logMkDir(String path, INode newNode) {
     PermissionStatus permissions = newNode.getPermissionStatus();
     MkdirOp op = MkdirOp.getInstance(cache.get())
       .setInodeId(newNode.getId())
       .setPath(path)
       .setTimestamp(newNode.getModificationTime())
       .setPermissionStatus(permissions);
     AclFeature f = newNode.getAclFeature();
     if (f != null) {
       op.setAclEntries(AclStorage.readINodeLogicalAcl(newNode));
     }
     logEdit(op);
   }
   For example, if we mkdir with ACLs through one handler (a thread, in 
 fact), we set the AclEntries on the op from the cache. After that, if we 
 mkdir without any ACL settings through the same handler, the AclEntries from 
 the cache are the same as in the last call that set ACLs, and because the 
 newNode has no AclFeature, we never get a chance to change them. The editlog 
 is then wrong and records the wrong ACLs. After the Standby loads the 
 editlogs from the journalnodes, applies them to memory on the SNN, saves the 
 namespace, and transfers the wrong fsimage to the ANN, all the fsimages 
 become wrong. The only solution is to save the namespace from the ANN to get 
 the right fsimage.





[jira] [Commented] (HDFS-7385) ThreadLocal used in FSEditLog class causes FSImage permission mess up

2014-11-13 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211511#comment-14211511
 ] 

Gera Shegalov commented on HDFS-7385:
-

Thanks for feedback, Chris and Colin! I'll create a follow-up JIRA.

 ThreadLocal used in FSEditLog class causes FSImage permission mess up
 -

 Key: HDFS-7385
 URL: https://issues.apache.org/jira/browse/HDFS-7385
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0, 2.5.0
Reporter: jiangyu
Assignee: jiangyu
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HDFS-7385.2.patch, HDFS-7385.patch


   We migrated our NameNodes from low-configuration to high-configuration 
 machines last week. First, we imported the current directory, including the 
 fsimage and editlog files, from the original active NameNode to the new 
 active NameNode and started the new NameNode; then we changed the 
 configuration of all datanodes and restarted them, so they block-reported to 
 the new NameNodes at once and sent heartbeats after that.
   Everything seemed perfect, but after we restarted the ResourceManager, 
 most of the users complained that their jobs couldn't be executed because of 
 permission problems.
   We use ACLs in our clusters, and after the migration we found that most 
 of the directories and files which had no ACLs set before now had ACL 
 properties. That is the reason why users could not execute their jobs, so we 
 had to change most file permissions to a+r and directory permissions to a+rx 
 to make sure the jobs could be executed.
 After investigating this problem for some days, I found there is a bug in 
 FSEditLog.java. The ThreadLocal variable cache in FSEditLog doesn't set the 
 proper value in the logMkdir and logOpenFile functions. Here is the code of 
 logMkdir:
   public void logMkDir(String path, INode newNode) {
     PermissionStatus permissions = newNode.getPermissionStatus();
     MkdirOp op = MkdirOp.getInstance(cache.get())
       .setInodeId(newNode.getId())
       .setPath(path)
       .setTimestamp(newNode.getModificationTime())
       .setPermissionStatus(permissions);
     AclFeature f = newNode.getAclFeature();
     if (f != null) {
       op.setAclEntries(AclStorage.readINodeLogicalAcl(newNode));
     }
     logEdit(op);
   }
   For example, if we mkdir with ACLs through one handler (a thread, in 
 fact), we set the AclEntries on the op from the cache. After that, if we 
 mkdir without any ACL settings through the same handler, the AclEntries from 
 the cache are the same as in the last call that set ACLs, and because the 
 newNode has no AclFeature, we never get a chance to change them. The editlog 
 is then wrong and records the wrong ACLs. After the Standby loads the 
 editlogs from the journalnodes, applies them to memory on the SNN, saves the 
 namespace, and transfers the wrong fsimage to the ANN, all the fsimages 
 become wrong. The only solution is to save the namespace from the ANN to get 
 the right fsimage.





[jira] [Created] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-13 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-7398:
---

 Summary: Reset cached thread-local FSEditLogOp's on every 
FSEditLog#logEdit
 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


This is a follow-up on HDFS-7385.





[jira] [Updated] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-13 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7398:

Status: Patch Available  (was: Open)

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch


 This is a follow-up on HDFS-7385.





[jira] [Updated] (HDFS-7398) Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit

2014-11-13 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7398:

Attachment: HDFS-7398.v01.patch

 Reset cached thread-local FSEditLogOp's on every FSEditLog#logEdit
 --

 Key: HDFS-7398
 URL: https://issues.apache.org/jira/browse/HDFS-7398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HDFS-7398.v01.patch


 This is a follow-up on HDFS-7385.





[jira] [Updated] (HDFS-7128) Decommission slows way down when it gets towards the end

2014-09-22 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7128:

Assignee: (was: Gera Shegalov)

 Decommission slows way down when it gets towards the end
 

 Key: HDFS-7128
 URL: https://issues.apache.org/jira/browse/HDFS-7128
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 When we decommission nodes across different racks, the decommission process 
 becomes really slow at the end, hardly making any progress. The problem is 
 that some blocks are on 3 decomm-in-progress DNs, and the way replications 
 are scheduled causes unnecessary delay. Here is the analysis.
 When BlockManager schedules the replication work from neededReplication, it 
 first needs to pick the source node for replication via chooseSourceDatanode. 
 The core policies to pick the source node are:
 1. Prefer decomm-in-progress node.
 2. Only pick the nodes whose outstanding replication counts are below 
 thresholds dfs.namenode.replication.max-streams or 
 dfs.namenode.replication.max-streams-hard-limit, based on the replication 
 priority.
 When we decommission nodes,
 1. All the decommission nodes' blocks will be added to neededReplication.
 2. BM will pick X number of blocks from neededReplication in each iteration. 
 X is based on cluster size and some configurable multiplier. So if the 
 cluster has 2000 nodes, X will be around 4000.
 3. Given these 4000 blocks are on the same decomm-in-progress node A, A ends 
 up being chosen as the source node for all these 4000 blocks. The reason the 
 outstanding replication thresholds don't kick in is the implementation of 
 BlockManager.computeReplicationWorkForBlocks; 
 node.getNumberOfBlocksToBeReplicated() remains zero given 
 node.addBlockToBeReplicated is called after the source node iteration.
 {noformat}
 ...
   synchronized (neededReplications) {
 for (int priority = 0; priority < blocksToReplicate.size(); 
 priority++) {
 ...
 chooseSourceDatanode
 ...
 }
   for(ReplicationWork rw : work){
 ...
   rw.srcNode.addBlockToBeReplicated(block, targets);
 ...
   }
 {noformat}
  
 4. So several decomm-in-progress nodes A, B, C end up with 4000 in 
 node.getNumberOfBlocksToBeReplicated().
 5. If we assume each node can replicate 5 blocks per minute, it is going to 
 take 800 minutes to finish replication of these blocks.
 6. The pending replication timeout kicks in after 5 minutes. The items will be 
 removed from the pending replication queue and added back to 
 neededReplication. The replications will then be handled by other source 
 nodes of these blocks. But the blocks still remain in nodes A, B, C's pending 
 replication queues, DatanodeDescriptor.replicateBlocks, so A, B, C continue 
 the replications of these blocks, although these blocks might have been 
 replicated by other DNs after the replication timeout.
 7. Some blocks' replicas exist on A, B, C and sit at the end of A's pending 
 replication queue. Even though the block's replication times out, no source 
 node can be chosen, given A, B, C all have high pending replication counts. So 
 we have to wait until A drains its pending replication queue. Meanwhile, the 
 items in A's pending replication queue have been taken care of by other nodes 
 and are no longer under-replicated.





[jira] [Assigned] (HDFS-7128) Decommission slows way down when it gets towards the end

2014-09-22 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov reassigned HDFS-7128:
---

Assignee: Gera Shegalov

 Decommission slows way down when it gets towards the end
 

 Key: HDFS-7128
 URL: https://issues.apache.org/jira/browse/HDFS-7128
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Gera Shegalov

 When we decommission nodes across different racks, the decommission process 
 becomes really slow at the end, hardly making any progress. The problem is 
 that some blocks are on 3 decomm-in-progress DNs, and the way replications 
 are scheduled causes unnecessary delay. Here is the analysis.
 When BlockManager schedules the replication work from neededReplication, it 
 first needs to pick the source node for replication via chooseSourceDatanode. 
 The core policies to pick the source node are:
 1. Prefer decomm-in-progress node.
 2. Only pick the nodes whose outstanding replication counts are below 
 thresholds dfs.namenode.replication.max-streams or 
 dfs.namenode.replication.max-streams-hard-limit, based on the replication 
 priority.
 When we decommission nodes,
 1. All the decommission nodes' blocks will be added to neededReplication.
 2. BM will pick X number of blocks from neededReplication in each iteration. 
 X is based on cluster size and some configurable multiplier. So if the 
 cluster has 2000 nodes, X will be around 4000.
 3. Given these 4000 blocks are on the same decomm-in-progress node A, A ends 
 up being chosen as the source node for all these 4000 blocks. The reason the 
 outstanding replication thresholds don't kick in is the implementation of 
 BlockManager.computeReplicationWorkForBlocks; 
 node.getNumberOfBlocksToBeReplicated() remains zero given 
 node.addBlockToBeReplicated is called after the source node iteration.
 {noformat}
 ...
   synchronized (neededReplications) {
 for (int priority = 0; priority < blocksToReplicate.size(); 
 priority++) {
 ...
 chooseSourceDatanode
 ...
 }
   for(ReplicationWork rw : work){
 ...
   rw.srcNode.addBlockToBeReplicated(block, targets);
 ...
   }
 {noformat}
  
 4. So several decomm-in-progress nodes A, B, C end up with 4000 in 
 node.getNumberOfBlocksToBeReplicated().
 5. If we assume each node can replicate 5 blocks per minute, it is going to 
 take 800 minutes to finish replication of these blocks.
 6. The pending replication timeout kicks in after 5 minutes. The items will be 
 removed from the pending replication queue and added back to 
 neededReplication. The replications will then be handled by other source 
 nodes of these blocks. But the blocks still remain in nodes A, B, C's pending 
 replication queues, DatanodeDescriptor.replicateBlocks, so A, B, C continue 
 the replications of these blocks, although these blocks might have been 
 replicated by other DNs after the replication timeout.
 7. Some blocks' replicas exist on A, B, C and sit at the end of A's pending 
 replication queue. Even though the block's replication times out, no source 
 node can be chosen, given A, B, C all have high pending replication counts. So 
 we have to wait until A drains its pending replication queue. Meanwhile, the 
 items in A's pending replication queue have been taken care of by other nodes 
 and are no longer under-replicated.





[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-09-13 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132981#comment-14132981
 ] 

Gera Shegalov commented on HDFS-6888:
-

Thanks, [~airbots] for updating the patch.  +1 (non-binding)

Two nits:
debugCmdSet can be declared as a general Set
{code}
private Set<String> debugCmdSet = new HashSet<String>();
{code}

Its initialization can be more brief:
{code}
  debugCmdSet.addAll(Arrays.asList(conf.getTrimmedStrings(
  DFSConfigKeys.DFS_AUDIT_LOG_DEBUG_CMDLIST)));
{code}


 Remove audit logging of getFIleInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888.patch


 The audit logging of getFileInfo() was added in HDFS-3733.  Since this is a 
 one of the most called method, users have noticed that audit log is now 
 filled with this.  Since we now have HTTP request logging, this seems 
 unnecessary.





[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-22 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107394#comment-14107394
 ] 

Gera Shegalov commented on HDFS-6888:
-

Hi [~airbots], sorry for being unclear.

[~kihwal] suggests:
bq. We could have logAuditEvent() check cmd against getfileinfo or a 
*collection of such commands* and log at debug level. 

Picking this idea up, can you introduce a conf like 
dfs.audit.loglevel.cmdlist=getfileinfo,anotherLogFloodingCmd,...?

In {{o.a.h.hdfs.server.namenode.FSNamesystem.DefaultAuditLogger#initialize}} 
you could read the list using {{auditDebugCmds = 
conf.getTrimmedStrings("dfs.audit.debug.cmdlist")}} and use it for filtering. 
Currently v2 hardcodes getfileinfo.

{code}
--- 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -359,6 +359,9 @@ private void logAuditEvent(boolean succeeded,
   UserGroupInformation ugi, InetAddress addr, String cmd, String src,
   String dst, HdfsFileStatus stat) {
 FileStatus status = null;
+if (cmd.equals("getfileinfo") && !auditLog.isDebugEnabled()) {
+  return;
+}
 if (stat != null) {
   Path symlink = stat.isSymlink() ? new Path(stat.getSymlink()) : null;
   Path path = dst != null ? new Path(dst) : new Path(src);
{code}

Also, auditLog.isDebugEnabled() is a cheaper check and should be done before 
the {{equals}} call.
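
I.e., a sketch of the cheaper ordering:

{code}
// Test the cheap log-level flag first so the common (debug-off) path skips
// the string comparison entirely.
if (!auditLog.isDebugEnabled() && debugCmdSet.contains(cmd)) {
  return; // suppress the audit entry for debug-level commands
}
{code}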




 Remove audit logging of getFIleInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888-2.patch, HDFS-6888.patch


 The audit logging of getFileInfo() was added in HDFS-3733.  Since this is a 
 one of the most called method, users have noticed that audit log is now 
 filled with this.  Since we now have HTTP request logging, this seems 
 unnecessary.





[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-21 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14105623#comment-14105623
 ] 

Gera Shegalov commented on HDFS-6888:
-

getfileinfo is a common RPC, but we have many times used the audit log to 
identify an app that needs some help optimizing its FileSystem API usage.

 Remove audit logging of getFIleInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888.patch


 The audit logging of getFileInfo() was added in HDFS-3733.  Since this is a 
 one of the most called method, users have noticed that audit log is now 
 filled with this.  Since we now have HTTP request logging, this seems 
 unnecessary.





[jira] [Commented] (HDFS-6888) Remove audit logging of getFIleInfo()

2014-08-21 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106362#comment-14106362
 ] 

Gera Shegalov commented on HDFS-6888:
-

[~kihwal], I am +1 for making some commands debug level, so we have the option 
to capture them in the logs or to drop them.
[~airbots], how about making the list of DEBUG-level commands configurable via 
a CSV list for conf.getTrimmedStrings, instead of hardcoding it as in v2 of the 
patch?


 Remove audit logging of getFIleInfo()
 -

 Key: HDFS-6888
 URL: https://issues.apache.org/jira/browse/HDFS-6888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Kihwal Lee
Assignee: Chen He
  Labels: log
 Attachments: HDFS-6888-2.patch, HDFS-6888.patch


 The audit logging of getFileInfo() was added in HDFS-3733.  Since this is a 
 one of the most called method, users have noticed that audit log is now 
 filled with this.  Since we now have HTTP request logging, this seems 
 unnecessary.





[jira] [Commented] (HDFS-6540) TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames

2014-06-16 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14032202#comment-14032202
 ] 

Gera Shegalov commented on HDFS-6540:
-

This issue seems to apply only to branch-2.4. We should change the target 
version to 2.4.1. 

It's cumbersome to define a regex for usernames. For example, usernames must 
not start with '-' but may contain a '.'. To avoid dealing with this, you can 
paste the value of {{System.getProperty("user.name")}} into this component of 
the regex.

{code}
"([d\\-])([rwx\\-]{9})\\s*(-|\\d+)\\s*" + System.getProperty("user.name") +
"\\s*([a-zA-Z_0-9\\-]+)\\s*(\\d+)\\s*(\\d+)\\s*([\b/]+)"
{code}

 TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames
 

 Key: HDFS-6540
 URL: https://issues.apache.org/jira/browse/HDFS-6540
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HDFS-6540.patch


 TestOfflineImageViewer.outputOfLSVisitor() fails if the username contains - 
 (dash). A dash is a valid character in a username.





[jira] [Created] (HDFS-6452) ConfiguredFailoverProxyProvider should randomize currentProxyIndex on initialization

2014-05-27 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-6452:
---

 Summary: ConfiguredFailoverProxyProvider should randomize 
currentProxyIndex on initialization
 Key: HDFS-6452
 URL: https://issues.apache.org/jira/browse/HDFS-6452
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, hdfs-client
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


We observe that the clients iterate proxies in a fixed order. Depending on 
the order of namenodes in dfs.ha.namenodes.nameservice (e.g. 'nn1,nn2') and 
the current standby (nn1), all the clients will hit nn1 first, and then 
failover to nn2.  Chatting with [~lohit] we think we can simply select the 
initial value of {{currentProxyIndex}} randomly, and keep the logic of 
{{performFailover}} of iterating from left-to-right. This should halve the 
unnecessary load on standby NN.
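For illustration, a minimal sketch of the proposed initialization (field and 
method names follow {{ConfiguredFailoverProxyProvider}}; constructor details 
are elided):

{code}
private final Random random = new Random();
private int currentProxyIndex;

public ConfiguredFailoverProxyProvider(Configuration conf, URI uri,
    Class<T> xface) {
  // ... existing setup that populates the proxies list ...
  // Start at a random proxy instead of always proxies.get(0).
  currentProxyIndex = random.nextInt(proxies.size());
}

@Override
public void performFailover(T currentProxy) {
  // Unchanged left-to-right rotation, now from a random starting point.
  currentProxyIndex = (currentProxyIndex + 1) % proxies.size();
}
{code}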





[jira] [Commented] (HDFS-6452) ConfiguredFailoverProxyProvider should randomize currentProxyIndex on initialization

2014-05-27 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010631#comment-14010631
 ] 

Gera Shegalov commented on HDFS-6452:
-

Hi Aaron, the net improvement is that the overhead gets smoothed over time. 
E.g., we will smooth the storm of {{StandbyException: Operation category READ 
is not supported in state standby}}.

[~jingzhao], this is targeted at deployments with automatic failover, where the 
emphasis is on not having to track which NN is active at all times. 

 ConfiguredFailoverProxyProvider should randomize currentProxyIndex on 
 initialization
 

 Key: HDFS-6452
 URL: https://issues.apache.org/jira/browse/HDFS-6452
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, hdfs-client
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov

 We observe that the clients iterate proxies in a fixed order. Depending on 
 the order of namenodes in dfs.ha.namenodes.nameservice (e.g. 'nn1,nn2') and 
 the current standby (nn1), all the clients will hit nn1 first, and then 
 failover to nn2.  Chatting with [~lohit] we think we can simply select the 
 initial value of {{currentProxyIndex}} randomly, and keep the logic of 
 {{performFailover}} of iterating from left-to-right. This should halve the 
 unnecessary load on standby NN.





[jira] [Commented] (HDFS-6452) ConfiguredFailoverProxyProvider should randomize currentProxyIndex on initialization

2014-05-27 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14010672#comment-14010672
 ] 

Gera Shegalov commented on HDFS-6452:
-

Lohit is correct: once we implement a readable standby, similar to what some 
database systems provide, the fraction of failed requests even in the normal 
case will be well below 50%. Making randomization optional is a good idea.
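For illustration, making it opt-in could look like this 
({{dfs.client.failover.random.order}} is an assumed flag name here):

{code}
// Randomize the starting proxy only when explicitly enabled;
// the default preserves today's deterministic behavior.
boolean randomOrder = conf.getBoolean("dfs.client.failover.random.order", false);
currentProxyIndex = randomOrder ? random.nextInt(proxies.size()) : 0;
{code}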

 ConfiguredFailoverProxyProvider should randomize currentProxyIndex on 
 initialization
 

 Key: HDFS-6452
 URL: https://issues.apache.org/jira/browse/HDFS-6452
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, hdfs-client
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov

 We observe that the clients iterate proxies in a fixed order. Depending on 
 the order of namenodes in dfs.ha.namenodes.nameservice (e.g. 'nn1,nn2') and 
 the current standby (nn1), all the clients will hit nn1 first, and then 
 failover to nn2.  Chatting with [~lohit] we think we can simply select the 
 initial value of {{currentProxyIndex}} randomly, and keep the logic of 
 {{performFailover}} of iterating from left-to-right. This should halve the 
 unnecessary load on standby NN.





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988058#comment-13988058
 ] 

Gera Shegalov commented on HDFS-6193:
-

Hi [~ozawa], yeah, Hftp was recently kicked out with HDFS-5570.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988171#comment-13988171
 ] 

Gera Shegalov commented on HDFS-6193:
-

Will upload a fixed version shortly.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Attachment: HDFS-6193-branch-2.4.v02.patch

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-08 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13963139#comment-13963139
 ] 

Gera Shegalov commented on HDFS-6143:
-

[~daryn], thanks for the review. I agree that there is a performance cost 
associated with streaming too early if we are going to seek right away, e.g., 
to a split offset. What do you think of just invoking {{getFileStatus}} as the 
first thing in {{WebHdfsFileSystem.open}}?

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v02.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-08 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13963576#comment-13963576
 ] 

Gera Shegalov commented on HDFS-6143:
-

[~daryn], I think calling {{getFileStatus}} in open to get FNF, while keeping 
the old lazy open, will have the same effect.

{code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index bdf744a..bad1534 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -898,6 +898,7 @@ public boolean delete(Path f, boolean recursive) throws IOException {
   @Override
   public FSDataInputStream open(final Path f, final int buffersize
       ) throws IOException {
+    getFileStatus(f);
     statistics.incrementReadOps(1);
     final HttpOpParam.Op op = GetOpParam.Op.OPEN;
     final URL url = toUrl(op, f, new BufferSizeParam(buffersize));
{code}


 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v02.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-08 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13963595#comment-13963595
 ] 

Gera Shegalov commented on HDFS-6143:
-

[~szetszwo], it's easy to understand the issue if you review the unit test 
modifications in the patch. As for our production use case, which works fine on 
the LocalFileSystem and HDFS, please review splittable Lzo in elephant bird. 
Hadoop-lzo's {{LzoIndex.readIndex}} reads the index file and exploits the fact 
that the FS should [error out opening a non-existing 
file|https://github.com/twitter/hadoop-lzo/blob/master/src/main/java/com/hadoop/compression/lzo/LzoIndex.java#L176],
 instead of using an RPC to get the file status directly or via {{exists}}.
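For readers unfamiliar with the pattern, a minimal sketch of what such callers 
do (not the actual hadoop-lzo code; {{indexPath}} is illustrative):

{code}
FSDataInputStream in = null;
try {
  // Rely on open() failing fast for a missing index file,
  // instead of paying a separate exists()/getFileStatus() RPC.
  in = fs.open(indexPath);
  // ... read the index entries ...
} catch (FileNotFoundException fnfe) {
  // No index: fall back to treating the .lzo file as unsplittable.
} finally {
  IOUtils.closeStream(in);
}
{code}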

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v02.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-07 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13962224#comment-13962224
 ] 

Gera Shegalov commented on HDFS-6143:
-

Hi [~wheat9],

bq. Should this be 404 instead of 410? In UNIX, opening a unresolved symlink 
return ENOENT.

That's server-side. On the client side, {{HttpUrlConnection}} translates 
{{GONE}} to {{FileNotFoundException}} as well. 

On another note, if you look at the API for {{AbstractFileSystem}}, a potential 
future WebHdfs implementation will be able to surface the real cause as 
{{UnresolvedLinkException}}.

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-07 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143-trunk-after-HDFS-5570.v02.patch

Removing the ExceptionHandler change; it will be addressed in a separate JIRA.

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v02.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Created] (HDFS-6199) WebHdfsFileSystem does not treat UnresolvedLinkException

2014-04-07 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-6199:
---

 Summary: WebHdfsFileSystem does not treat UnresolvedLinkException
 Key: HDFS-6199
 URL: https://issues.apache.org/jira/browse/HDFS-6199
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Critical


{{o.a.h.hdfs.web.resources.ExceptionHandler.toResponse}} does not handle 
{{UnresolvedLinkException}}. This implies the client side will generate a 
generic {{IOException}} or an unchecked exception. 

The server should return {{GONE}}, which can be translated to 
{{FileNotFoundException}}, or the actual {{UnresolvedLinkException}} if the API 
allows it.
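One possible shape of the server-side fix, sketched against 
{{ExceptionHandler.toResponse}} (the surrounding structure of that method may 
differ):

{code}
// Map an unresolved link to GONE so the client-side HttpURLConnection
// surfaces it as FileNotFoundException.
if (e instanceof UnresolvedLinkException) {
  s = Response.Status.GONE;
}
{code}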





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961611#comment-13961611
 ] 

Gera Shegalov commented on HDFS-6193:
-

[~ste...@apache.org], thanks for following up. 

bq. Interesting that FileSystemContractBaseTest doesn't catch this

FileSystemContractBaseTest does not have a test for {{open}} on a 
non-existing path. Neither did {{TestHftpFileSystem}}. 
{{TestWebHdfsFileSystemContract.testOpenNonExistFile}} had an incorrect 
implementation that relied on {{read}} to fail.

bq. We could optimise any of the web filesystems by not doing that open (e,g, 
S3, s3n, swift) and waiting for the first seek. But we don't because things 
expect missing files to not be there.

Note that a seek for WebHdfs/Hftp is a client-only operation as well. Deferring 
the real open to a stream operation is misleading because the application 
presumes an open stream when issuing a stream operation.




 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker

 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143-trunk-after-HDFS-5570.v01.patch

[~wheat9], I am uploading a new patch for trunk, and will follow up with 
patches for branch-2.4.0. The change to ExceptionHandler was done specifically 
to address [~jingzhao]'s comments in the original patch.

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch, 
 HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Attachment: HDFS-6193-branch-2.4.0.v01.patch

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Affects Version/s: (was: 2.3.0)
   2.4.0
   Status: Patch Available  (was: Open)

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143-branch-2.4.0.v01.patch

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Created] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-05 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-6193:
---

 Summary: HftpFileSystem open should throw FileNotFoundException 
for non-existing paths
 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker


WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing 
paths. 
- 'open' does not really open anything, i.e., it does not contact the server, 
and therefore cannot discover FileNotFound; that is deferred until the next 
read. It's counterintuitive and not how the local FS or HDFS work. In POSIX you 
get ENOENT on open. 
[LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of the code that's broken because of this.

- On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead 
of SC_NOT_FOUND for non-existing paths.







[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-05 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Summary: WebHdfsFileSystem open should throw FileNotFoundException for 
non-existing paths  (was: (WebHdfs|Hftp)FileSystem open should throw 
FileNotFoundException for non-existing paths)

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-05 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v05.patch

I split the patch as requested. However, I hope that the fix for both hftp and 
webhdfs will be merged. It's pretty straightforward because they share the 
logic of {{ByteRangeInputStream}}.

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v05.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-05 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v06.patch

The v05 patch did not apply because HDFS-5570 removed TestByteRangeInputStream. 
Was it intentional?

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-05 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961339#comment-13961339
 ] 

Gera Shegalov commented on HDFS-6143:
-

The test failure seems unrelated; it was reported earlier in HDFS-6160. A rerun 
of org.apache.hadoop.hdfs.TestSafeMode on my laptop succeeded.

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-04-02 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957402#comment-13957402
 ] 

Gera Shegalov commented on HDFS-6143:
-

The exception in 
https://builds.apache.org/job/PreCommit-HDFS-Build/6561//testReport/ is 
reported in HDFS-6183.

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v03.patch

v03 for review. 

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Critical
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Status: Patch Available  (was: Open)

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Critical
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Priority: Blocker  (was: Critical)

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v04.patch

v04 to resolve merge conflicts with HDFS-4564. 

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Description: 
WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing 
paths. 
- 'open' does not really open anything, i.e., it does not contact the server, 
and therefore cannot discover FileNotFound; that is deferred until the next 
read. It's counterintuitive and not how the local FS or HDFS work. In POSIX you 
get ENOENT on open. 
[LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of the code that's broken because of this.

- On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead 
of SC_NOT_FOUND for non-existing paths.



  was:
HftpFileSystem.open incorrectly handles non-existing paths. 
- 'open' does not really open anything, i.e., it does not contact the server, 
and therefore cannot discover FileNotFound; that is deferred until the next 
read. It's counterintuitive and not how the local FS or HDFS work. In POSIX you 
get ENOENT on open. 
[LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of the code that's broken because of this.

- On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead 
of SC_NOT_FOUND for non-existing paths.




 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v04.patch

Not sure yet whether 
{{org.apache.hadoop.hdfs.web.TestWebHDFS#testNamenodeRestart}} is related. It 
passes locally. Resubmitting the patch to get another test run.

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
 HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-30 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Summary: (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException 
for non-existing paths  (was: HftpFileSystem open should throw 
FileNotFoundException for non-existing paths)

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Critical
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths

2014-03-30 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954844#comment-13954844
 ] 

Gera Shegalov commented on HDFS-6143:
-

WebHdfsFileSystem has the same problem. I'll update the patch to reflect.

 (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for 
 non-existing paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Critical
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-03-24 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v02.patch

[~ste...@apache.org], thanks for the review. I tried to keep the patch as small 
as possible. Here is an updated v02 of the patch to accommodate your comments. 
I noticed that there are more issues with how server-side exceptions are 
translated in FileDataServlet, and made that translation more thorough.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.





[jira] [Created] (HDFS-6143) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-03-23 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-6143:
---

 Summary: HftpFileSystem open should throw FileNotFoundException 
for non-existing paths
 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


HftpFileSystem.open incorrectly handles non-existing paths. 
- 'open' does not really open anything, i.e., it does not contact the server, 
and therefore cannot discover FileNotFound; that is deferred until the next 
read. It's counterintuitive and not how the local FS or HDFS work. In POSIX you 
get ENOENT on open. 
[LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of the code that's broken because of this.

- On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead 
of SC_NOT_FOUND for non-existing paths.







[jira] [Updated] (HDFS-6143) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-03-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Priority: Blocker  (was: Major)
Target Version/s: 2.4.0

 HftpFileSystem open should throw FileNotFoundException for non-existing 
 paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker

 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; that is deferred until 
 the next read. It's counterintuitive and not how the local FS or HDFS work. 
 In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6143) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-03-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143.v01.patch

v01 of the patch for review

 HftpFileSystem open should throw FileNotFoundException for non-existing 
 paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; detection is deferred 
 until the first read. That is counterintuitive and not how the local FS or 
 HDFS work. In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of code that is broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6143) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-03-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Status: Patch Available  (was: Open)

 HftpFileSystem open should throw FileNotFoundException for non-existing 
 paths
 ---

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6143.v01.patch


 HftpFileSystem.open incorrectly handles non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; detection is deferred 
 until the first read. That is counterintuitive and not how the local FS or 
 HDFS work. In POSIX you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
 is an example of code that is broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6045) A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all path components.

2014-03-03 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6045:


Description: 
This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
Distributed Cache. The deeper the path, the more beneficial the feature.


  was:This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of 
PUBLIC Distributed Cache.


 A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all 
 path components.
 --

 Key: HDFS-6045
 URL: https://issues.apache.org/jira/browse/HDFS-6045
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Gera Shegalov

 This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
 Distributed Cache. The deeper the path, the more beneficial the feature.
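
To make the motivation concrete, here is a sketch of what a client has to do 
today with the standard FileSystem API, one getFileStatus RPC per path 
component; the helper itself is hypothetical:

{code}
// Sketch of the status quo this proposal wants to avoid.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class AncestorStatuses {
  /** O(depth) round trips today; the proposed API would need just one. */
  static List<FileStatus> statusOfAllComponents(FileSystem fs, Path f)
      throws IOException {
    List<FileStatus> statuses = new ArrayList<FileStatus>();
    for (Path p = f; p != null; p = p.getParent()) {
      statuses.add(0, fs.getFileStatus(p)); // one RPC per path component
    }
    return statuses;
  }
}
{code}

With a path of depth d this costs d RPCs; the proposed call would return the 
same FileStatus[] in a single round trip.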



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6045) A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all path components.

2014-03-03 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-6045:
---

 Summary: A single RPC API: FileStatus[] getFileStatus(Path f) to 
get status of all path components.
 Key: HDFS-6045
 URL: https://issues.apache.org/jira/browse/HDFS-6045
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Gera Shegalov


This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
Distributed Cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6045) A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all path components.

2014-03-03 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918688#comment-13918688
 ] 

Gera Shegalov commented on HDFS-6045:
-

bq. [~chris.douglas] mentioned on YARN-1771: Symlinks might be awkward to 
support, but that discussion is for a separate ticket. Do you have a JIRA ref?

For simplicity, I think the semantics should return an array corresponding to 
the fully resolved path, which does not contain any symlinks.

If f=/a/b/c/d/e/f, b is a symlink b -> /tmp/x, and e is a symlink e -> y,

then the returned array will correspond to /tmp/x/c/d/y/f.
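
Spelling that out with a hypothetical method name (the API shape is not 
settled, and this ignores symlinks that cross filesystems):

{code}
// Hypothetical API name, for illustration only:
// FileStatus[] statuses = fs.getFileStatusAlongPath(new Path("/a/b/c/d/e/f"));
// With symlinks b -> /tmp/x and e -> y, the array corresponds to the fully
// resolved path /tmp/x/c/d/y/f, one entry per component:
//   statuses[0] -> /tmp
//   statuses[1] -> /tmp/x
//   statuses[2] -> /tmp/x/c
//   statuses[3] -> /tmp/x/c/d
//   statuses[4] -> /tmp/x/c/d/y
//   statuses[5] -> /tmp/x/c/d/y/f
{code}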

 A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all 
 path components.
 --

 Key: HDFS-6045
 URL: https://issues.apache.org/jira/browse/HDFS-6045
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Gera Shegalov

 This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
 Distributed Cache. The deeper the path, the more beneficial the feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6045) A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all path components.

2014-03-03 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov reassigned HDFS-6045:
---

Assignee: Gera Shegalov

 A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all 
 path components.
 --

 Key: HDFS-6045
 URL: https://issues.apache.org/jira/browse/HDFS-6045
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Gera Shegalov
Assignee: Gera Shegalov

 This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
 Distributed Cache. The deeper the path, the more beneficial the feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6045) A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all path components.

2014-03-03 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918900#comment-13918900
 ] 

Gera Shegalov commented on HDFS-6045:
-

Hi Andrew, yes, I missed the fact that symlinks can target an absolute URI on 
a different filesystem. You captured the goal of this JIRA correctly.



 A single RPC API: FileStatus[] getFileStatus(Path f) to get status of all 
 path components.
 --

 Key: HDFS-6045
 URL: https://issues.apache.org/jira/browse/HDFS-6045
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Gera Shegalov

 This comes up in YARN-1771/MAPREDUCE-4907 on the server/client side of PUBLIC 
 Distributed Cache. The deeper the path, the more beneficial the feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5821) TestHDFSCLI fails for user names with the dash character

2014-02-24 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13911101#comment-13911101
 ] 

Gera Shegalov commented on HDFS-5821:
-

Thanks for looking into this, [~arpitagarwal]! I was not sure whether this 
fragment is even needed because it's commented out.

 TestHDFSCLI fails for user names with the dash character
 

 Key: HDFS-5821
 URL: https://issues.apache.org/jira/browse/HDFS-5821
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov
 Attachments: HDFS-5821-trunk.v01.patch


 testHDFSConf.xml uses regexes inconsistently to match the username, ranging 
 from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. These come nowhere near 
 covering the space of possible OS user names. For us, the test fails for a 
 user name containing {{'-'}}. Instead of continually updating the regexes, we 
 propose to use the macro USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5820) TestHDFSCLI does not work for user names with '-'

2014-02-12 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5820.
-

Resolution: Duplicate

 TestHDFSCLI does not work for user names with '-'
 

 Key: HDFS-5820
 URL: https://issues.apache.org/jira/browse/HDFS-5820
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Gera Shegalov





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-12 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5879:


Fix Version/s: 2.3.0
   Status: Patch Available  (was: Open)

 TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis
 -

 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov
 Fix For: 2.3.0

 Attachments: HDFS-5879.v01.patch


 FSDataInputStream should be closed once no longer needed for reading in  
 testFileNameEncoding and testSeek.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-12 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5879:


Status: Open  (was: Patch Available)

 TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis
 -

 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov
 Attachments: HDFS-5879.v01.patch


 FSDataInputStream should be closed once no longer needed for reading in  
 testFileNameEncoding and testSeek.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-05 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892696#comment-13892696
 ] 

Gera Shegalov commented on HDFS-5879:
-

javadoc issue is tracked in trunk by HADOOP-10325

 TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis
 -

 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov
 Attachments: HDFS-5879.v01.patch


 FSDataInputStream should be closed once no longer needed for reading in  
 testFileNameEncoding and testSeek.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-04 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-5879:
---

 Summary: TestHftpFileSystem: testFileNameEncoding and testSeek 
leak open fsdis
 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov


FSDataInputStream should be closed once no longer needed for reading in  
testFileNameEncoding and testSeek.
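
For illustration, a minimal sketch of the intended cleanup (the helper is 
hypothetical, not the test code itself), using try-with-resources; on a 
pre-Java-7 JDK an explicit finally block does the same job:

{code}
// Minimal sketch: scope the stream so it is always closed.
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class StreamHygiene {
  static int readFirstByte(FileSystem fs, Path p) throws IOException {
    // The FSDataInputStream is closed even if read() throws.
    try (FSDataInputStream in = fs.open(p)) {
      return in.read();
    }
  }
}
{code}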



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-04 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5879:


Attachment: HDFS-5879.v01.patch

 TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis
 -

 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov
 Attachments: HDFS-5879.v01.patch


 FSDataInputStream should be closed once no longer needed for reading in  
 testFileNameEncoding and testSeek.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5879) TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis

2014-02-04 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5879:


Status: Patch Available  (was: Open)

 TestHftpFileSystem: testFileNameEncoding and testSeek leak open fsdis
 -

 Key: HDFS-5879
 URL: https://issues.apache.org/jira/browse/HDFS-5879
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Gera Shegalov
 Attachments: HDFS-5879.v01.patch


 FSDataInputStream should be closed once no longer needed for reading in  
 testFileNameEncoding and testSeek.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5811) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5811.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5811
 URL: https://issues.apache.org/jira/browse/HDFS-5811
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {code}-{code}. So instead 
 of continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5816) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5816.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5816
 URL: https://issues.apache.org/jira/browse/HDFS-5816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {{-}}. So instead of 
 continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5812) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5812.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5812
 URL: https://issues.apache.org/jira/browse/HDFS-5812
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {code}-{code}. So instead 
 of continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5817) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5817.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5817
 URL: https://issues.apache.org/jira/browse/HDFS-5817
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {{-}}. So instead of 
 continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5819) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5819.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5819
 URL: https://issues.apache.org/jira/browse/HDFS-5819
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {{-}}. So instead of 
 continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HDFS-5818) TestHDFSCLI fails for a user name with dash

2014-01-31 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov resolved HDFS-5818.
-

Resolution: Duplicate

ASF JIRA was flaky and created duplicates while reporting an error to the 
frontend.

 TestHDFSCLI fails for a user name with dash
 ---

 Key: HDFS-5818
 URL: https://issues.apache.org/jira/browse/HDFS-5818
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov

 testHDFSConf.xml uses a regex to describe the username. The regexes are used 
 inconsistently, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. 
 Clearly, OSes are less restrictive than that, and for us specifically the 
 test fails for a build user whose name contains a {{-}}. So instead of 
 continually updating the regex, I propose to replace it with the macro 
 USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HDFS-5821) TestHDFSCLI fails for user names with the dash character

2014-01-23 Thread Gera Shegalov (JIRA)
Gera Shegalov created HDFS-5821:
---

 Summary: TestHDFSCLI fails for user names with the dash character
 Key: HDFS-5821
 URL: https://issues.apache.org/jira/browse/HDFS-5821
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov


testHDFSConf.xml uses regexes inconsistently to match the username, ranging 
from {{[a-zA-z0-9]*}} to {{[a-z]*}}. These come nowhere near covering the 
space of possible OS user names. For us, the test fails for a user name 
containing a {{-}}. Instead of continually updating the regexes, we propose to 
use the macro USERNAME.
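
As a quick, self-contained illustration of why chasing these regexes is a 
losing game (the user name below is hypothetical):

{code}
public class UsernameRegexDemo {
  public static void main(String[] args) {
    String user = "build-user"; // hypothetical user name with a dash
    // Neither pattern from testHDFSConf.xml accepts the dash:
    System.out.println(user.matches("[a-z]*"));       // false
    System.out.println(user.matches("[a-zA-z0-9]*")); // false, '-' not in class
    // Hence the proposal: put the literal USERNAME macro in the expected
    // output and let the test harness expand it to the real user at runtime.
  }
}
{code}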



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5821) TestHDFSCLI fails for user names with the dash character

2014-01-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5821:


Attachment: HDFS-5821-trunk.v01.patch

Patch with the proposed fix

 TestHDFSCLI fails for user names with the dash character
 

 Key: HDFS-5821
 URL: https://issues.apache.org/jira/browse/HDFS-5821
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov
 Attachments: HDFS-5821-trunk.v01.patch


 testHDFSConf.xml uses regexes inconsistently to match the username, ranging 
 from {{[a-zA-z0-9]*}} to {{[a-z]*}}. These come nowhere near covering the 
 space of possible OS user names. For us, the test fails for a user name 
 containing a {{-}}. Instead of continually updating the regexes, we propose 
 to use the macro USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5821) TestHDFSCLI fails for user names with the dash character

2014-01-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5821:


Status: Patch Available  (was: Open)

 TestHDFSCLI fails for user names with the dash character
 

 Key: HDFS-5821
 URL: https://issues.apache.org/jira/browse/HDFS-5821
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov
 Attachments: HDFS-5821-trunk.v01.patch


 testHDFSConf.xml uses regexes inconsistently to match the username, ranging 
 from {{[a-zA-z0-9]*}} to {{[a-z]*}}. These come nowhere near covering the 
 space of possible OS user names. For us, the test fails for a user name 
 containing a {{-}}. Instead of continually updating the regexes, we propose 
 to use the macro USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5821) TestHDFSCLI fails for user names with the dash character

2014-01-23 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-5821:


Description: testHDFSConf.xml uses regexes inconsistently to match the 
username, ranging from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. These 
come nowhere near covering the space of possible OS user names. For us, the 
test fails for a user name containing {{'-'}}. Instead of continually updating 
the regexes, we propose to use the macro USERNAME.  (was: testHDFSConf.xml 
uses regexes inconsistently to match the username, ranging from 
{{[a-zA-z0-9]*}} to {{[a-z]*}}. These come nowhere near covering the space of 
possible OS user names. For us, the test fails for a user name containing a 
{{-}}. Instead of continually updating the regexes, we propose to use the 
macro USERNAME.)

 TestHDFSCLI fails for user names with the dash character
 

 Key: HDFS-5821
 URL: https://issues.apache.org/jira/browse/HDFS-5821
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
Reporter: Gera Shegalov
 Attachments: HDFS-5821-trunk.v01.patch


 testHDFSConf.xml uses regexes inconsistently to match the username, ranging 
 from {code}[a-zA-z0-9]*{code} to {code}[a-z]*{code}. These come nowhere near 
 covering the space of possible OS user names. For us, the test fails for a 
 user name containing {{'-'}}. Instead of continually updating the regexes, we 
 propose to use the macro USERNAME.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)