[jira] Created: (HDFS-1335) HDFS side of HADOOP-6904: first step towards inter-version communications between dfs client and NameNode

2010-08-09 Thread Hairong Kuang (JIRA)
HDFS side of HADOOP-6904: first step towards inter-version communications 
between dfs client and NameNode
-

 Key: HDFS-1335
 URL: https://issues.apache.org/jira/browse/HDFS-1335
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0


The idea is that in getProtocolVersion, when the server version is greater 
than the client version, the NameNode checks whether the two versions are 
compatible. If they are not, it throws a VersionIncompatible exception; 
otherwise it returns the server version.

On the dfs client side, when creating a NameNode proxy, the client catches 
the VersionMismatch exception and, when the client version is greater than 
the server version, checks whether the two versions are compatible. If they 
are not, it throws a VersionIncompatible exception; otherwise it records the 
server version and continues.
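A minimal sketch of the handshake described above. The names here (VersionHandshake, isCompatible, clientSideCheck) are illustrative stand-ins, not the actual API in hdfsRpcVersion.patch; real HDFS would use its protocol version constants and a dedicated VersionIncompatible exception class.

```java
public class VersionHandshake {
    // Stand-in for the ProtocolCompatible check: which unequal version pairs
    // may still talk to each other. Example policy: 60 and 61 interoperate.
    static boolean isCompatible(long clientVersion, long serverVersion) {
        return (clientVersion == 60 && serverVersion == 61)
            || (clientVersion == 61 && serverVersion == 60);
    }

    // Server side (getProtocolVersion): if the server is newer than the
    // client and the pair is not compatible, reject; otherwise report the
    // server's version back to the client.
    static long getProtocolVersion(long clientVersion, long serverVersion) {
        if (serverVersion > clientVersion
                && !isCompatible(clientVersion, serverVersion)) {
            throw new RuntimeException("VersionIncompatible"); // stand-in exception
        }
        return serverVersion;
    }

    // Client side: after learning the server's version, a newer client checks
    // compatibility itself; on success it records the server version and continues.
    static long clientSideCheck(long clientVersion, long serverVersion) {
        long v = getProtocolVersion(clientVersion, serverVersion);
        if (clientVersion > v && !isCompatible(clientVersion, v)) {
            throw new RuntimeException("VersionIncompatible");
        }
        return v; // the recorded server version
    }
}
```

Either side rejects only when the versions differ in its direction and the pair is not listed as compatible; an exact match always succeeds.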

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-320) Namenode should return lease recovery request with other requests

2010-08-09 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896640#action_12896640
 ] 

Kan Zhang commented on HDFS-320:


I think I have worked around this issue. I'm fine if you resolve it as Won't Fix.

 Namenode should return lease recovery request with other requests
 -

 Key: HDFS-320
 URL: https://issues.apache.org/jira/browse/HDFS-320
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: HADOOP-5481.patch


 HADOOP-5034 modified the NN to return both replication and deletion requests 
 to the DN in one reply to a heartbeat. However, the lease recovery request is 
 still sent separately by itself. Is there a reason for this? If not, I suggest 
 we combine them. This will make it less confusing when adding new types of 
 requests, which could be combined as well.
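The combined reply proposed above can be sketched as follows; Cmd and buildReply are hypothetical stand-ins for DatanodeCommand and the NN's heartbeat handling, shown only to illustrate bundling lease recovery into the same response as replication and deletion.

```java
import java.util.ArrayList;
import java.util.List;

public class HeartbeatReply {
    // Hypothetical command kinds; in HDFS these are DatanodeCommand subclasses.
    enum Cmd { REPLICATE, DELETE, RECOVER_LEASE }

    // NameNode side: bundle all pending work for a DN into one heartbeat
    // reply, so lease recovery rides along instead of needing a separate call.
    static List<Cmd> buildReply(boolean needReplication, boolean needDeletion,
                                boolean needLeaseRecovery) {
        List<Cmd> cmds = new ArrayList<>();
        if (needReplication) cmds.add(Cmd.REPLICATE);
        if (needDeletion) cmds.add(Cmd.DELETE);
        if (needLeaseRecovery) cmds.add(Cmd.RECOVER_LEASE);
        return cmds;
    }
}
```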




[jira] Updated: (HDFS-1335) HDFS side of HADOOP-6904: first step towards inter-version communications between dfs client and NameNode

2010-08-09 Thread Hairong Kuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hairong Kuang updated HDFS-1335:


Attachment: hdfsRpcVersion.patch

Here is a patch for review.

 HDFS side of HADOOP-6904: first step towards inter-version communications 
 between dfs client and NameNode
 -

 Key: HDFS-1335
 URL: https://issues.apache.org/jira/browse/HDFS-1335
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: hdfsRpcVersion.patch


 The idea is that in getProtocolVersion, when the server version is greater 
 than the client version, the NameNode checks whether the two versions are 
 compatible. If they are not, it throws a VersionIncompatible exception; 
 otherwise it returns the server version.
 On the dfs client side, when creating a NameNode proxy, the client catches 
 the VersionMismatch exception and, when the client version is greater than 
 the server version, checks whether the two versions are compatible. If they 
 are not, it throws a VersionIncompatible exception; otherwise it records the 
 server version and continues.




[jira] Commented: (HDFS-1335) HDFS side of HADOOP-6904: first step towards inter-version communications between dfs client and NameNode

2010-08-09 Thread Doug Cutting (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896655#action_12896655
 ] 

Doug Cutting commented on HDFS-1335:


If I understand the patch correctly, it still requires an exact version match 
by default.  That seems good.  What I don't understand is how you expect that 
to be altered.  Do you expect folks to update the implementation of 
ProtocolCompatible as protocols evolve?  Perhaps you can give some examples of 
how you expect this to work?

Since there's more than one protocol in HDFS, do you expect to add more methods 
to ProtocolCompatible for each protocol?

 HDFS side of HADOOP-6904: first step towards inter-version communications 
 between dfs client and NameNode
 -

 Key: HDFS-1335
 URL: https://issues.apache.org/jira/browse/HDFS-1335
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: hdfsRpcVersion.patch


 The idea is that in getProtocolVersion, when the server version is greater 
 than the client version, the NameNode checks whether the two versions are 
 compatible. If they are not, it throws a VersionIncompatible exception; 
 otherwise it returns the server version.
 On the dfs client side, when creating a NameNode proxy, the client catches 
 the VersionMismatch exception and, when the client version is greater than 
 the server version, checks whether the two versions are compatible. If they 
 are not, it throws a VersionIncompatible exception; otherwise it records the 
 server version and continues.




[jira] Resolved: (HDFS-320) Namenode should return lease recovery request with other requests

2010-08-09 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan resolved HDFS-320.
--

Resolution: Won't Fix

Resolving as Won't Fix.

 Namenode should return lease recovery request with other requests
 -

 Key: HDFS-320
 URL: https://issues.apache.org/jira/browse/HDFS-320
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: HADOOP-5481.patch


 HADOOP-5034 modified the NN to return both replication and deletion requests 
 to the DN in one reply to a heartbeat. However, the lease recovery request is 
 still sent separately by itself. Is there a reason for this? If not, I suggest 
 we combine them. This will make it less confusing when adding new types of 
 requests, which could be combined as well.




[jira] Commented: (HDFS-1330) Make RPCs to DataNodes timeout

2010-08-09 Thread sam rash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896684#action_12896684
 ] 

sam rash commented on HDFS-1330:


+1 lgtm


 Make RPCs to DataNodes timeout
 --

 Key: HDFS-1330
 URL: https://issues.apache.org/jira/browse/HDFS-1330
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: hdfsRpcTimeout.patch


 This jira aims to give client/datanode and datanode/datanode RPCs a timeout 
 of DataNode#socketTimeout.
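The timeout being added is, at bottom, a socket read timeout on the RPC connection. This standalone sketch (plain java.net, not Hadoop RPC) shows the effect: a read against a peer that never responds fails after the configured interval instead of blocking forever. RpcTimeoutSketch and readTimesOut are illustrative names.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class RpcTimeoutSketch {
    // Returns true if the read timed out instead of blocking indefinitely.
    static boolean readTimesOut(int timeoutMillis) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoTimeout(timeoutMillis); // plays the role of DataNode#socketTimeout
            try {
                client.getInputStream().read(); // peer never writes, like a hung DataNode
                return false;
            } catch (SocketTimeoutException expected) {
                return true; // the caller can now fail fast instead of hanging
            }
        }
    }
}
```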




[jira] Updated: (HDFS-1320) Add LOG.isDebugEnabled() guard for each LOG.debug(...)

2010-08-09 Thread Erik Steffl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Steffl updated HDFS-1320:
--

Attachment: HDFS-1320-0.22-1.patch

 Add LOG.isDebugEnabled() guard for each LOG.debug(...)
 

 Key: HDFS-1320
 URL: https://issues.apache.org/jira/browse/HDFS-1320
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Erik Steffl
 Fix For: 0.22.0

 Attachments: HDFS-1320-0.22-1.patch, HDFS-1320-0.22.patch


 Each LOG.debug(...) should be executed only if LOG.isDebugEnabled() is true; 
 in some cases it is expensive to construct the string being printed to the 
 log. It is easier to always use the LOG.isDebugEnabled() guard than to reason 
 in each case about whether it is necessary.
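The guard idiom this issue adds can be illustrated with a small stand-in logger (the Log class below is hypothetical, standing in for commons-logging's Log; the return values exist only to make the cost difference observable):

```java
public class DebugGuard {
    // Minimal stand-in for commons-logging's Log.
    static final class Log {
        final boolean debugEnabled;
        Log(boolean enabled) { debugEnabled = enabled; }
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(String msg) { /* would append msg to the log */ }
    }

    // Without the guard: the argument string is built even when debug is off.
    static int unguarded(Log log, int[] data) {
        log.debug("processing " + java.util.Arrays.toString(data));
        return 1; // one expensive message string was constructed
    }

    // With the guard: string construction is skipped entirely when debug is off.
    static int guarded(Log log, int[] data) {
        if (log.isDebugEnabled()) {
            log.debug("processing " + java.util.Arrays.toString(data));
            return 1;
        }
        return 0;
    }
}
```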




[jira] Updated: (HDFS-1320) Add LOG.isDebugEnabled() guard for each LOG.debug(...)

2010-08-09 Thread Erik Steffl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Steffl updated HDFS-1320:
--

Attachment: HDFS-1320-0.22-2.patch

 Add LOG.isDebugEnabled() guard for each LOG.debug(...)
 

 Key: HDFS-1320
 URL: https://issues.apache.org/jira/browse/HDFS-1320
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Erik Steffl
 Fix For: 0.22.0

 Attachments: HDFS-1320-0.22-1.patch, HDFS-1320-0.22-2.patch, 
 HDFS-1320-0.22.patch


 Each LOG.debug(...) should be executed only if LOG.isDebugEnabled() is true; 
 in some cases it is expensive to construct the string being printed to the 
 log. It is easier to always use the LOG.isDebugEnabled() guard than to reason 
 in each case about whether it is necessary.




[jira] Commented: (HDFS-1320) Add LOG.isDebugEnabled() guard for each LOG.debug(...)

2010-08-09 Thread Erik Steffl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896698#action_12896698
 ] 

Erik Steffl commented on HDFS-1320:
---

Patch HDFS-1320-0.22-2.patch fixes problems mentioned in the review:

1. All files you mentioned and a few others are now patched (I fixed my script 
that searches for calls to .debug() without an isDebugEnabled() guard).

2. BlockPlacementPolicyDefault.java: logr replaced by FSNamesystem.LOG

3. DFSUtil is already removed from both NameNode.java and DataNode.java

4. DFSClient is already removed from BlockTokenIdentifier

5. Removed FileStatus import from DFSOutputStream

6. They are already there; I think I'll leave them for consistency

 Add LOG.isDebugEnabled() guard for each LOG.debug(...)
 

 Key: HDFS-1320
 URL: https://issues.apache.org/jira/browse/HDFS-1320
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Erik Steffl
 Fix For: 0.22.0

 Attachments: HDFS-1320-0.22-1.patch, HDFS-1320-0.22-2.patch, 
 HDFS-1320-0.22.patch


 Each LOG.debug(...) should be executed only if LOG.isDebugEnabled() is true; 
 in some cases it is expensive to construct the string being printed to the 
 log. It is easier to always use the LOG.isDebugEnabled() guard than to reason 
 in each case about whether it is necessary.




[jira] Updated: (HDFS-1337) Unmatched file length makes append fail. Should we retry if a startBlockRecovery() fails?

2010-08-09 Thread Thanh Do (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thanh Do updated HDFS-1337:
---

Description: 
- Component: data node
 
- Version: 0.20-append
 
- Setup:
1) # disks / datanode = 3
2) # failures = 2
3) failure type = crash
4) When/where failure happens = (see below)
 
- Details:
Client writes to dn1-dn2-dn3. The write succeeds; we have blk_X_1001 in all dns.
Now the client tries to append. It first calls dn1.recoverBlock().
This recoverBlock succeeds; we have blk_X_1002 in all dns.
Suppose the pipeline is dn3-dn2-dn1. The client sends a packet to dn3.
dn3 forwards the packet to dn2 and writes it to its own disk.
Now *dn2 crashes*, before dn1 has received this packet.
The client calls dn1.recoverBlock() again, this time with dn3-dn1 in the pipeline.
dn1 then calls dn3.startBlockRecovery(), which terminates the writer thread in 
dn3, gets the *in-memory* metadata info (i.e. a 512-byte length), and verifies 
that info against the real file on disk (i.e. a 1024-byte length), hence the 
exception. (In this case, the block at dn3 is not finalized yet and 
FSDataset.setVisibleLength has not been called, hence its visible in-memory 
length is 512 bytes although its on-disk length is 1024.)
Therefore, from dn1's view, dn3 has some problem.
Now dn1 calls its own startBlockRecovery() successfully (because the on-disk 
file length and the in-memory file length match; both are 512 bytes).
Now:
 + at dn1: blk_X_1003 (length 512)
 + at dn2: blk_X_1002 (length 512)
 + at dn3: blk_X_1002 (length 1024)
 
dn1 also calls NN.commitSync(blk_X_1003, [dn1]), i.e. only dn1 has a good 
replica.
After all this:
- From the NN's point of view, dn1 is the candidate for lease recovery.
- From the client's view, dn1 is the only healthy node in the pipeline
(it knows that from the result returned by recoverBlock).
The client starts sending a packet to dn1; now *dn1 crashes*, hence the append 
fails.

- RE-READ: FAIL
Why? After all, dn1 and dn2 crashed. Only dn3 contains the block, with GS 1002.
But the NN sees blk_X_1003, because dn1 successfully called 
commitBlockSync(blk_X_1003).
Hence, when a reader asks to read the file, the NN gives out blk_X_1003,
and no live dn contains the block with GS 1003.
 
- RE-APPEND with a different client: FAIL
 + The file is under construction, and its holder is A1.
 
- NN.leaseRecovery(): FAIL
 + There is no live target (i.e. dn1, not dn3).
 + Hence, as long as dn1 is not alive and the lease is not recovered, the 
file cannot be appended to.
 + Worse, even if dn3 sends a blockReport to the NN and becomes the target for 
lease recovery, lease recovery fails because:
  1) dn3 has block blk_X_1002, which has a smaller GS
   than the block the NN asks for, and
  2) dn3 cannot contact dn1, which crashed.

This bug was found by our Failure Testing Service framework:
http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-98.html
For questions, please email us: Thanh Do (than...@cs.wisc.edu) and 
Haryadi Gunawi (hary...@eecs.berkeley.edu)

[jira] Created: (HDFS-1337) Unmatched file length makes append fail. Should we retry if a startBlockRecovery() fails?

2010-08-09 Thread Thanh Do (JIRA)
Unmatched file length makes append fail. Should we retry if a 
startBlockRecovery() fails?
-

 Key: HDFS-1337
 URL: https://issues.apache.org/jira/browse/HDFS-1337
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Thanh Do


- Component: data node
 
- Version: 0.20-append
 
- Setup:
1) # disks / datanode = 3
2) # failures = 2
3) failure type = crash
4) When/where failure happens = (see below)
 
- Details:
Client writes to dn1-dn2-dn3. The write succeeds; we have blk_X_1001 in all dns.
Now the client tries to append. It first calls dn1.recoverBlock().
This recoverBlock succeeds; we have blk_X_1002 in all dns.
Suppose the pipeline is dn3-dn2-dn1. The client sends a packet to dn3.
dn3 forwards the packet to dn2 and writes it to its own disk.
Now *dn2 crashes*, before dn1 has received this packet.
The client calls dn1.recoverBlock() again, this time with dn3-dn1 in the pipeline.
dn1 then calls dn3.startBlockRecovery(), which terminates the writer thread in 
dn3, gets the *in-memory* metadata info (i.e. a 512-byte length), and verifies 
that info against the real file on disk (i.e. a 1024-byte length), hence the 
exception. (In this case, the block at dn3 is not finalized yet and 
FSDataset.setVisibleLength has not been called, hence its visible in-memory 
length is 512 bytes although its on-disk length is 1024.)
Therefore, from dn1's view, dn3 has some problem.
Now dn1 calls its own startBlockRecovery() successfully (because the on-disk 
file length and the in-memory file length match; both are 512 bytes).
Now:
 + at dn1: blk_X_1003 (length 512)
 + at dn2: blk_X_1002 (length 512)
 + at dn3: blk_X_1002 (length 1024)
 
dn1 also calls NN.commitSync(blk_X_1003, [dn1]), i.e. only dn1 has a good 
replica.
After all this:
- From the NN's point of view, dn1 is the candidate for lease recovery.
- From the client's view, dn1 is the only healthy node in the pipeline
(it knows that from the result returned by recoverBlock).
The client starts sending a packet to dn1; now *dn1 crashes*, hence the append 
fails.

- RE-READ: FAIL
Why? After all, dn1 and dn2 crashed. Only dn3 contains the block, with GS 1002.
But the NN sees blk_X_1003, because dn1 successfully called 
commitBlockSync(blk_X_1003).
Hence, when a reader asks to read the file, the NN gives out blk_X_1003,
and no live dn contains the block with GS 1003.
 
- RE-APPEND with a different client: FAIL
 + The file is under construction, and its holder is A1.
 
- NN.leaseRecovery(): FAIL
 + There is no live target (i.e. dn1, not dn3).
 + Hence, as long as dn1 is not alive and the lease is not recovered, the 
file cannot be appended to.
 + Worse, even if dn3 sends a blockReport to the NN and becomes the target for 
lease recovery, lease recovery fails because:
  1) dn3 has block blk_X_1002, which has a smaller GS
   than the block the NN asks for, and
  2) dn3 cannot contact dn1, which crashed.




[jira] Updated: (HDFS-397) Incorporate storage directories into EditLogFileInput/Output streams

2010-08-09 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-397:
-

Status: Open  (was: Patch Available)

 Incorporate storage directories into EditLogFileInput/Output streams
 

 Key: HDFS-397
 URL: https://issues.apache.org/jira/browse/HDFS-397
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Luca Telloli
Assignee: Luca Telloli
 Attachments: HADOOP-6001.patch, HADOOP-6001.patch, HADOOP-6001.patch, 
 HDFS-397.patch, HDFS-397.patch







[jira] Commented: (HDFS-1330) Make RPCs to DataNodes timeout

2010-08-09 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896715#action_12896715
 ] 

Hairong Kuang commented on HDFS-1330:
-

Thanks Sam for reviewing the patch.

Test results from a run on my Linux box are posted below:

ant test-patch:
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 3 new or modified 
tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.

Ant test did not succeed. Failed tests included TestBlockRecovery, 
TestHDFSTrash (timeout), and TestBackupNode, but they seem unrelated to this 
patch.

 Make RPCs to DataNodes timeout
 --

 Key: HDFS-1330
 URL: https://issues.apache.org/jira/browse/HDFS-1330
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: hdfsRpcTimeout.patch


 This jira aims to give client/datanode and datanode/datanode RPCs a timeout 
 of DataNode#socketTimeout.




[jira] Updated: (HDFS-1334) open in HftpFileSystem does not add delegation tokens to the url.

2010-08-09 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HDFS-1334:
-

Hadoop Flags: [Reviewed]

+1

 open in HftpFileSystem does not add delegation tokens to the url.
 -

 Key: HDFS-1334
 URL: https://issues.apache.org/jira/browse/HDFS-1334
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1334.1.patch


 The open method in HftpFileSystem uses ByteRangeInputStream for the url 
 connection, but it does not add delegation tokens to the url before passing 
 it to the ByteRangeInputStream, even when security is enabled. Requests 
 therefore fail when security is enabled.
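The shape of the fix can be sketched as below. TokenUrl, addDelegationToken, and the `delegation` query-parameter name are assumptions for illustration, not necessarily the patch's actual code: the token is attached to the url before it ever reaches ByteRangeInputStream.

```java
public class TokenUrl {
    // Append the delegation token to the url before it reaches
    // ByteRangeInputStream; a no-op when security is off or no token exists.
    static String addDelegationToken(String url, String token,
                                     boolean securityEnabled) {
        if (!securityEnabled || token == null) {
            return url;
        }
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "delegation=" + token; // assumed parameter name
    }
}
```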




[jira] Commented: (HDFS-1334) open in HftpFileSystem does not add delegation tokens to the url.

2010-08-09 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896731#action_12896731
 ] 

Jitendra Nath Pandey commented on HDFS-1334:


ant test was run manually. All tests pass except TestHDFSTrash, which is 
unrelated.

test patch results

 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.

 open in HftpFileSystem does not add delegation tokens to the url.
 -

 Key: HDFS-1334
 URL: https://issues.apache.org/jira/browse/HDFS-1334
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1334.1.patch


 The open method in HftpFileSystem uses ByteRangeInputStream for the url 
 connection, but it does not add delegation tokens to the url before passing 
 it to the ByteRangeInputStream, even when security is enabled. Requests 
 therefore fail when security is enabled.




[jira] Commented: (HDFS-1334) open in HftpFileSystem does not add delegation tokens to the url.

2010-08-09 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896732#action_12896732
 ] 

Jitendra Nath Pandey commented on HDFS-1334:


The patch was tested manually on the trunk.

 open in HftpFileSystem does not add delegation tokens to the url.
 -

 Key: HDFS-1334
 URL: https://issues.apache.org/jira/browse/HDFS-1334
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1334.1.patch


 The open method in HftpFileSystem uses ByteRangeInputStream for the url 
 connection, but it does not add delegation tokens to the url before passing 
 it to the ByteRangeInputStream, even when security is enabled. Requests 
 therefore fail when security is enabled.




[jira] Commented: (HDFS-1335) HDFS side of HADOOP-6904: first step towards inter-version communications between dfs client and NameNode

2010-08-09 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12896748#action_12896748
 ] 

Hairong Kuang commented on HDFS-1335:
-

 Do you expect folks to update the implementation of ProtocolCompatible as 
 protocols evolve?
Yes, that's exactly what I expect developers to do.

 Since there's more than one protocol in HDFS, do you expect to add more 
 methods to ProtocolCompatible for each protocol?
This is going to be done on an as-needed basis. For this jira, I intend to 
support only ClientProtocol (client to NameNode) compatibility.

 HDFS side of HADOOP-6904: first step towards inter-version communications 
 between dfs client and NameNode
 -

 Key: HDFS-1335
 URL: https://issues.apache.org/jira/browse/HDFS-1335
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.22.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: hdfsRpcVersion.patch


 The idea is that in getProtocolVersion, when the server version is greater 
 than the client version, the NameNode checks whether the two versions are 
 compatible. If they are not, it throws a VersionIncompatible exception; 
 otherwise it returns the server version.
 On the dfs client side, when creating a NameNode proxy, the client catches 
 the VersionMismatch exception and, when the client version is greater than 
 the server version, checks whether the two versions are compatible. If they 
 are not, it throws a VersionIncompatible exception; otherwise it records the 
 server version and continues.




[jira] Updated: (HDFS-1334) open in HftpFileSystem does not add delegation tokens to the url.

2010-08-09 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1334:
--

Status: Resolved  (was: Patch Available)
Resolution: Fixed

I've committed this.  Thanks, Jitendra.  Resolving as fixed.

 open in HftpFileSystem does not add delegation tokens to the url.
 -

 Key: HDFS-1334
 URL: https://issues.apache.org/jira/browse/HDFS-1334
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1334.1.patch


 The open method in HftpFileSystem uses ByteRangeInputStream for the url 
 connection, but it does not add delegation tokens to the url before passing 
 it to the ByteRangeInputStream, even when security is enabled. Requests 
 therefore fail when security is enabled.
