[jira] [Updated] (HADOOP-8647) BlockCompressionStream won't work with BlockDecompressionStream when there are several writes

2012-08-03 Thread clockfly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clockfly updated HADOOP-8647:
-

Attachment: TestBlockDecompressorStream.java

Unit Test.





[jira] [Created] (HADOOP-8647) BlockCompressionStream won't work with BlockDecompressionStream when there are several writes

2012-08-03 Thread clockfly (JIRA)
clockfly created HADOOP-8647:


 Summary: BlockCompressionStream won't work with 
BlockDecompressionStream when there are several writes
 Key: HADOOP-8647
 URL: https://issues.apache.org/jira/browse/HADOOP-8647
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: clockfly
Priority: Minor
 Attachments: TestBlockDecompressorStream.java

BlockDecompressionStream cannot read compressed data produced by 
BlockCompressionStream when there are multiple writes to the BlockCompressionStream.







[jira] [Resolved] (HADOOP-8441) Build bot timeout is too small

2012-08-03 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar resolved HADOOP-8441.
-

Resolution: Not A Problem

It seems to work fine these days.

 Build bot timeout is too small
 --

 Key: HADOOP-8441
 URL: https://issues.apache.org/jira/browse/HADOOP-8441
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Radim Kolar
Priority: Blocker
  Labels: build-failure, qa

 QA Build bot timeout is set too low. It fails to complete the build in time, 
 and then no results are posted to JIRA.
 See example:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/1040/console





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428110#comment-13428110
 ] 

Daryn Sharp commented on HADOOP-8644:
-

This appears to be regressing {{URLUtils.openConnection}}, which sets connect 
and read timeouts?
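
For reference, a hedged sketch of the timeout handling being discussed (plain 
java.net API; the 60-second value is illustrative, not the actual 
{{URLUtils}} constant):

{code}
URLConnection conn = url.openConnection();
conn.setConnectTimeout(60 * 1000);   // fail fast on connect instead of hanging
conn.setReadTimeout(60 * 1000);      // likewise on stalled reads
{code}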

 AuthenticatedURL should be able to use SSLFactory
 -

 Key: HADOOP-8644
 URL: https://issues.apache.org/jira/browse/HADOOP-8644
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8644.patch


 This is required to enable the use of HTTPS with SPNEGO using Hadoop 
 configured keystores. This is required by HADOOP-8581.





[jira] [Commented] (HADOOP-8630) rename isSingleSwitch() methods in new topo base class to isFlatTopology()

2012-08-03 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428207#comment-13428207
 ] 

Tsuyoshi OZAWA commented on HADOOP-8630:


Steve,

Thank you for your review. Okay, I'll fix it.
Should I send a pull request to your repository or attach the patch to this 
jira?

 rename isSingleSwitch() methods in new topo base class to isFlatTopology()
 --

 Key: HADOOP-8630
 URL: https://issues.apache.org/jira/browse/HADOOP-8630
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-8630.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The new topology logic that is not yet turned on in HDFS uses the method 
 {{isSingleSwitch()}} for implementations to declare whether or not they are 
 single switch. 
 The use of switch is an implementation issue; the big VM-based patch shows 
 that really it's about flat vs hierarchical, with Hadoop assuming that 
 subtrees in the hierarchy have better bandwidth (good) but correlated 
 failures (bad). 
 Renaming the method now -before it's fixed and used- is the time to do it. 





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428225#comment-13428225
 ] 

Alejandro Abdelnur commented on HADOOP-8644:


@Daryn, I don't see how that is possible, as {{URLUtils}} is not using the 
{{AuthenticatedURL}} class.





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428231#comment-13428231
 ] 

Daryn Sharp commented on HADOOP-8644:
-

That particular class/method doesn't necessarily have to be used, but we need 
to carry over the setting of timeouts.





[jira] [Commented] (HADOOP-8441) Build bot timeout is too small

2012-08-03 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428251#comment-13428251
 ] 

Eli Collins commented on HADOOP-8441:
-

What build timeout are you referring to btw?





[jira] [Commented] (HADOOP-8441) Build bot timeout is too small

2012-08-03 Thread Radim Kolar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428259#comment-13428259
 ] 

Radim Kolar commented on HADOOP-8441:
-

Hudson aborts the build if it takes too long, resulting in an ERROR.





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428303#comment-13428303
 ] 

Alejandro Abdelnur commented on HADOOP-8644:


It makes sense; this should be done in all places where AuthenticatedURL is 
being used. But I think it should be done as part of a separate JIRA, as this 
one is not introducing such a regression. Makes sense?





[jira] [Created] (HADOOP-8648) libhadoop: native CRC32 validation crashes when blocksize=1

2012-08-03 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8648:


 Summary: libhadoop:  native CRC32 validation crashes when 
blocksize=1
 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


The native CRC32 code, found in {{pipelined_crc32c}}, crashes when blocksize is 
set to 1.

{code}
12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
#
# JRE version: 6.0_29-b11
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
#
# An error report file with more information is saved as:
# /h/hs_err_pid24100.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#
Aborted
{code}

The Java CRC code works fine in this case.

Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
4-byte checksum for every byte. 
{code}
-rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
-rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
blk_4702510289566780538_1199.meta
{code}

However, obviously crashing is never the right thing to do.
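
A hedged repro sketch (assumes an HDFS filesystem so the native verification 
path is exercised; the path and sizes are illustrative):

{code}
Configuration conf = new Configuration();
conf.setInt("io.bytes.per.checksum", 1);    // a 4-byte CRC per data byte
FileSystem fs = FileSystem.get(conf);
Path p = new Path("/tmp/crc-repro");        // illustrative path
FSDataOutputStream out = fs.create(p);
out.write(new byte[4096]);
out.close();
FSDataInputStream in = fs.open(p);
IOUtils.readFully(in, new byte[4096], 0, 4096);  // native CRC verification SIGSEGVs pre-fix
in.close();
{code}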





[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when blocksize=1

2012-08-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428338#comment-13428338
 ] 

Todd Lipcon commented on HADOOP-8648:
-

You mean checksum chunk size, right? Not blocksize.





[jira] [Updated] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8648:
-

Summary: libhadoop:  native CRC32 validation crashes when 
io.bytes.per.checksum=1  (was: libhadoop:  native CRC32 validation crashes when 
blocksize=1)





[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428345#comment-13428345
 ] 

Colin Patrick McCabe commented on HADOOP-8648:
--

Yeah, I meant to write chunk size ({{io.bytes.per.checksum}}).





[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-03 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Attachment: HADOOP-7967.newapi.3.patch

# Added the provided javadocs patch
# Completely removed {{getDelegationTokens}} methods, even for 23.  Might as 
well find bugs in the upper stack components now rather than later...
# As suggested, {{addDelegationTokens(renewer,null)}} semantically behaves like 
the removed {{getDelegationTokens(renewer)}} (see the sketch after this list)
# Created {{TestFileSystemTokens}} in common and moved most of the applicable 
token fetching tests into it from mapreduce.  Tests are now more comprehensive, 
with greater coverage.
# Removed GETDELEGATIONTOKENS from httpfs since it was removed from webhdfs
# Downgraded visibility of {{FileSystem#getDelegationToken}} to protected
#* caused a lot of test updates
#* exposed a multi-token bug in fetchdt
#* had to update webhdfs token renewal
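
A hedged sketch of the new call pattern from item 3 (the renewer string is 
illustrative):

{code}
Credentials creds = new Credentials();
FileSystem fs = FileSystem.get(conf);
// behaves like the removed getDelegationTokens(renewer), but only acquires
// tokens not already present in creds, across all child filesystems
Token<?>[] newTokens = fs.addDelegationTokens("yarn", creds);
{code}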

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch, 
 hadoop7967-javadoc.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can tell 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also problematic, 
 such as har on viewfs.  When mergeFs is implemented, it too will become a 
 problem with the current implementation.  Currently {{FileSystem}} will leak 
 tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428377#comment-13428377
 ] 

Tom White commented on HADOOP-8581:
---

+1 looks good to me.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428384#comment-13428384
 ] 

Daryn Sharp commented on HADOOP-8644:
-

I suppose it can be a separate jira, but it needs to be a high prio too since 
the timeouts were added to address serious problems.





[jira] [Created] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-03 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-8649:


 Summary: ChecksumFileSystem should have an overriding 
implementation of listStatus(Path, PathFilter)
 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla


Currently, ChecksumFileSystem implements only listStatus(Path). The other 
form, listStatus(Path, PathFilter), is inherited from the parent class 
FileSystem and hence doesn't filter out checksum files.

The implementation should use a composite of the passed filter and the 
checksum filter.
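
A minimal sketch of the proposed override ({{isChecksumFile}} is the existing 
ChecksumFileSystem helper; the composite filter itself is the suggestion here, 
not committed code):

{code}
@Override
public FileStatus[] listStatus(Path f, final PathFilter filter)
    throws IOException {
  return fs.listStatus(f, new PathFilter() {
    @Override
    public boolean accept(Path path) {
      // composite: skip checksum files and apply the caller's filter
      return !isChecksumFile(path) && filter.accept(path);
    }
  });
}
{code}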





[jira] [Updated] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8648:
-

Status: Patch Available  (was: Open)





[jira] [Created] (HADOOP-8650) /bin/hadoop-daemon.sh to add -f timeout arg for forced shutdowns

2012-08-03 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-8650:
--

 Summary: /bin/hadoop-daemon.sh to add -f timeout arg for 
forced shutdowns 
 Key: HADOOP-8650
 URL: https://issues.apache.org/jira/browse/HADOOP-8650
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.2.0-alpha
Reporter: Steve Loughran


Add a timeout for the daemon script to trigger a kill -9 if the clean shutdown 
fails.





[jira] [Commented] (HADOOP-8644) AuthenticatedURL should be able to use SSLFactory

2012-08-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428407#comment-13428407
 ] 

Alejandro Abdelnur commented on HADOOP-8644:


Filed JIRA HDFS-3761. Other than that, are we good with this JIRA?





[jira] [Updated] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8648:
-

Attachment: HADOOP-8648.002.patch

In the test, re-create the DFSClient each time, to make sure the new settings 
take effect.  The value of {{DFS_BYTES_PER_CHECKSUM_KEY}} is cached in {{DFSClient}}.





[jira] [Commented] (HADOOP-8650) /bin/hadoop-daemon.sh to add -f timeout arg for forced shutdowns

2012-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428411#comment-13428411
 ] 

Steve Loughran commented on HADOOP-8650:


In HA environments, and other situations, you may want to forcibly shut down a 
Hadoop service, even if it is hung. Currently, hadoop-daemon.sh sends a normal 
SIGTERM signal, one that the process picks up and reacts to.


If the process is completely hung, it is possible that this signal is not acted 
on, so the process stays up. The only way to deal with this is by waiting a 
while, finding the pid and kill -9'ing it. This must be done by hand, or in an 
external script. The latter is brittle to changes in HADOOP_PID_DIR values, and 
requires everyone writing such scripts to code and test them themselves.

To replicate this: 
 # start a daemon: {{hadoop-daemon.sh start namenode}}
 # issue a {{kill -STOP pid}} to its PID
 # try to stop the daemon via the {{hadoop-daemon.sh stop namenode}} command.
 # observe that the NN process remains present.

We could extend hadoop-daemon.sh to support a -f timeout argument, which 
provides a timeout after which the process must have terminated, else a kill -9 
signal is issued.





[jira] [Commented] (HADOOP-8630) rename isSingleSwitch() methods in new topo base class to isFlatTopology()

2012-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428414#comment-13428414
 ] 

Steve Loughran commented on HADOOP-8630:


A patch plus a pull is best. Note that that specific branch on github includes 
a place in DFS where it's actually used; that bit isn't part of the patch for 
common, so two patches need to be created from the source tree.





[jira] [Assigned] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter)

2012-08-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8649:


Assignee: Karthik Kambatla





[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428509#comment-13428509
 ] 

Hadoop QA commented on HADOOP-8648:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539101/HADOOP-8648.002.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.util.TestDataChecksum
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1248//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1248//console

This message is automatically generated.





[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428514#comment-13428514
 ] 

Todd Lipcon commented on HADOOP-8648:
-

{code}
+  if (unlikely((uintptr_t)(sums_addr + sums_offset) & 0x3)) {
+    char buf[256];
+    snprintf(buf, sizeof(buf), "sums_addr + sums_offset must be aligned "
+             "to a 4-byte boundary!  sums_addr = %p, sums_offset = %d.",
+             sums_addr, sums_offset);
+    THROW(env, "java/lang/IllegalArgumentException", buf);
+    return;
+  }
{code}

Is that true? I don't think we always meet this requirement. I thought the 
native code worked even with unaligned pointers.



- Can you add a direct unit test to TestDataChecksum for these cases? I don't 
think testing this from HDFS is necessarily the best way when you could trigger 
the condition explicitly in the checksum code.
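
Something like the following, perhaps (a hedged sketch against the 
{{DataChecksum}} API, mirroring the existing TestDataChecksum setup; sizes and 
the file name are illustrative):

{code}
DataChecksum sum = DataChecksum.newDataChecksum(
    DataChecksum.CHECKSUM_CRC32C, 1);               // 1 byte per checksum chunk
byte[] data = new byte[64];
new Random().nextBytes(data);
ByteBuffer dataBuf = ByteBuffer.wrap(data);
ByteBuffer sumsBuf =
    ByteBuffer.allocate(data.length * sum.getChecksumSize());
sum.calculateChunkedSums(dataBuf, sumsBuf);
sum.verifyChunkedSums(dataBuf, sumsBuf, "test", 0); // must not crash in native code
{code}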

I'm also a little confused by the changes in the C code. eg:
{code}
- *   block_size : The size of each block in bytes.
+ *   block_size : The size of each block in bytes.  Must be = 8
{code}

Why is that the case? It seems like the code path should still work fine there, 
just setting counter = 0 and skipping the initial while loop.





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428537#comment-13428537
 ] 

Hadoop QA commented on HADOOP-7967:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12539093/HADOOP-7967.newapi.3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 16 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.hdfs.TestDFSClientRetries

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1249//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1249//console

This message is automatically generated.





[jira] [Created] (HADOOP-8651) Error reading task output Server returned HTTP response code: 400 for URL: http://hadoop03:8080/tasklog?plaintext=true&attemptid=attempt_1344047400780_0002_m_000000_0&filter=stdout

2012-08-03 Thread jiafeng.zhang (JIRA)
jiafeng.zhang created HADOOP-8651:
-

 Summary:  Error reading task output Server returned HTTP response 
code: 400 for URL: 
http://hadoop03:8080/tasklog?plaintext=true&attemptid=attempt_1344047400780_0002_m_000000_0&filter=stdout
 Key: HADOOP-8651
 URL: https://issues.apache.org/jira/browse/HADOOP-8651
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1
 Environment: hadoop-0.23.1 JDK_1.6.0_31
Centos-6.0

Reporter: jiafeng.zhang
 Fix For: 0.23.1


bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.1.jar 
teragen 100 /in_test
12/08/04 11:01:47 WARN conf.Configuration: fs.default.name is deprecated. 
Instead, use fs.defaultFS
12/08/04 11:01:47 WARN conf.Configuration: mapred.used.genericoptionsparser is 
deprecated. Instead, use mapreduce.client.genericoptionsparser.used
12/08/04 11:01:49 INFO terasort.TeraSort: Generating 100 using 2
12/08/04 11:01:50 INFO mapreduce.JobSubmitter: number of splits:2
12/08/04 11:01:52 INFO mapred.ResourceMgrDelegate: Submitted application 
application_1344047400780_0002 to ResourceManager at 
hadoop01/192.168.37.101:8032
12/08/04 11:01:52 INFO mapreduce.Job: The url to track the job: 
http://hadoop01:50030/proxy/application_1344047400780_0002/
12/08/04 11:01:52 INFO mapreduce.Job: Running job: job_1344047400780_0002
12/08/04 11:02:11 INFO mapreduce.Job: Job job_1344047400780_0002 running in 
uber mode : false
12/08/04 11:02:11 INFO mapreduce.Job:  map 0% reduce 0%
12/08/04 11:02:19 INFO mapreduce.Job: Task Id : 
attempt_1344047400780_0002_m_000000_0, Status : FAILED
12/08/04 11:02:20 WARN mapreduce.Job: Error reading task output Server returned 
HTTP response code: 400 for URL: 
http://hadoop03:8080/tasklog?plaintext=true&attemptid=attempt_1344047400780_0002_m_000000_0&filter=stdout
12/08/04 11:02:20 WARN mapreduce.Job: Error reading task output Server returned 
HTTP response code: 400 for URL: 
http://hadoop03:8080/tasklog?plaintext=true&attemptid=attempt_1344047400780_0002_m_000000_0&filter=stderr
12/08/04 11:02:25 INFO mapreduce.Job:  map 9% reduce 0%
12/08/04 11:02:30 INFO mapreduce.Job:  map 13% reduce 0%
12/08/04 11:02:33 INFO mapreduce.Job:  map 15% reduce 0%
12/08/04 11:02:40 INFO mapreduce.Job:  map 17% reduce 0%
12/08/04 11:02:46 INFO mapreduce.Job:  map 18% reduce 0%
12/08/04 11:02:52 INFO mapreduce.Job:  map 25% reduce 0%
12/08/04 11:02:56 INFO mapreduce.Job:  map 29% reduce 0%
12/08/04 11:03:01 INFO mapreduce.Job:  map 31% reduce 0%
12/08/04 11:03:08 INFO mapreduce.Job:  map 34% reduce 0%
12/08/04 11:03:11 INFO mapreduce.Job:  map 38% reduce 0%
12/08/04 11:03:14 INFO mapreduce.Job:  map 42% reduce 0%
12/08/04 11:03:15 INFO mapreduce.Job:  map 46% reduce 0%
12/08/04 11:03:17 INFO mapreduce.Job:  map 51% reduce 0%
12/08/04 11:03:18 INFO mapreduce.Job:  map 55% reduce 0%
12/08/04 11:03:20 INFO mapreduce.Job:  map 56% reduce 0%
12/08/04 11:03:24 INFO mapreduce.Job:  map 58% reduce 0%
12/08/04 11:03:25 INFO mapreduce.Job:  map 59% reduce 0%
12/08/04 11:03:26 INFO mapreduce.Job:  map 62% reduce 0%
12/08/04 11:03:28 INFO mapreduce.Job:  map 67% reduce 0%
12/08/04 11:03:29 INFO mapreduce.Job:  map 71% reduce 0%
12/08/04 11:03:32 INFO mapreduce.Job:  map 73% reduce 0%
12/08/04 11:03:33 INFO mapreduce.Job:  map 74% reduce 0%
12/08/04 11:03:35 INFO mapreduce.Job:  map 76% reduce 0%
12/08/04 11:03:36 INFO mapreduce.Job:  map 78% reduce 0%
12/08/04 11:03:38 INFO mapreduce.Job:  map 79% reduce 0%
12/08/04 11:03:39 INFO mapreduce.Job:  map 81% reduce 0%
12/08/04 11:03:41 INFO mapreduce.Job:  map 84% reduce 0%
12/08/04 11:03:44 INFO mapreduce.Job:  map 87% reduce 0%
12/08/04 11:03:48 INFO mapreduce.Job:  map 90% reduce 0%
12/08/04 11:03:51 INFO mapreduce.Job:  map 100% reduce 0%
12/08/04 11:03:52 INFO mapreduce.Job: Job job_1344047400780_0002 completed 
successfully
12/08/04 11:03:52 INFO mapreduce.Job: Counters: 28
File System Counters
FILE: Number of bytes read=240
FILE: Number of bytes written=118412
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=167
HDFS: Number of bytes written=1
HDFS: Number of read operations=8
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters 
Failed map tasks=1
Launched map tasks=3
Other local map tasks=3
Total time spent by all maps in occupied slots (ms)=193607
Map-Reduce Framework
Map input records=100
Map output records=100
Input split bytes=167
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0

[jira] [Resolved] (HADOOP-8651) Error reading task output Server returned HTTP response code: 400 for URL: http://hadoop03:8080/tasklog?plaintext=true&attemptid=attempt_1344047400780_0002_m_000000_0

2012-08-03 Thread jiafeng.zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiafeng.zhang resolved HADOOP-8651.
---

Resolution: Invalid
