[jira] [Updated] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8395:


  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

 Text shell command unnecessarily demands that a SequenceFile's key class be 
 WritableComparable
 --

 Key: HADOOP-8395
 URL: https://issues.apache.org/jira/browse/HADOOP-8395
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
  Labels: shell
 Fix For: 3.0.0

 Attachments: HADOOP-8395.patch


 The Text command from the Display set of shell commands (hadoop fs -text) has a 
 strict check that the key class loaded from a SequenceFile's header must be a 
 subclass of WritableComparable.
 The SequenceFile writer itself has no such check (one can create sequence files 
 with plain Writable keys; Comparable is needed only by the SequenceFile sorter, 
 which not every file uses), so it is not reasonable for the Text command to 
 require it either.
 We should relax the check and simply require Writable, not 
 WritableComparable.
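
For illustration only, a minimal sketch of the kind of check being relaxed; the helper class and method below are assumptions, not the attached Display.java patch:

{code}
import java.io.IOException;

import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;

// Hypothetical helper, not the actual Display.java change; it only
// illustrates relaxing the key-class check described above.
class KeyClassCheck {
  static void checkKeyClass(SequenceFile.Reader reader) throws IOException {
    Class<?> keyClass = reader.getKeyClass();
    // Old, overly strict check (rejects plain-Writable keys):
    //   if (!WritableComparable.class.isAssignableFrom(keyClass)) { throw ... }
    // Relaxed check: Writable is enough for "hadoop fs -text", since only the
    // SequenceFile sorter actually needs keys to be comparable.
    if (!Writable.class.isAssignableFrom(keyClass)) {
      throw new IOException(keyClass.getName() + " is not a Writable");
    }
  }
}
{code}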

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273855#comment-13273855
 ] 

Hudson commented on HADOOP-8395:


Integrated in Hadoop-Common-trunk-Commit #2237 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2237/])
HADOOP-8395. Text shell command unnecessarily demands that a SequenceFile's 
key class be WritableComparable (harsh) (Revision 1337449)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337449
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java






[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273857#comment-13273857
 ] 

Hudson commented on HADOOP-8395:


Integrated in Hadoop-Hdfs-trunk-Commit #2311 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2311/])
HADOOP-8395. Text shell command unnecessarily demands that a SequenceFile's 
key class be WritableComparable (harsh) (Revision 1337449)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337449
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java






[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273859#comment-13273859
 ] 

Harsh J commented on HADOOP-8395:
-

Committed to trunk.





[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273869#comment-13273869
 ] 

Hudson commented on HADOOP-8395:


Integrated in Hadoop-Mapreduce-trunk-Commit #2254 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2254/])
HADOOP-8395. Text shell command unnecessarily demands that a SequenceFile's 
key class be WritableComparable (harsh) (Revision 1337449)

 Result = ABORTED
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337449
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java






[jira] [Commented] (HADOOP-8396) DataStreamer, OutOfMemoryError, unable to create new native thread

2012-05-12 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273873#comment-13273873
 ] 

Catalin Alexandru Zamfir commented on HADOOP-8396:
--

I monitored the code with htop. Over time it grew from "Tasks: 35, 147 thr, 1 
running" to "Tasks: 36, 3475 thr, 1 running". Once I killed the process, the 
thread count dropped back to the same 147. So there seems to be a direct 
relation: the application allocates a large number of (I guess) native threads 
but does not kill them when it is done with them.

Also, I'm calling Runtime.getRuntime().gc() every time the count of open 
streams is bigger than 5, and I'm explicitly flushing and closing these 
streams before removing them and before running the getRuntime().gc() method. 
I'm not using any specific GC strategy, just the default one defined by 
openjdk-6. There are no other GC parameters on the command line when the 
program is run.

I'll be profiling the code today using jmap, and will post the results here.

The output of cat /proc/sys/kernel/threads-max is: 48056
Still, the code only gets to about 3,000-5,000 thr (threads) in htop, reaches 
about 500 million written records (or a few million less) and dies.

Here's a dump of jmap -histo:live pid:

 num     #instances         #bytes  class name
----------------------------------------------
   1:       1303640       96984920  [B
   2:        976162       69580696  [C
   3:        648949       31149552  java.nio.HeapByteBuffer
   4:        647505       31080240  java.nio.HeapCharBuffer
   5:        533222       12797328  java.util.HashMap$Entry
   6:        481595       11558280  java.lang.String
   7:          8086        4556064  [I
   8:         29805        3901240  constMethodKlass
   9:        177060        2832960  java.lang.Long
  10:         29805        2388304  methodKlass
  11:         58863        2354520  sun.misc.FloatingDecimal
  12:             1        2097168  [Lorg.h2.util.CacheObject;
  13:         50674        2041760  symbolKlass


Using watch with jmap -histo:live to see reactions, I get this on the CLI every 
few million records:
OpenJDK Server VM warning: GC locker is held; pre-dump GC was skipped

Also, I see the [B class name alternating between 20MB and 90MB, with both 
figures growing constantly over time.
Also, after running the code for 15 minutes, Eden space and PS Old 
Generation started growing like crazy. Eden space started with an acceptable 
25MB, while PS Old Generation was also something small (10-25MB, can't remember).

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize  = 792723456 (756.0MB)
   NewSize  = 1048576 (1.0MB)
   MaxNewSize   = 4294901760 (4095.9375MB)
   OldSize  = 4194304 (4.0MB)
   NewRatio = 2
   SurvivorRatio= 8
   PermSize = 16777216 (16.0MB)
   MaxPermSize  = 134217728 (128.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 260177920 (248.125MB)
   used = 104665056 (99.81637573242188MB)
   free = 155512864 (148.30862426757812MB)
   40.22826225991814% used
From Space:
   capacity = 1441792 (1.375MB)
   used = 1409864 (1.3445510864257812MB)
   free = 31928 (0.03044891357421875MB)
   97.78553355823864% used
To Space:
   capacity = 1966080 (1.875MB)
   used = 0 (0.0MB)
   free = 1966080 (1.875MB)
   0.0% used
PS Old Generation
   capacity = 528482304 (504.0MB)
   used = 31693784 (30.225547790527344MB)
   free = 496788520 (473.77445220947266MB)
   5.9971324981205045% used
PS Perm Generation
   capacity = 16777216 (16.0MB)
   used = 13510752 (12.884857177734375MB)
   free = 3266464 (3.115142822265625MB)
   80.53035736083984% used

Hope this ton of information helps you.

 DataStreamer, OutOfMemoryError, unable to create new native thread
 --

 Key: HADOOP-8396
 URL: https://issues.apache.org/jira/browse/HADOOP-8396
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.2
 Environment: Ubuntu 64bit, 4GB of RAM, Core Duo processors, commodity 
 hardware.
Reporter: Catalin Alexandru Zamfir
Priority: Blocker
  Labels: DataStreamer, I/O, OutOfMemoryError, ResponseProcessor, 
 hadoop,, leak, memory, rpc,

 We're trying to write about a few billion records via Avro, when we got 
 this error, which is unrelated to our code:
 10725984 [Main] INFO net.gameloft.RnD.Hadoop.App - ## At: 2:58:43.290 # 
 Written: 52100 records
 Exception in thread DataStreamer for file 
 /Streams/Cubed/Stuff/objGame/aRandomGame/objType/aRandomType/2012/05/11/20/29/Shard.avro
  block blk_3254486396346586049_75838 java.lang.OutOfMemoryError: unable to 
 create new native thread
 at java.lang.Thread.start0(Native 

[jira] [Commented] (HADOOP-8396) DataStreamer, OutOfMemoryError, unable to create new native thread

2012-05-12 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273881#comment-13273881
 ] 

Catalin Alexandru Zamfir commented on HADOOP-8396:
--

Reading this article: http://blog.egilh.com/2006/06/2811aspx.html, and given 
that the latest JVM allocates about 1MB per thread, the 3,000-4,000 threads are 
consistent with the node I'm running this on, which has about 3GB available 
for user space. In theory I could reduce the stack size for native threads via 
-Xss, but that would only increase the number of threads without actually 
resolving the problem. I think the problem is that Hadoop should let go of 
native threads that have already written their data to HDFS. And I've checked: 
after writing a few million records, executing a reader class on that data 
returns the data, meaning Hadoop did write it to HDFS; yet in htop the number 
of threads and the memory for this code kept increasing, and only once writing 
started. We're writing from one single thread (main).

Hadoop should let go of native threads, or instruct the JVM to release them, 
once it knows the corresponding data has been written.


[jira] [Commented] (HADOOP-8396) DataStreamer, OutOfMemoryError, unable to create new native thread

2012-05-12 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273883#comment-13273883
 ] 

Uma Maheswara Rao G commented on HADOOP-8396:
-

{quote}
And I've checked: after writing a few million records, executing a reader 
class on that data returns the data, meaning Hadoop did write it to HDFS; yet 
in htop the number of threads and the memory for this code kept increasing, 
and only once writing started. We're writing from one single thread (main).
{quote}
Are you not closing the file from the application once your data write is 
complete? When you open a stream to DFS, it is the user who writes the data on 
that stream, and whatever the user writes, the DataStreamer writes to the DNs. 
Only the user knows whether the stream can be closed or not; the Hadoop client 
cannot assume that the user will not write any more data, because as long as 
the stream is still open more data may arrive. Once the stream is closed, the 
streamer threads will exit automatically. I am not sure whether these are the 
lines you mean or some others; please correct me if I have misunderstood your 
point.


[jira] [Commented] (HADOOP-8396) DataStreamer, OutOfMemoryError, unable to create new native thread

2012-05-12 Thread Catalin Alexandru Zamfir (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273905#comment-13273905
 ] 

Catalin Alexandru Zamfir commented on HADOOP-8396:
--

Mark it as invalid please. We've found the culprit: the base writer was opening 
files but not closing them. It assumed that FileSystem.create(path, false) 
would append, which was a misinterpretation of the docs. We found 
FileSystem.append, which does exactly what must be done when a file already 
exists. We were explicitly flushing and closing the streams. When we added a 
check for whether the file existed and opened the FSDataOutputStream via 
append, the number of threads and the consumed memory stayed well between 167 
and 247 whenever our flushing-and-closing scheme kicked in.
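
A minimal sketch of that pattern (the class and method names are assumptions, not our actual writer code):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: append to an existing file instead of calling
// create() again, and always close the stream so the client can release
// its DataStreamer/ResponseProcessor threads.
class ShardWriter {
  static void writeRecord(FileSystem fs, Path shard, byte[] record)
      throws IOException {
    FSDataOutputStream out = fs.exists(shard)
        ? fs.append(shard)          // reopen for append if the file exists
        : fs.create(shard, false);  // create it once, never overwrite
    try {
      out.write(record);
      out.flush();
    } finally {
      out.close();                  // lets the client tear down its threads
    }
  }
}
{code}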

 

[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273909#comment-13273909
 ] 

Harsh J commented on HADOOP-8323:
-

Any further comments on the test and javadocs? If not, given that it is a 
trivial, non-breaking addition, I shall commit it by Monday.

 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it, as it has 
 caused a performance regression (for scenarios where Text is reused, which is 
 popular in MR).
 The clear() works as intended, since the API also offers a current-length 
 accessor.
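
As an illustration of that point (a sketch for illustration, not code from the patch):

{code}
import org.apache.hadoop.io.Text;

// clear() only resets the logical length so the backing byte array can be
// reused; callers should consult getLength(), not the raw getBytes() size.
public class TextReuseExample {
  public static void main(String[] args) {
    Text t = new Text("a fairly long value");
    t.clear();                                     // no buffer reallocation
    t.set("next");                                 // reuses the backing array
    System.out.println(t.getLength());             // 4 valid bytes
    System.out.println(t.getBytes().length >= 4);  // true: buffer may be larger
  }
}
{code}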

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2012-05-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273911#comment-13273911
 ] 

Harsh J commented on HADOOP-1381:
-

Any further comments on the addition?

 The distance between sync blocks in SequenceFiles should be configurable 
 rather than hard coded to 2000 bytes
 -

 Key: HADOOP-1381
 URL: https://issues.apache.org/jira/browse/HADOOP-1381
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.22.0
Reporter: Owen O'Malley
Assignee: Harsh J
 Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
 HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
 HADOOP-1381.r5.diff


 Currently SequenceFiles insert sync blocks every 2000 bytes. It would be much 
 better if this were configurable, with a much higher default (1 MB or so?).
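
For a rough idea of the shape of the change, a purely hypothetical sketch; the property name and default below are assumptions, not necessarily what the attached patches introduce:

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical: read a configurable sync interval with a ~1 MB default.
// The key "io.seqfile.sync.interval" is assumed for illustration only.
class SyncIntervalExample {
  static int syncInterval(Configuration conf) {
    return conf.getInt("io.seqfile.sync.interval", 1024 * 1024);
  }
}
{code}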

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8396) DataStreamer, OutOfMemoryError, unable to create new native thread

2012-05-12 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HADOOP-8396.
-

Resolution: Invalid

Marking it as Invalid.

 DataStreamer, OutOfMemoryError, unable to create new native thread
 --

 Key: HADOOP-8396
 URL: https://issues.apache.org/jira/browse/HADOOP-8396
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.2
 Environment: Ubuntu 64bit, 4GB of RAM, Core Duo processors, commodity 
 hardware.
Reporter: Catalin Alexandru Zamfir
Priority: Blocker
  Labels: DataStreamer, I/O, OutOfMemoryError, ResponseProcessor, 
 hadoop,, leak, memory, rpc,

 We're trying to write about a few billion records via Avro, when we got 
 this error, which is unrelated to our code:
 10725984 [Main] INFO net.gameloft.RnD.Hadoop.App - ## At: 2:58:43.290 # 
 Written: 52100 records
 Exception in thread DataStreamer for file 
 /Streams/Cubed/Stuff/objGame/aRandomGame/objType/aRandomType/2012/05/11/20/29/Shard.avro
  block blk_3254486396346586049_75838 java.lang.OutOfMemoryError: unable to 
 create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:657)
 at 
 org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:612)
 at 
 org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
 at org.apache.hadoop.ipc.Client.call(Client.java:1046)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
 at $Proxy8.getProtocolVersion(Unknown Source)
 at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
 at 
 org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:160)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3117)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)
 10746169 [Main] INFO net.gameloft.RnD.Hadoop.App - ## At: 2:59:03.474 # 
 Written: 52200 records
 Exception in thread ResponseProcessor for block 
 blk_4201760269657070412_73948 java.lang.OutOfMemoryError
 at sun.misc.Unsafe.allocateMemory(Native Method)
 at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:117)
 at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:305)
 at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:75)
 at sun.nio.ch.IOUtil.read(IOUtil.java:223)
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
 at 
 org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
 at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
 at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
 at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
 at java.io.DataInputStream.readFully(DataInputStream.java:195)
 at java.io.DataInputStream.readLong(DataInputStream.java:416)
 at 
 org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2964)
 #
 # There is insufficient memory for the Java Runtime Environment to continue.
 # Native memory allocation (malloc) failed to allocate 32 bytes for intptr_t 
 in 
 /build/buildd/openjdk-6-6b23~pre11/build/openjdk/hotspot/src/share/vm/runtime/deoptimization.cpp
 [thread 1587264368 also had an error]
 [thread 309168 also had an error]
 [thread 1820371824 also had an error]
 [thread 1343454064 also had an error]
 [thread 1345444720 also had an error]
 # An error report file with more information is saved as:
 # [thread 1345444720 also had an error]
 [thread -1091290256 also had an error]
 [thread 678165360 also had an error]
 [thread 678497136 also had an error]
 [thread 675511152 also had an error]
 [thread 1385937776 also had an error]
 [thread 911969136 also had an error]
 [thread -1086207120 also had an error]
 [thread -1088251024 also had an error]
 [thread -1088914576 also had an error]
 [thread -1086870672 also had an error]
 [thread 441797488 also had an error][thread 445778800 also had an error]
 [thread 440400752 also had an error]
 [thread 444119920 also had an error][thread 1151298416 also had an error]
 [thread 443124592 also had an error]
 [thread 1152625520 also had an error]
 [thread 913628016 also had an error]
 [thread -1095345296 also had an error][thread 

[jira] [Updated] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6801:


Attachment: HADOOP-6801.r3.diff

Updated patch.

- Improved docs.
- Added more javadoc links to the right places.
- Added tests for the deprecated and the new properties, for a graceful change.

 io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
 still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
 ---

 Key: HADOOP-6801
 URL: https://issues.apache.org/jira/browse/HADOOP-6801
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Erik Steffl
Assignee: Harsh J
Priority: Minor
 Attachments: HADOOP-6801.r1.diff, HADOOP-6801.r2.diff, 
 HADOOP-6801.r3.diff


 Following configuration keys in CommonConfigurationKeysPublic.java (former 
 CommonConfigurationKeys.java):
 public static final String  IO_SORT_MB_KEY = "io.sort.mb";
 public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
 are partially moved:
   - they were renamed to mapreduce.task.io.sort.mb and 
 mapreduce.task.io.sort.factor respectively
   - they were moved to mapreduce project, documented in mapred-default.xml
 However:
   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
 above
   - strings "io.sort.mb" and "io.sort.factor" are used in SequenceFile.java 
 in Hadoop Common project
 Not sure what the solution is, these constants should probably be removed 
 from CommonConfigurationKeysPublic.java but I am not sure what's the best 
 solution for SequenceFile.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8297) Writable javadocs don't carry default constructor

2012-05-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273964#comment-13273964
 ] 

Harsh J commented on HADOOP-8297:
-

Given that this is a trivial patch that just fixes javadocs for developers, if 
no one has further comments on the changes introduced here, I will commit it 
by Monday.

 Writable javadocs don't carry default constructor
 -

 Key: HADOOP-8297
 URL: https://issues.apache.org/jira/browse/HADOOP-8297
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8297.patch


 The Writable API docs have a custom Writable example, but it doesn't carry a 
 default constructor in it. A default constructor is apparently required, and 
 hence the example ought to carry one for the benefit of the reader/paster.
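
For reference, an illustrative custom Writable showing the missing piece (a hypothetical class, not the snippet from the javadocs themselves):

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// The no-arg constructor is what lets the framework instantiate the class
// reflectively before calling readFields() on it.
public class MyPairWritable implements Writable {
  private int left;
  private int right;

  public MyPairWritable() { }                // required default constructor

  public MyPairWritable(int left, int right) {
    this.left = left;
    this.right = right;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(left);
    out.writeInt(right);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    left = in.readInt();
    right = in.readInt();
  }
}
{code}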

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273966#comment-13273966
 ] 

Harsh J commented on HADOOP-8358:
-

Any further comments on the patch? It's quite a trivial change and helps remove 
unnecessary WARN noise.

 Config-related WARN for dfs.web.ugi can be avoided.
 ---

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.
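
As a sketch of the direction (the "dr.who" fallback is an assumption, and this is not the actual StaticUserWebFilter change):

{code}
import org.apache.hadoop.conf.Configuration;

// Read the non-deprecated key the WARN points at, rather than dfs.web.ugi.
class StaticWebUserExample {
  static String staticWebUser(Configuration conf) {
    return conf.get("hadoop.http.staticuser.user", "dr.who");
  }
}
{code}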

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273967#comment-13273967
 ] 

Hadoop QA commented on HADOOP-6801:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526628/HADOOP-6801.r3.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javac.  The applied patch generated 1938 javac compiler warnings (more 
than the trunk's current 1934 warnings).

-1 javadoc.  The javadoc tool appears to have generated 8 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/989//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/989//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/989//console

This message is automatically generated.





[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273970#comment-13273970
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Hdfs-0.23-Build #254 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/254/])
svn merge -c 1332775 FIXES: HADOOP-8317. Update maven-assembly-plugin to 
2.3 - fix build on FreeBSD (Radim Kolar via bobby) (Revision 1337209)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337209
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-project/pom.xml
* /hadoop/common/branches/branch-0.23/pom.xml


 Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
 --

 Key: HADOOP-8317
 URL: https://issues.apache.org/jira/browse/HADOOP-8317
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3, 2.0.0
 Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
Reporter: Radim Kolar
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: assembly-plugin-update.txt


 There is a bug in the hadoop assembly plugin which makes builds fail on FreeBSD 
 because its chmod does not understand non-standard Linux parameters. Unless you 
 do mvn clean before every build, it fails with:
 [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
 [WARNING] The following patterns were never triggered in this artifact 
 exclusion filter:
 o  'org.apache.ant:*:jar'
 o  'jdiff:jdiff:jar'
 [INFO] Copying files to 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
 [WARNING] ---
 [WARNING] Standard error:
 [WARNING] ---
 [WARNING] 
 [WARNING] ---
 [WARNING] Standard output:
 [WARNING] ---
 [WARNING] chmod: 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
  Inappropriate file type or format
 [WARNING] ---
 mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
 projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
 sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6801:


Attachment: HADOOP-6801.r4.diff

The javac warnings were caused by the graceful config-deprecation code that was 
added, which refers to the deprecated keys a few times. In this new patch I've 
suppressed the warnings on that particular constructor for now.
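
For context, the kind of mapping such graceful-deprecation code sets up (a sketch; where and how the patch actually registers it may differ):

{code}
import org.apache.hadoop.conf.Configuration;

// Map the old keys to their mapreduce.task.* replacements so configs using
// the deprecated names keep working, at the cost of deprecation warnings.
class SortKeyDeprecations {
  static void register() {
    Configuration.addDeprecation("io.sort.mb",
        new String[] { "mapreduce.task.io.sort.mb" });
    Configuration.addDeprecation("io.sort.factor",
        new String[] { "mapreduce.task.io.sort.factor" });
  }
}
{code}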





[jira] [Updated] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6801:


Attachment: HADOOP-6801.r5.diff

The further warnings were caused by a bad link to SequenceFile.Sorter (an inner 
class):
{code}
[WARNING] /Users/harshchouraria/Work/code/apache/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:142: warning - Tag @link: reference not found: SequenceFile.Sorter
{code}
In this new patch I've fixed both the javac warnings and the javadoc warnings.





[jira] [Commented] (HADOOP-8316) Audit logging should be disabled by default

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273983#comment-13273983
 ] 

Hudson commented on HADOOP-8316:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
HADOOP-8316. Audit logging should be disabled by default. Contributed by 
Eli Collins (Revision 1337334)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337334
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/log4j.properties


 Audit logging should be disabled by default
 ---

 Key: HADOOP-8316
 URL: https://issues.apache.org/jira/browse/HADOOP-8316
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hadoop-8316.txt


 HADOOP-7633 made hdfs, mr and security audit logging on by default (INFO 
 level) in the log4j.properties used for the packages; this then got copied over 
 to the non-packaging log4j.properties in HADOOP-8216 (which made them 
 consistent).
 Seems like we should keep the v1.x setting, which is disabled (WARNING 
 level) by default. There's a performance overhead to audit logging, and 
 HADOOP-7633 provided no rationale (just "We should add the audit logs as 
 part of default confs") as to why they were enabled for the packages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273982#comment-13273982
 ] 

Hudson commented on HADOOP-8353:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
HADOOP-8353. hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop. 
Contributed by Roman Shaposhnik. (Revision 1337251)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/bin/yarn-daemon.sh


 hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
 -

 Key: HADOOP-8353
 URL: https://issues.apache.org/jira/browse/HADOOP-8353
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8353-2.patch.txt, HADOOP-8353.patch.txt


 The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
 There's a time delay between when the action is called and when the process 
 actually exits. This can be misleading to the callers of the *-daemon.sh 
 scripts, since they expect the stop action to return when the process has 
 actually stopped.
 I suggest we augment the stop action with a time-delayed check of the process 
 status and a SIGKILL once the delay has expired.
 I understand that sending SIGKILL is a measure of last resort and is 
 generally frowned upon among init.d script writers, but the excuse we have 
 for Hadoop is that it is engineered to be a fault-tolerant system, and thus 
 there is no danger of putting the system into an inconsistent state with a 
 violent SIGKILL. Of course, the time delay will be long enough to make the 
 SIGKILL event a rare condition.
 Finally, there's always the option of an exponential back-off type of solution 
 if we decide that the SIGKILL timeout is too short.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273984#comment-13273984
 ] 

Hudson commented on HADOOP-8366:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
Move HADOOP-8285 and HADOOP-8366 to 2.0.0 in CHANGES.txt. (Revision 1337431)
HADOOP-8366 Use ProtoBuf for RpcResponseHeader (sanjay radia) (Revision 1337283)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337431
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337283
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Status.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcPayloadHeader.proto


 Use ProtoBuf for RpcResponseHeader
 --

 Key: HADOOP-8366
 URL: https://issues.apache.org/jira/browse/HADOOP-8366
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Blocker
 Fix For: 2.0.0

 Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
 hadoop-8366-3.patch, hadoop-8366-4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8285) Use ProtoBuf for RpcPayLoadHeader

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273985#comment-13273985
 ] 

Hudson commented on HADOOP-8285:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
Move HADOOP-8285 and HADOOP-8366 to 2.0.0 in CHANGES.txt. (Revision 1337431)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337431
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Use ProtoBuf for RpcPayLoadHeader
 -

 Key: HADOOP-8285
 URL: https://issues.apache.org/jira/browse/HADOOP-8285
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 2.0.0

 Attachments: hadoop-8285-1-common.patch, hadoop-8285-1.patch, 
 hadoop-8285-2-common.patch, hadoop-8285-2.patch, hadoop-8285-3-common.patch, 
 hadoop-8285-3.patch, hadoop-8285-4-common.patch, hadoop-8285-4.patch, 
 hadoop-8285-5-common.patch, hadoop-8285-5.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8224) Don't hardcode hdfs.audit.logger in the scripts

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273986#comment-13273986
 ] 

Hudson commented on HADOOP-8224:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
HADOOP-8224. Don't hardcode hdfs.audit.logger in the scripts. Contributed 
by Tomohiko Kinebuchi (Revision 1337339)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337339
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh


 Don't hardcode hdfs.audit.logger in the scripts
 ---

 Key: HADOOP-8224
 URL: https://issues.apache.org/jira/browse/HADOOP-8224
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Tomohiko Kinebuchi
 Fix For: 2.0.0

 Attachments: HADOOP-8224.txt, HADOOP-8224.txt, hadoop-8224.txt


 The HADOOP_*OPTS defined for HDFS in hadoop-env.sh hard-code the 
 hdfs.audit.logger (it is explicitly set via -Dhdfs.audit.logger=INFO,RFAAUDIT), 
 so it is not overridable. Let's allow it to be overridden, as we do for the 
 other parameters, by introducing a HADOOP_AUDIT_LOGGER variable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8113) Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too (not just MapReduce)

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273987#comment-13273987
 ] 

Hudson commented on HADOOP-8113:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
Add HADOOP-8113 to CHANGES.txt (Revision 1337415)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337415
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too (not just 
 MapReduce)
 ---

 Key: HADOOP-8113
 URL: https://issues.apache.org/jira/browse/HADOOP-8113
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.2
Reporter: Eugene Koontz
Assignee: Eugene Koontz
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HADOOP-8113.patch


 Currently BUILDING.txt states: 
 {quote}
   ProtocolBuffer 2.4.1+ (for MapReduce)
 {quote}
 But HDFS needs ProtocolBuffer too: 
 {code}
 hadoop-common/hadoop-hdfs-project$ find . -name "*.proto" | wc -l
   11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273989#comment-13273989
 ] 

Hudson commented on HADOOP-8395:


Integrated in Hadoop-Hdfs-trunk #1041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1041/])
HADOOP-8395. Text shell command unnecessarily demands that a SequenceFile's 
key class be WritableComparable (harsh) (Revision 1337449)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337449
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java


 Text shell command unnecessarily demands that a SequenceFile's key class be 
 WritableComparable
 --

 Key: HADOOP-8395
 URL: https://issues.apache.org/jira/browse/HADOOP-8395
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
  Labels: shell
 Fix For: 3.0.0

 Attachments: HADOOP-8395.patch


 Text, from the Display set of shell commands (hadoop fs -text), has a strict 
 check that the key class loaded from a SequenceFile's header be a subclass of 
 WritableComparable.
 The SequenceFile writer itself has no such check (one can create sequence 
 files with plain Writable keys; Comparable is needed only by the SequenceFile 
 sorter, which not every file uses), so it is not reasonable for the Text 
 command to carry it either.
 We should relax the check to require only Writable, not WritableComparable.
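 A minimal sketch of the relaxed check, assuming the key class is read from a 
 SequenceFile.Reader header as Display.java does; the surrounding code and the 
 message text are assumptions, not the committed patch:
 {code}
 // Sketch: accept any Writable key instead of insisting on WritableComparable.
 Class<?> keyClass = reader.getKeyClass();  // reader is a SequenceFile.Reader
 if (!org.apache.hadoop.io.Writable.class.isAssignableFrom(keyClass)) {
   throw new java.io.IOException(keyClass + " is not a Writable key class");
 }
 {code}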

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273992#comment-13273992
 ] 

Hadoop QA commented on HADOOP-6801:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526631/HADOOP-6801.r4.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 8 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/990//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/990//console

This message is automatically generated.

 io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
 still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
 ---

 Key: HADOOP-6801
 URL: https://issues.apache.org/jira/browse/HADOOP-6801
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Erik Steffl
Assignee: Harsh J
Priority: Minor
 Attachments: HADOOP-6801.r1.diff, HADOOP-6801.r2.diff, 
 HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, HADOOP-6801.r5.diff


 The following configuration keys in CommonConfigurationKeysPublic.java 
 (formerly CommonConfigurationKeys.java):
 public static final String  IO_SORT_MB_KEY = "io.sort.mb";
 public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
 have only been partially moved:
   - they were renamed to mapreduce.task.io.sort.mb and 
 mapreduce.task.io.sort.factor respectively
   - they were moved to the mapreduce project and documented in 
 mapred-default.xml
 However:
   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
 above
   - the strings "io.sort.mb" and "io.sort.factor" are still used in 
 SequenceFile.java in the Hadoop Common project
 Not sure what the solution is; these constants should probably be removed 
 from CommonConfigurationKeysPublic.java, but I am not sure what the best 
 solution is for SequenceFile.java.
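 One possible fallback, sketched purely as an assumption (the key names come 
 from the description above; the helper class and the default value are 
 illustrative, not an agreed solution):
 {code}
 import org.apache.hadoop.conf.Configuration;

 // Sketch: prefer the new MapReduce key and fall back to the legacy literal,
 // so common code need not keep the old constant around.
 public class SortFactorSketch {
   public static int getSortFactor(Configuration conf) {
     int legacy = conf.getInt("io.sort.factor", 100);  // 100 is an illustrative default
     return conf.getInt("mapreduce.task.io.sort.factor", legacy);
   }
 }
 {code}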

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2012-05-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13273999#comment-13273999
 ] 

Hadoop QA commented on HADOOP-6801:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526633/HADOOP-6801.r5.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/991//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/991//console

This message is automatically generated.

 io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
 still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
 ---

 Key: HADOOP-6801
 URL: https://issues.apache.org/jira/browse/HADOOP-6801
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Erik Steffl
Assignee: Harsh J
Priority: Minor
 Attachments: HADOOP-6801.r1.diff, HADOOP-6801.r2.diff, 
 HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, HADOOP-6801.r5.diff


 The following configuration keys in CommonConfigurationKeysPublic.java 
 (formerly CommonConfigurationKeys.java):
 public static final String  IO_SORT_MB_KEY = "io.sort.mb";
 public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
 have only been partially moved:
   - they were renamed to mapreduce.task.io.sort.mb and 
 mapreduce.task.io.sort.factor respectively
   - they were moved to the mapreduce project and documented in 
 mapred-default.xml
 However:
   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
 above
   - the strings "io.sort.mb" and "io.sort.factor" are still used in 
 SequenceFile.java in the Hadoop Common project
 Not sure what the solution is; these constants should probably be removed 
 from CommonConfigurationKeysPublic.java, but I am not sure what the best 
 solution is for SequenceFile.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274004#comment-13274004
 ] 

Hudson commented on HADOOP-8353:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
HADOOP-8353. hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop. 
Contributed by Roman Shaposhnik. (Revision 1337251)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/bin/yarn-daemon.sh


 hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
 -

 Key: HADOOP-8353
 URL: https://issues.apache.org/jira/browse/HADOOP-8353
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8353-2.patch.txt, HADOOP-8353.patch.txt


 The stop action is implemented as a simple SIGTERM sent to the JVM. There is 
 a delay between when the action is invoked and when the process actually 
 exits. This can be misleading to callers of the *-daemon.sh scripts, since 
 they expect the stop action to return only once the process has actually 
 stopped.
 I suggest we augment the stop action with a time-delayed check of the process 
 status, followed by a SIGKILL once the delay has expired.
 I understand that sending SIGKILL is a measure of last resort and is 
 generally frowned upon among init.d script writers, but the excuse we have 
 for Hadoop is that it is engineered to be a fault-tolerant system, so there 
 is no danger of putting the system into an inconsistent state with a violent 
 SIGKILL. Of course, the time delay should be long enough to make a SIGKILL a 
 rare event.
 Finally, there is always the option of an exponential back-off scheme if we 
 decide the SIGKILL timeout is too short.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8316) Audit logging should be disabled by default

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274005#comment-13274005
 ] 

Hudson commented on HADOOP-8316:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
HADOOP-8316. Audit logging should be disabled by default. Contributed by 
Eli Collins (Revision 1337334)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337334
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/log4j.properties


 Audit logging should be disabled by default
 ---

 Key: HADOOP-8316
 URL: https://issues.apache.org/jira/browse/HADOOP-8316
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hadoop-8316.txt


 HADOOP-7633 turned hdfs, mr and security audit logging on by default (INFO 
 level) in the log4j.properties used for the packages; this then got copied 
 over to the non-packaging log4j.properties in HADOOP-8216 (which made them 
 consistent).
 It seems we should keep the v1.x setting, which is disabled (WARNING level) 
 by default. There is a performance overhead to audit logging, and HADOOP-7633 
 provided no rationale (just "We should add the audit logs as part of default 
 confs") as to why they were enabled for the packages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274006#comment-13274006
 ] 

Hudson commented on HADOOP-8366:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
Move HADOOP-8285 and HADOOP-8366 to 2.0.0 in CHANGES.txt. (Revision 1337431)
HADOOP-8366 Use ProtoBuf for RpcResponseHeader (sanjay radia) (Revision 1337283)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337431
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337283
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Status.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcPayloadHeader.proto


 Use ProtoBuf for RpcResponseHeader
 --

 Key: HADOOP-8366
 URL: https://issues.apache.org/jira/browse/HADOOP-8366
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Blocker
 Fix For: 2.0.0

 Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
 hadoop-8366-3.patch, hadoop-8366-4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8285) Use ProtoBuf for RpcPayLoadHeader

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274007#comment-13274007
 ] 

Hudson commented on HADOOP-8285:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
Move HADOOP-8285 and HADOOP-8366 to 2.0.0 in CHANGES.txt. (Revision 1337431)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337431
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Use ProtoBuf for RpcPayLoadHeader
 -

 Key: HADOOP-8285
 URL: https://issues.apache.org/jira/browse/HADOOP-8285
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 2.0.0

 Attachments: hadoop-8285-1-common.patch, hadoop-8285-1.patch, 
 hadoop-8285-2-common.patch, hadoop-8285-2.patch, hadoop-8285-3-common.patch, 
 hadoop-8285-3.patch, hadoop-8285-4-common.patch, hadoop-8285-4.patch, 
 hadoop-8285-5-common.patch, hadoop-8285-5.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8224) Don't hardcode hdfs.audit.logger in the scripts

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274008#comment-13274008
 ] 

Hudson commented on HADOOP-8224:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
HADOOP-8224. Don't hardcode hdfs.audit.logger in the scripts. Contributed 
by Tomohiko Kinebuchi (Revision 1337339)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337339
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh


 Don't hardcode hdfs.audit.logger in the scripts
 ---

 Key: HADOOP-8224
 URL: https://issues.apache.org/jira/browse/HADOOP-8224
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Tomohiko Kinebuchi
 Fix For: 2.0.0

 Attachments: HADOOP-8224.txt, HADOOP-8224.txt, hadoop-8224.txt


 The HADOOP_*OPTS defined for HDFS in hadoop-env.sh hard-code the 
 hdfs.audit.logger (it is explicitly set via -Dhdfs.audit.logger=INFO,RFAAUDIT), 
 so it is not overridable. Let's allow it to be overridden, as we do for the 
 other parameters, by introducing a HADOOP_AUDIT_LOGGER variable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8113) Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too (not just MapReduce)

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274009#comment-13274009
 ] 

Hudson commented on HADOOP-8113:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
Add HADOOP-8113 to CHANGES.txt (Revision 1337415)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337415
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too (not just 
 MapReduce)
 ---

 Key: HADOOP-8113
 URL: https://issues.apache.org/jira/browse/HADOOP-8113
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.2
Reporter: Eugene Koontz
Assignee: Eugene Koontz
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HADOOP-8113.patch


 Currently BUILDING.txt states: 
 {quote}
   ProtocolBuffer 2.4.1+ (for MapReduce)
 {quote}
 But HDFS needs ProtocolBuffer too: 
 {code}
 hadoop-common/hadoop-hdfs-project$ find . -name "*.proto" | wc -l
   11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8395) Text shell command unnecessarily demands that a SequenceFile's key class be WritableComparable

2012-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274011#comment-13274011
 ] 

Hudson commented on HADOOP-8395:


Integrated in Hadoop-Mapreduce-trunk #1077 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1077/])
HADOOP-8395. Text shell command unnecessarily demands that a SequenceFile's 
key class be WritableComparable (harsh) (Revision 1337449)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1337449
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java


 Text shell command unnecessarily demands that a SequenceFile's key class be 
 WritableComparable
 --

 Key: HADOOP-8395
 URL: https://issues.apache.org/jira/browse/HADOOP-8395
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
  Labels: shell
 Fix For: 3.0.0

 Attachments: HADOOP-8395.patch


 Text, from the Display set of shell commands (hadoop fs -text), has a strict 
 check that the key class loaded from a SequenceFile's header be a subclass of 
 WritableComparable.
 The SequenceFile writer itself has no such check (one can create sequence 
 files with plain Writable keys; Comparable is needed only by the SequenceFile 
 sorter, which not every file uses), so it is not reasonable for the Text 
 command to carry it either.
 We should relax the check to require only Writable, not WritableComparable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8276) Auto-HA: add config for java options to pass to zkfc daemon

2012-05-12 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8276.
-

   Resolution: Fixed
Fix Version/s: Auto Failover (HDFS-3042)
 Hadoop Flags: Reviewed

 Auto-HA: add config for java options to pass to zkfc daemon
 ---

 Key: HADOOP-8276
 URL: https://issues.apache.org/jira/browse/HADOOP-8276
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: Auto Failover (HDFS-3042)

 Attachments: hadoop-8276.txt


 Currently the zkfc daemon is started without any way to specify java 
 options for it. We should add a flag so that heap size, etc. can be specified.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8397) NPE thrown when IPC layer gets an EOF reading a response

2012-05-12 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8397:
---

 Summary: NPE thrown when IPC layer gets an EOF reading a response
 Key: HADOOP-8397
 URL: https://issues.apache.org/jira/browse/HADOOP-8397
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Priority: Critical


When making a call on an IPC connection where the other end has shut down, I 
see the following exception:
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:852)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:781)
from the lines:
{code}
RpcResponseHeaderProto response = 
RpcResponseHeaderProto.parseDelimitedFrom(in);
int callId = response.getCallId();
{code}
This is because parseDelimitedFrom() returns null when the next thing to be 
read from the stream is an EOF.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8397) NPE thrown when IPC layer gets an EOF reading a response

2012-05-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274110#comment-13274110
 ] 

Todd Lipcon commented on HADOOP-8397:
-

I think we just need to check for a null result, and throw EOFException in that 
case.
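A sketch of that guard, assuming it sits in Client.Connection#receiveResponse() 
around the lines quoted in the description below (the exception message is 
illustrative):
{code}
RpcResponseHeaderProto response = RpcResponseHeaderProto.parseDelimitedFrom(in);
if (response == null) {
  // parseDelimitedFrom() signals EOF by returning null; surface it as a
  // java.io.EOFException rather than letting an NPE escape.
  throw new EOFException("Connection closed while reading an RPC response header");
}
int callId = response.getCallId();
{code}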

 NPE thrown when IPC layer gets an EOF reading a response
 

 Key: HADOOP-8397
 URL: https://issues.apache.org/jira/browse/HADOOP-8397
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Priority: Critical

 When making a call on an IPC connection where the other end has shut down, I 
 see the following exception:
 Caused by: java.lang.NullPointerException
 at 
 org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:852)
 at org.apache.hadoop.ipc.Client$Connection.run(Client.java:781)
 from the lines:
 {code}
 RpcResponseHeaderProto response = 
 RpcResponseHeaderProto.parseDelimitedFrom(in);
 int callId = response.getCallId();
 {code}
 This is because parseDelimitedFrom() returns null when the next thing to be 
 read from the stream is an EOF.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira