Hadoop-Hdfs-0.23-Build - Build # 489 - Still Failing

2013-01-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/489/

### LAST 60 LINES OF THE CONSOLE ###
[...truncated 23574 lines...]
 [exec] 
 [exec] BUILD FAILED
 [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
 [exec] 
 [exec] There appears to be a problem with your site build.
 [exec] 
 [exec] Read the output above:
 [exec] * Cocoon will report the status of each document:
 [exec] - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
 [exec] * Even if only one link is broken, you will still get failed.
 [exec] * Your site would still be generated, but some pages would be broken.
 [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
 [exec] 
 [exec] Total time: 19 seconds
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [1:59.629s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 2:00.710s
[INFO] Finished at: Wed Jan 09 11:35:53 UTC 2013
[INFO] Final Memory: 37M/517M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
ERROR: Publisher hudson.tasks.JavadocArchiver aborted due to exception
java.lang.IllegalStateException: basedir /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/site/api does not exist.
at org.apache.tools.ant.DirectoryScanner.scan(DirectoryScanner.java:879)
at hudson.FilePath$37.hasMatch(FilePath.java:2109)
at hudson.FilePath$37.invoke(FilePath.java:2006)
at hudson.FilePath$37.invoke(FilePath.java:1996)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2309)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



### FAILED TESTS (if any) ###
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testDirectoryScanner

Error Message:
IPC server unable to read call parameters: readObject can't find class org.apache.hadoop.io.Writable

Stack Trace:
java.lang.RuntimeException: IPC server unable to read call parameters: readObject can't find class org.apache.hadoop.io.Writable
at org.apache.hadoop.ipc.Client.call(Client.java:1088)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:195)
at $Proxy11.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 

Hadoop-Hdfs-trunk - Build # 1280 - Still Failing

2013-01-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/

### LAST 60 LINES OF THE CONSOLE ###
[...truncated 10502 lines...]
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: asf005.sp2.ygridcore.net/67.195.138.27; destination host is: localhost:46057;
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1024, Failures: 0, Errors: 18, Skipped: 5

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [1:38:24.071s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 

Build failed in Jenkins: Hadoop-Hdfs-trunk #1280

2013-01-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/changes

Changes:

[vinodkv] MAPREDUCE-4810. Added new admin command options for MR AM. 
Contributed by Jerry Chen.

[acmurthy] MAPREDUCE-4520. Added support for MapReduce applications to request 
for CPU cores along-with memory post YARN-2. Contributed by Arun C. Murthy.

[acmurthy] YARN-2. Enhanced CapacityScheduler to account for CPU alongwith 
memory for multi-dimensional resource scheduling. Contributed by Arun C. Murthy.

[szetszwo] svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate 
arguments to BlockReaderFactory in a class

[szetszwo] svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate 
connections to peers in Peer and PeerServer classes

[eli] HDFS-4035. LightWeightGSet and LightWeightHashSet increment a volatile 
without synchronization. Contributed by Eli Collins

[eli] HDFS-4034. Remove redundant null checks. Contributed by Eli Collins

[eli] Updated CHANGES.txt to add HDFS-4033.

[eli] HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins

[todd] HDFS-4353. Encapsulate connections to peers in Peer and PeerServer 
classes. Contributed by Colin Patrick McCabe.

[eli] HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 
exclusions. Contributed by Eli Collins

[eli] HDFS-4030. BlockManager excessBlocksCount and 
postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli 
Collins

[suresh] HADOOP-9119. Add test to FileSystemContractBaseTest to verify 
integrity of overwritten files. Contributed by Steve Loughran.

[tomwhite] MAPREDUCE-4278. Cannot run two local jobs in parallel from the same 
gateway. Contributed by Sandy Ryza.

[vinodkv] YARN-253. Fixed container-launch to not fail when there are no local 
resources to localize. Contributed by Tom White.

--
[...truncated 10309 lines...]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 14 sec   ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:464)
at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingFile(FileSystemContractBaseTest.java:410)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at 

[jira] [Created] (HDFS-4372) Track NameNode startup progress

2013-01-09 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4372:
---

 Summary: Track NameNode startup progress
 Key: HDFS-4372
 URL: https://issues.apache.org/jira/browse/HDFS-4372
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Track detailed progress information about the steps of NameNode startup so that it can be displayed to users.
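The kind of tracking described here can be sketched as a per-phase counter structure. Everything below (class name, phase names) is a hypothetical illustration, not code from any HDFS patch:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of per-phase startup progress tracking.
public class StartupProgressSketch {
    // Illustrative phases; the real NameNode startup steps may differ.
    public enum Phase { LOADING_FSIMAGE, LOADING_EDITS, SAFEMODE }

    // phase -> {done, total} counters
    private final Map<Phase, long[]> counts = new EnumMap<>(Phase.class);

    public StartupProgressSketch() {
        for (Phase p : Phase.values()) {
            counts.put(p, new long[] {0L, 0L});
        }
    }

    public void setTotal(Phase p, long total) { counts.get(p)[1] = total; }

    public void incrementDone(Phase p) { counts.get(p)[0]++; }

    // Fraction complete for one phase, in [0.0, 1.0].
    public double getPercentComplete(Phase p) {
        long[] c = counts.get(p);
        return c[1] == 0 ? 0.0 : Math.min(1.0, (double) c[0] / c[1]);
    }

    public static void main(String[] args) {
        StartupProgressSketch sp = new StartupProgressSketch();
        sp.setTotal(Phase.LOADING_EDITS, 4);
        sp.incrementDone(Phase.LOADING_EDITS);
        sp.incrementDone(Phase.LOADING_EDITS);
        System.out.println(sp.getPercentComplete(Phase.LOADING_EDITS)); // 0.5
    }
}
```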

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4373) Add HTTP API for querying NameNode startup progress

2013-01-09 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4373:
---

 Summary: Add HTTP API for querying NameNode startup progress
 Key: HDFS-4373
 URL: https://issues.apache.org/jira/browse/HDFS-4373
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Provide an HTTP API for non-browser clients to query the NameNode's current 
progress through startup.



[jira] [Resolved] (HDFS-4244) Support deleting snapshots

2013-01-09 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4244.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

I have committed this. Thanks, Jing!

 Support deleting snapshots
 --

 Key: HDFS-4244
 URL: https://issues.apache.org/jira/browse/HDFS-4244
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
 HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, 
 HDFS-4244.006.patch, HDFS-4244.007.patch


 Provide functionality to delete a snapshot, given the name of the snapshot 
 and the path to the directory where the snapshot was taken.
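The contract described, deletion keyed by the snapshottable directory plus the snapshot name, can be illustrated with a tiny in-memory model. All names here are hypothetical and bear no relation to the actual HDFS implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative in-memory model of the snapshot-deletion contract:
// a snapshot is identified by (snapshottable directory, snapshot name).
public class SnapshotTableSketch {
    // directory path -> names of snapshots taken under it
    private final Map<String, Set<String>> snapshots = new HashMap<>();

    public void createSnapshot(String dir, String name) {
        snapshots.computeIfAbsent(dir, d -> new HashSet<>()).add(name);
    }

    // Returns true if the snapshot existed and was removed.
    public boolean deleteSnapshot(String dir, String name) {
        Set<String> names = snapshots.get(dir);
        return names != null && names.remove(name);
    }

    public static void main(String[] args) {
        SnapshotTableSketch t = new SnapshotTableSketch();
        t.createSnapshot("/foo", "s0");
        System.out.println(t.deleteSnapshot("/foo", "s0")); // true
        System.out.println(t.deleteSnapshot("/foo", "s0")); // false: already gone
    }
}
```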



[jira] [Created] (HDFS-4375) Use token request messages defined in hadoop common

2013-01-09 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4375:
-

 Summary: Use token request messages defined in hadoop common
 Key: HDFS-4375
 URL: https://issues.apache.org/jira/browse/HDFS-4375
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, security
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas


HDFS changes related to HADOOP-9192 to reuse the protobuf messages defined in 
common.



[jira] [Created] (HDFS-4376) Intermittent timeout of TestBalancerWithNodeGroup

2013-01-09 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-4376:


 Summary: Intermittent timeout of TestBalancerWithNodeGroup
 Key: HDFS-4376
 URL: https://issues.apache.org/jira/browse/HDFS-4376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer, test
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers
Priority: Minor
 Attachments: test-balancer-with-node-group-timeout.txt

HDFS-4261 fixed several issues with the balancer and balancer tests, and 
reduced the frequency with which TestBalancerWithNodeGroup times out. Despite 
this, occasional timeouts still occur in this test. This JIRA is to track and 
fix this problem.



[jira] [Created] (HDFS-4377) Some trivial DN comment cleanup

2013-01-09 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4377:
-

 Summary: Some trivial DN comment cleanup
 Key: HDFS-4377
 URL: https://issues.apache.org/jira/browse/HDFS-4377
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: hdfs-4377.txt

DataStorage.java
- The initilized member is misspelled
- Comment what the storageID member is

DataNode.java
- Clean up the createNewStorageId comment (it should mention that the port is included, and it is overly verbose)

BlockManager.java
- TreeSet in the comment should be TreeMap




[jira] [Created] (HDFS-4378) Create a StorageID class

2013-01-09 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4378:
-

 Summary: Create a StorageID class
 Key: HDFS-4378
 URL: https://issues.apache.org/jira/browse/HDFS-4378
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


We currently pass DataNode storage IDs around as strings. The code would be more readable (e.g. map keys could be typed as StorageID rather than String) and less error prone if we used a simple wrapper class.
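A minimal sketch of the kind of wrapper class proposed (a hypothetical shape, not the committed patch): wrapping the raw string lets map keys and method signatures say "StorageID" instead of "String", and equals/hashCode make it usable as a map key:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical value class wrapping a raw DataNode storage ID string.
public final class StorageID {
    private final String id;

    public StorageID(String id) {
        this.id = Objects.requireNonNull(id, "storage ID");
    }

    public String getValue() { return id; }

    @Override public boolean equals(Object o) {
        return o instanceof StorageID && id.equals(((StorageID) o).id);
    }

    @Override public int hashCode() { return id.hashCode(); }

    @Override public String toString() { return id; }

    public static void main(String[] args) {
        // Map keys are now typed, not bare strings.
        Map<StorageID, String> byStorage = new HashMap<>();
        byStorage.put(new StorageID("DS-123"), "datanode-1");
        System.out.println(byStorage.get(new StorageID("DS-123"))); // datanode-1
    }
}
```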



[jira] [Created] (HDFS-4379) DN block reports should include a sequence number

2013-01-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-4379:
-

 Summary: DN block reports should include a sequence number
 Key: HDFS-4379
 URL: https://issues.apache.org/jira/browse/HDFS-4379
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp


Block reports should include a monotonically increasing sequence number. If the sequence starts from zero, this will let the NN distinguish a DN restart (seqNum == 0) from a re-registration after a network interruption (seqNum != 0). The NN may also use it to identify and skip already-processed block reports.
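The proposed NameNode-side checks might look like the following sketch (a hypothetical helper, not actual NN code):

```java
// Hypothetical sketch of classifying block reports by sequence number.
public class BlockReportSequenceSketch {
    public enum Cause { RESTART, REREGISTRATION }

    // A report starting the sequence at zero implies a DN restart;
    // a non-zero sequence number implies a re-registration.
    public static Cause classify(long seqNum) {
        return seqNum == 0 ? Cause.RESTART : Cause.REREGISTRATION;
    }

    // A non-restart report is a duplicate if its sequence number does not
    // advance past the last one the NN processed for this DN.
    public static boolean alreadyProcessed(long lastSeenSeqNum, long seqNum) {
        return seqNum != 0 && seqNum <= lastSeenSeqNum;
    }

    public static void main(String[] args) {
        System.out.println(classify(0));  // RESTART
        System.out.println(classify(7));  // REREGISTRATION
    }
}
```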



[jira] [Created] (HDFS-4380) Opening a file for read before writer writes a block causes NPE

2013-01-09 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-4380:
-

 Summary: Opening a file for read before writer writes a block 
causes NPE
 Key: HDFS-4380
 URL: https://issues.apache.org/jira/browse/HDFS-4380
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Todd Lipcon


JD Cryans found this issue: if you open a file for read immediately after it has been created by the writer, after a block has been allocated but before the block is created on the DNs, then you can end up with the following NPE:

java.lang.NullPointerException
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.updateBlockInfo(DFSClient.java:1885)
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1858)
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.init(DFSClient.java:1834)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578)
   at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154)

This seems to be because {{getBlockInfo}} returns a null block when the DN 
doesn't yet have the replica. The client should probably either fall back to a 
different replica or treat it as zero-length.
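The suggested fallback could be sketched as follows (hypothetical names, not the DFSClient code): skip replicas that have no info yet, and treat the block as zero-length only when no replica reports it:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the suggested client-side fix: if a DN has no
// replica info yet for a just-allocated block, try another replica
// instead of dereferencing null; if none has it, treat it as zero-length.
public class BlockInfoFallbackSketch {
    // Stand-in for whatever per-replica info the client fetches.
    public static final class ReplicaInfo {
        final long numBytes;
        public ReplicaInfo(long numBytes) { this.numBytes = numBytes; }
    }

    // Returns the block length from the first replica that reports it;
    // a null entry (replica not created yet) falls through to the next.
    public static long resolveLength(List<ReplicaInfo> replies) {
        for (ReplicaInfo r : replies) {
            if (r != null) {
                return r.numBytes;
            }
        }
        return 0L; // no DN has the replica yet: treat as zero-length
    }

    public static void main(String[] args) {
        List<ReplicaInfo> fromDNs = Arrays.asList(null, new ReplicaInfo(42L));
        System.out.println(resolveLength(fromDNs)); // 42
    }
}
```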
