Hi Johannes,

If you use the newly added feature in Pig (revision 630355) that allows the
unit tests to use the mini cluster, then stuff like ~/pigtest/conf and
hadoop.log.dir is already taken care of in the build.xml file... unless you
don't use build.xml in your development environment :).
Best regards,
Xu

> From: Johannes Zillmann <[EMAIL PROTECTED]>
> Reply-To: <[email protected]>
> Date: Sun, 24 Feb 2008 17:35:41 -0800
> To: <[email protected]>
> Subject: Re: running MiniCluster test from eclipse
>
> Ok, got it running.
> I had to add ~/pigtest/conf to the classpath and set hadoop.log.dir as a
> system property!
>
> Johannes
>
> Johannes Zillmann wrote:
>> Hi there,
>>
>> I've tried to run one of the tests which use the MiniCluster, but
>> they're failing with the following exceptions:
>>
>> 2008-02-23 21:25:46,531 ERROR dfs.DataNode (DataNode.java:run(813)) -
>> DataXceiver: java.io.IOException: Block blk_861876555015614032 is
>> valid, and cannot be written to.
>>     at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:551)
>>     at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1257)
>>     at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:901)
>>     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:804)
>>     at java.lang.Thread.run(Thread.java:613)
>>
>> 2008-02-23 21:25:46,531 WARN dfs.DataNode (DataNode.java:run(1467)) -
>> Failed to transfer blk_861876555015614032 to 127.0.0.1:50010 got
>> java.net.SocketException: Connection reset
>>     at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
>>     at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>     at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>>     at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>     at org.apache.hadoop.dfs.DataNode$BlockSender.sendChunk(DataNode.java:1175)
>>     at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1208)
>>     at org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:1460)
>>     at java.lang.Thread.run(Thread.java:613)
>>
>> 2008-02-23 21:25:46,848 INFO dfs.DataNode (DataNode.java:transferBlocks(690)) -
>> Starting thread to transfer block blk_861876555015614032 to
>> [Lorg.apache.hadoop.dfs.DatanodeInfo;@38933d
>> 2008-02-23 21:25:46,850 ERROR dfs.DataNode (DataNode.java:run(813)) -
>> DataXceiver: java.io.IOException: Block blk_861876555015614032 is
>> valid, and cannot be written to.
>>     at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:551)
>>     at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1257)
>>     at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:901)
>>     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:804)
>>     at java.lang.Thread.run(Thread.java:613)
>>
>> Running the tests with ant works fine!
>>
>> Any ideas? Or has anybody run the tests from eclipse successfully?
>>
>> regards
>> Johannes
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 101tec GmbH
>
> Halle (Saale), Saxony-Anhalt, Germany
> http://www.101tec.com
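For anyone else hitting this outside of ant: the workaround Johannes describes
(set hadoop.log.dir before the MiniCluster starts, with ~/pigtest/conf on the
classpath) can be sketched as a small launcher. The class name and the log
directory path below are illustrative assumptions, not part of Pig's build:

```java
// Sketch of the Eclipse workaround described above. build.xml normally sets
// hadoop.log.dir for the test run; an Eclipse launch configuration does not,
// so we set it here before any Hadoop class initializes its logging.
// MiniClusterEclipseLauncher and the ~/pigtest/logs path are assumptions.
public class MiniClusterEclipseLauncher {

    public static void main(String[] args) {
        // Only fill in the property if the launch configuration didn't.
        if (System.getProperty("hadoop.log.dir") == null) {
            System.setProperty("hadoop.log.dir",
                    System.getProperty("user.home") + "/pigtest/logs");
        }
        System.out.println("hadoop.log.dir = "
                + System.getProperty("hadoop.log.dir"));
        // ...then hand off to the actual MiniCluster test runner here.
    }
}
```

The same effect can be had without a launcher by adding a VM argument such as
-Dhadoop.log.dir=/path/to/pigtest/logs to the Eclipse run configuration, and
~/pigtest/conf still has to be on the run configuration's classpath either way.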
