Re: issues with hadoop in AIX

2009-01-05 Thread Steve Loughran

Allen Wittenauer wrote:

On 12/27/08 12:18 AM, Arun Venugopal arunvenugopa...@gmail.com wrote:

Yes, I was able to run this on AIX as well with a minor change to the
DF.java code. But this was more of a proof of concept than on a
production system.


There are lots of places where Hadoop (esp. in contrib) interprets the
output of Unix command line utilities. Changes like this are likely going to
be required for AIX and other Unix systems that aren't being used by a
committer. :(



and, equally importantly, that aren't being exercised in the test process


I think Hudson runs on Solaris, doesn't it?
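[As a reference point alongside the DF.java discussion: on a Java 6+ runtime the JVM can report disk space itself, which sidesteps platform-specific `df` parsing entirely. A minimal sketch, illustrative only and not Hadoop's actual code:]

```java
import java.io.File;

// Query disk space through the JVM instead of parsing `df` output.
// File.getTotalSpace()/getUsableSpace() exist since Java 6 and behave
// the same way on Linux, Solaris, and AIX.
public class UsableSpace {
    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : ".");
        long totalKb = dir.getTotalSpace() / 1024L;   // filesystem capacity, KB
        long freeKb  = dir.getUsableSpace() / 1024L;  // space writable by this JVM, KB
        System.out.println("total KB: " + totalKb + ", usable KB: " + freeKb);
    }
}
```

[Compile and run with a path argument, e.g. `java UsableSpace /opt/hadoop`, to report the capacity of the filesystem holding that path.]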


Re: issues with hadoop in AIX

2008-12-29 Thread ps40
-- 
View this message in context: 
http://www.nabble.com/issues-with-hadoop-in-AIX-tp18959680p21209012.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: issues with hadoop in AIX

2008-12-27 Thread Allen Wittenauer
On 12/27/08 12:18 AM, Arun Venugopal arunvenugopa...@gmail.com wrote:
 Yes, I was able to run this on AIX as well with a minor change to the
 DF.java code. But this was more of a proof of concept than on a
 production system.

There are lots of places where Hadoop (esp. in contrib) interprets the
output of Unix command line utilities. Changes like this are likely going to
be required for AIX and other Unix systems that aren't being used by a
committer. :(
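[To make the fragility concrete: Hadoop's DF class shells out to `df -k` and picks fields out of the output line by position, so the parse silently depends on the platform's column layout. The sketch below is illustrative, not the real DF.java, and the sample lines are from-memory approximations of each platform's layout. Since the DataNode reports its capacity via this figure, a misparse on AIX is one plausible route to the "could only be replicated to 0 nodes" error in the original post.]

```java
import java.util.StringTokenizer;

// Illustrative sketch of why parsing `df -k` output is platform-sensitive.
// GNU/Linux layout:      Filesystem 1K-blocks Used Available Use% Mounted on
// AIX layout (approx.):  Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
public class DfParseSketch {

    // Extract the "available KB" field assuming the GNU/Linux column order.
    static long availableKbLinux(String dataLine) {
        StringTokenizer t = new StringTokenizer(dataLine);
        t.nextToken();                         // filesystem
        t.nextToken();                         // total 1K-blocks
        t.nextToken();                         // used
        return Long.parseLong(t.nextToken());  // available
    }

    public static void main(String[] args) {
        String linux = "/dev/sda1 10321208 5242880 4554112 54% /";
        System.out.println(availableKbLinux(linux)); // prints 4554112

        // On AIX the fourth column is "%Used", so the same parser hits a
        // non-numeric token and throws instead of returning free space.
        String aix = "/dev/hd4 1048576 524288 50% 12345 10% /";
        try {
            availableKbLinux(aix);
        } catch (NumberFormatException e) {
            System.out.println("AIX layout breaks the Linux parser: " + e.getMessage());
        }
    }
}
```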




Re: issues with hadoop in AIX

2008-12-26 Thread Brian Bockelman


On Dec 25, 2008, at 11:14 AM, ps40 wrote:



Thanks for replying. Are you guys using Hadoop on Solaris in a production environment?



Yes! It's worked well so far (we switched our first Solaris server onto Hadoop about 2 weeks ago, so it's not like we have a plethora of experience). We're running it on a Sun Thumper on a large ZFS RAID pool. We are discussing turning on a second Thumper with a non-RAID configuration, and will be observing the difference.

It's all Sun Java, so we don't really notice the difference. I don't have any experience using the IBM JDK, so AIX might be a tougher cookie.


Brian




Re: issues with hadoop in AIX

2008-12-26 Thread Arun Venugopal
Yes, I was able to run this on AIX as well with a minor change to the DF.java code. But this was more of a proof of concept than on a production system.

Thanks,
Arun






--
View this message in context: 
http://www.nabble.com/issues-with-hadoop-in-AIX-tp18959680p21168778.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.






Re: issues with hadoop in AIX

2008-12-25 Thread ps40

Thanks for replying. Are you guys using Hadoop on Solaris in a production
environment?



Brian Bockelman wrote:
 
 Hey,
 
 I can attest that Hadoop works on Solaris 10 just fine.
 
 Brian
 

-- 
View this message in context: 
http://www.nabble.com/issues-with-hadoop-in-AIX-tp18959680p21168778.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: issues with hadoop in AIX

2008-12-24 Thread ps40

Hi,

I saw that a fix was created for this issue. Were you able to run Hadoop on AIX after this? We are in a similar situation and are wondering if Hadoop will work on AIX and Solaris.

Thanks



-- 
View this message in context: 
http://www.nabble.com/issues-with-hadoop-in-AIX-tp18959680p21156899.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: issues with hadoop in AIX

2008-12-24 Thread Brian Bockelman

Hey,

I can attest that Hadoop works on Solaris 10 just fine.

Brian

On Dec 24, 2008, at 10:26 AM, ps40 wrote:



Hi,

I saw that a fix was created for this issue. Were you able to run Hadoop on AIX after this? We are in a similar situation and are wondering if Hadoop will work on AIX and Solaris.

Thanks






issues with hadoop in AIX

2008-08-13 Thread Arun Venugopal
Hi,

I am evaluating Hadoop's portability across heterogeneous hardware and software platforms. For this I am trying to set up a grid (Hadoop 0.17) spanning Linux (RHEL5 / FC 9), Solaris (SunOS 5), and AIX (5.3). I was able to set up a grid with 10 Linux machines and run some basic jobs on it. I was also able to set up and start a standalone grid on an AIX machine. But when I try to copy data into this (AIX) grid, I get the following error:


$ /opt/hadoop/bin/hadoop dfs -ls
ls: Cannot access .: No such file or directory.

$ /opt/hadoop/bin/hadoop dfs -copyFromLocal option_100_data.csv option_100_data.csv
08/08/13 02:36:50 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/option_100_data.csv could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1145)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:300)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:615)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)

        at org.apache.hadoop.ipc.Client.call(Client.java:557)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:615)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2334)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2219)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1700(DFSClient.java:1702)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1842)

08/08/13 02:36:50 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/option_100_data.csv retries left 4

08/08/13 02:36:51 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/option_100_data.csv could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1145)

08/08/13 02:36:57 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
$



This is the only error I am seeing in the log files. My first suspicion was that the issue was due to the IBM JDK. To rule that out I ran Hadoop with the IBM JDK on a Linux (FC 9) machine, and it worked perfectly.

It would be a great help if someone could give me some pointers on what I
can try to solve / debug this error.
Thanks,
Arun
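[For readers who hit the same symptoms: the initial `ls: Cannot access .` simply means the user's HDFS home directory does not exist yet, while "could only be replicated to 0 nodes" means the NameNode sees no live DataNode with usable space; `hadoop dfsadmin -report` and the DataNode log will confirm that. On 0.17-era Hadoop it is also worth pinning the storage directories in conf/hadoop-site.xml instead of leaving them under the /tmp default. A hedged sketch, with example paths only:]

```xml
<!-- conf/hadoop-site.xml fragment (0.17-era property names; example paths) -->
<property>
  <name>dfs.name.dir</name>
  <value>/opt/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/opt/hadoop/dfs/data</value>
</property>
```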