Re: Upgrading namenode/secondary node hardware

2011-06-17 Thread Steve Loughran
On 16/06/11 14:19, MilleBii wrote: But if my filesystem is up and running fine... do I have to worry at all, or will the copy (FTP transfer) of HDFS be enough? I'm not going to make any predictions there as to if/when things go wrong - you do need to shut down the FS before the move - you
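As a rough sketch of the "quiesce first" advice, the code below puts the namespace into safe mode and asks the NameNode to persist its image before the daemon is stopped and dfs.name.dir is copied to the new hardware. The setSafeMode/saveNamespace calls are DistributedFileSystem methods of the 0.20/0.21 era; the class name is made up, and the methods should be verified against the release actually in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class QuiesceBeforeMove {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Stop namespace modifications, then persist the current image to dfs.name.dir.
            dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_ENTER);
            dfs.saveNamespace();
            // After this point the NameNode should be stopped before the
            // name directory is copied to the new machine.
        }
        fs.close();
    }
}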

Name node problems

2011-06-17 Thread Віталій Тимчишин
Hello. My environment is HDFS 0.21, NameNode + BackupNode. After some time the BackupNode crashed with an exception (stack trace below). Problem #1 - the process did not exit. I've tried to run a Secondary NameNode to perform a checkpoint. Got a similar crash, but it did exit. Backed up my data and restarted

Re: Upgrading namenode/secondary node hardware

2011-06-17 Thread MilleBii
I see it is not so obvious and is potentially dangerous, so I will do some learning and experimenting first. Thanks for the tip. 2011/6/17 Steve Loughran ste...@apache.org On 16/06/11 14:19, MilleBii wrote: But if my filesystem is up and running fine... do I have to worry at all, or will the copy (ftp

Data node check dir storm

2011-06-17 Thread Vitalii Tymchyshyn
Hello. I can see that if a data node receives some IO error, this can cause a checkDir storm. What I mean is: 1) any error produces a DataNode.checkDiskError call; 2) this call locks the volume: java.lang.Thread.State: RUNNABLE at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
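To make the reported pattern concrete, here is a deliberately simplified sketch (not the actual DataNode source): every I/O error funnels into one synchronized full directory scan, so a burst of errors queues many threads behind the same lock - the "checkDir storm".

import java.io.File;
import java.io.IOException;

// Simplified illustration of the reported pattern: every I/O error triggers a
// full, synchronized scan of all volumes, so concurrent errors serialize on one lock.
public class CheckDirStormSketch {
    private final File[] volumes;

    public CheckDirStormSketch(File[] volumes) {
        this.volumes = volumes;
    }

    // Analogous to DataNode.checkDiskError(): called from every error path.
    public synchronized void checkDiskError() {
        for (File volume : volumes) {
            walk(volume);   // getBooleanAttributes0() shows up under calls like canRead()
        }
    }

    private void walk(File dir) {
        if (!dir.canRead() || !dir.canWrite()) {
            throw new IllegalStateException("Bad volume: " + dir);
        }
        File[] children = dir.listFiles();
        if (children == null) return;
        for (File child : children) {
            if (child.isDirectory()) {
                walk(child);
            }
        }
    }

    public void onIoError(IOException e) {
        checkDiskError();   // one full scan per error => storm under error bursts
    }
}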

Fw: HDFS File Appending URGENT

2011-06-17 Thread jagaran das
Please help me on this. I need it very urgently. Regards, Jagaran - Forwarded Message From: jagaran das jagaran_...@yahoo.co.in To: common-user@hadoop.apache.org Sent: Thu, 16 June, 2011 9:51:51 PM Subject: Re: HDFS File Appending URGENT Thanks a lot Xiabo. I have tried with the

Re: HDFS File Appending URGENT

2011-06-17 Thread Tsz Wo (Nicholas), Sze
Hi Jagaran, Short answer: the append feature is not in any release. In this sense, it is not stable. Below are more details on the append feature status. - 0.20.x (includes release 0.20.2): there are known bugs in append. The bugs may cause data loss. - 0.20-append: there were efforts on
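For reference, the client-side entry point under discussion is FileSystem.append(). A minimal sketch follows; the file path is made up, and the dfs.support.append flag reflects the 0.20-era gating described above - whether the call behaves safely depends on the release, per the status summary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // In the 0.20.x line append is gated behind this flag and is known to be buggy.
        conf.setBoolean("dfs.support.append", true);
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/append-demo.txt");   // hypothetical path
        FSDataOutputStream out = fs.append(file);       // throws if append is unsupported
        out.write("one more line\n".getBytes("UTF-8"));
        out.close();
        fs.close();
    }
}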

Re: HDFS File Appending URGENT

2011-06-17 Thread jagaran das
Thanks a lot guys. Another query for production: do we have any way by which we can purge the HDFS job and history logs on a time basis? For example, we want to keep only the last 30 days of logs, and their size is increasing a lot in production. Thanks again Regards, Jagaran
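Absent a built-in retention setting in that release, one option is a small cleanup utility (run, say, from cron) that deletes entries older than a cutoff. The sketch below uses the standard FileSystem API; the log directory path and class name are assumptions, and the same pattern works against the local filesystem via FileSystem.getLocal(conf).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PurgeOldLogs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path logDir = new Path("/var/log/hadoop/history");   // hypothetical location
        long cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;   // 30 days

        FileStatus[] entries = fs.listStatus(logDir);
        if (entries != null) {
            for (FileStatus status : entries) {
                if (status.getModificationTime() < cutoff) {
                    fs.delete(status.getPath(), true);   // recursively remove old entries
                }
            }
        }
        fs.close();
    }
}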

Starting an HDFS node (standalone) programmatically by API

2011-06-17 Thread punisher
Hi all, HDFS nodes can be started using the sh scripts provided with Hadoop. I read that it's all based on script files. Is it possible to start an HDFS node (standalone) from a Java application by API? Thanks
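The daemons are ordinary Java classes, so they can be started in-process. Below is a sketch against the 0.20-era factory methods; the exact signatures (and whether MiniDFSCluster from the test jars is a better fit for testing) should be checked against the version on the classpath, and the configuration values are assumed to come from the usual site files.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.namenode.NameNode;

public class EmbeddedHdfs {
    public static void main(String[] args) throws Exception {
        // Expects fs.default.name / dfs.name.dir / dfs.data.dir to be set,
        // e.g. via core-site.xml and hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();

        // Start a NameNode inside this JVM; this is the same factory the
        // shell-launched main() ends up calling.
        NameNode nameNode = NameNode.createNameNode(new String[] {}, conf);

        // A DataNode can be started the same way (normally in its own JVM/host).
        DataNode dataNode = DataNode.createDataNode(new String[] {}, conf);

        // Block until the daemons exit, mirroring what the scripts do.
        dataNode.join();
        nameNode.join();
    }
}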