Hadoop 0.20.0, XML parsing related error

2009-06-24 Thread murali krishna
Hi, I recently migrated to hadoop-0.20.0 and I am hitting https://issues.apache.org/jira/browse/HADOOP-5254: Failed to set setXIncludeAware(true) for parser org.apache.xerces.jaxp.DocumentBuilderFactoryImpl@1e9e5c73: java.lang.UnsupportedOperationException: This parser does not support

Re: Hadoop 0.20.0, XML parsing related error

2009-06-24 Thread Murali Krishna. P
Thanks Ram, setting that system property solved the issue. Thanks, Murali Krishna From: Ram Kulbak ram.kul...@gmail.com To: core-user@hadoop.apache.org Sent: Wednesday, 24 June, 2009 7:29:37 PM Subject: Re: Hadoop 0.20.0, XML parsing related error Hi
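For anyone hitting the same UnsupportedOperationException, the fix discussed in this thread (and on HADOOP-5254) is to point JAXP at the JDK's bundled parser instead of the stale Xerces jar on the classpath, either with -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl on the command line or programmatically. A minimal sketch, assuming a Sun/Oracle JDK:

    // Pin JAXP to the JDK's built-in factory so Hadoop's Configuration
    // parser supports setXIncludeAware(true). This must run before the
    // first org.apache.hadoop.conf.Configuration is created.
    public class XIncludeWorkaround {
        public static void main(String[] args) {
            System.setProperty(
                "javax.xml.parsers.DocumentBuilderFactory",
                "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
            // ... safe to start using Hadoop classes from here on ...
        }
    }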

Could not obtain block error

2008-10-29 Thread murali krishna
Hi, When I try to read one of the files from DFS, I get the following error in an infinite loop (using 0.15.3): "08/10/28 23:43:15 INFO fs.DFSClient: Could not obtain block blk_5994030096182059653 from any node: java.io.IOException: No live nodes contain current block". Fsck showed that the

Re: Could not obtain block error

2008-10-29 Thread Murali Krishna
Thanks Raghu. But both the block file and the .meta file are zero-sized!! Thanks, Murali On 10/30/08 12:16 AM, Raghu Angadi [EMAIL PROTECTED] wrote: One workaround for you is to go to the datanode and remove the .crc file for this block (find /datanodedir -name
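The quoted find command is cut off by the archive, so here is a rough plain-Java equivalent of what Raghu is suggesting: walk the datanode's data directory and list every on-disk file belonging to the block, so the zero-sized replicas and the stray .crc/.meta files are visible before anything is deleted. The block ID is the one from the error above; /datanodedir is a stand-in for your dfs.data.dir:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Sketch: locate all files for one block, mirroring
    // "find /datanodedir -name blk_...*" from the thread.
    public class FindBlockFiles {
        public static void main(String[] args) throws IOException {
            String blockId = "blk_5994030096182059653"; // from the error above
            Path dataDir = Paths.get("/datanodedir");   // stand-in for dfs.data.dir
            Files.walk(dataDir)
                 .filter(p -> p.getFileName().toString().startsWith(blockId))
                 .forEach(p -> System.out.println(
                         p + " : " + p.toFile().length() + " bytes"));
        }
    }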

distcp skipping the file

2008-07-22 Thread Murali Krishna
Hi, My source folder contains a single folder with a single file inside it: /user/user/distcpsrc/1/2 <r 3> 4 2008-07-22 04:22 At the destination, distcp creates the folder '1' but not the file '2', and the counters show that 1 file has been skipped. 08/07/22 04:22:36 INFO mapred.JobClient:
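For context on that skipped counter: a distcp-style copy compares each source file against the destination and counts a file as skipped when an entry of the same length already exists there, unless overwriting is forced. A hedged sketch of that check against the FileSystem API; the class and method here are illustrative, not distcp's actual code:

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative skip rule: an existing destination file of the same
    // length is left alone and counted under "skipped".
    public class SkipCheck {
        static boolean shouldSkip(FileSystem dstFs, FileStatus src, Path dst)
                throws java.io.IOException {
            if (!dstFs.exists(dst)) {
                return false;   // nothing at the destination: copy it
            }
            return dstFs.getFileStatus(dst).getLen() == src.getLen();
        }
    }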

Is there a way to preempt the initial set of reduce tasks?

2008-07-16 Thread Murali Krishna
Hi, I have to run a small MR job while a bigger job is already running. The first job takes around 20 hours to finish, the second about 1 hour. The second job will be given a higher priority. The problem here is that the first set of reducers from job1 will be occupying all the slots and will
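For reference, raising the small job's priority in the old mapred API looks like the sketch below (assuming a release where JobConf.setJobPriority is available). Note the caveat this thread runs into: the stock scheduler of this era does not preempt tasks that are already running, so a higher priority only wins slots as they free up; job1's long-running reducers keep theirs.

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobPriority;

    public class SubmitSmallJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            conf.setJobName("small-job");            // illustrative name
            conf.setJobPriority(JobPriority.HIGH);   // ahead of job1 for new slots
            // ... configure mapper/reducer/paths, then JobClient.runJob(conf);
        }
    }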

DFS behavior when the disk goes bad

2008-04-08 Thread Murali Krishna
Hi, We had a bad disk issue on one of the boxes and I am seeing some strange behaviour. Just wanted to confirm whether this is expected. * We are running a small cluster with 10 data nodes and a name node * Each data node has 6 disks * While a job was running,
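The preview cuts off mid-list, but the setup described (six disks per datanode) normally means one directory per disk listed in dfs.data.dir, and in releases of this era a single failed volume typically took the whole datanode offline; per-volume failure tolerance arrived in later versions. A plain-Java sketch of the kind of per-volume check involved; the directory layout is illustrative:

    import java.io.File;

    // Sketch: verify each dfs.data.dir volume exists and is writable,
    // roughly what a datanode checks. In this era, one bad volume was
    // enough to stop the whole datanode.
    public class VolumeCheck {
        public static void main(String[] args) {
            String[] dataDirs = {                    // illustrative 6-disk layout
                "/disk1/dfs/data", "/disk2/dfs/data", "/disk3/dfs/data",
                "/disk4/dfs/data", "/disk5/dfs/data", "/disk6/dfs/data"
            };
            for (String d : dataDirs) {
                File dir = new File(d);
                boolean ok = dir.isDirectory() && dir.canWrite();
                System.out.println(d + (ok ? " : OK" : " : FAILED"));
            }
        }
    }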