Re: Does Hadoop (version 2.4.1) support symbolic links?

2014-07-14 Thread Akira AJISAKA
Hadoop 2.4.1 doesn't support symbolic links. (2014/07/14 11:34), cho ju il wrote: My Hadoop version is 2.4.1. Does HDFS (version 2.4.1) support symbolic links? How do I create symbolic links?

Block should be additionally replicated on 1 more rack(s)

2014-07-14 Thread 风雨无阻
Hi all: After configuring rack awareness on the cluster and running hadoop fsck /, a lot of the following errors occurred: Replica placement policy is violated for blk_-1267324897180563985_11130670. Block should be additionally replicated on 1 more rack(s). Online sources say the reason is that three

Re: Block should be additionally replicated on 1 more rack(s)

2014-07-14 Thread Yehia Elshater
Hi, Did you try the Hadoop rebalancer? http://hadoop.apache.org/docs/r1.0.4/hdfs_user_guide.html#Rebalancer On 14 July 2014 04:10, 风雨无阻 232341...@qq.com wrote: Hi all: After configuring rack awareness on the cluster and running hadoop fsck /, a lot of the following errors occurred: Replica
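
For reference, the rebalancer from the linked guide is invoked as below; the threshold value is an illustrative assumption, not taken from the message:

    # run the HDFS balancer; -threshold is the per-datanode utilization
    # deviation (in percent) considered balanced
    bin/hadoop balancer -threshold 10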

Re: Block should be additionally replicated on 1 more rack(s)

2014-07-14 Thread 风雨无阻
Hi, I didn't try the Hadoop rebalancer, because as I recall the rebalancer only considers disk load and doesn't consider which rack data blocks are on. I can try it. Thank you for your reply. -- Original Message -- From: Yehia Elshater; y.z.elsha...@gmail.com; Sent:
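
A workaround often suggested for this situation, as a hedged sketch (the path and replication factors below are assumptions, not from this thread): temporarily raise and then restore the replication factor so the NameNode re-places replicas according to the rack policy.

    # raise replication so a new replica is placed per the rack policy ...
    bin/hadoop fs -setrep -w 4 /path/to/affected/file
    # ... then drop back to the original factor; excess replicas are pruned
    bin/hadoop fs -setrep -w 3 /path/to/affected/file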

changing split size in Hadoop configuration

2014-07-14 Thread Jan Warchoł
Hello, I recently got a Split metadata size exceeded 1000 error when running Cascading jobs with very big joins. I found that I should change the mapreduce.jobtracker.split.metainfo.maxsize property in the Hadoop configuration by adding this to the mapred-site.xml file: <property> <!-- allow more
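
For reference, a sketch of the complete mapred-site.xml entry the truncated message appears to describe (the -1 value, which disables the size check entirely, is an assumption based on common usage):

    <property>
      <!-- allow more split metadata; -1 disables the limit -->
      <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
      <value>-1</value>
    </property>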

OIV Compatibility

2014-07-14 Thread Ashish Dobhal
Hey everyone; could anyone tell me how to use the OIV tool in Hadoop 1.0, as there is no hdfs.sh file there? Thanks.

Re: changing split size in Hadoop configuration

2014-07-14 Thread Adam Kawa
It sounds like a JobTracker setting, so a restart looks to be required. You can verify it in pseudo-distributed mode by setting it to a very low value, restarting the JT, and seeing if you get the exception that prints this new value. Sent from my iPhone On 14 Jul 2014, at 16:03, Jan Warchoł
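
A sketch of the verification Adam describes, for a pseudo-distributed MR1 setup (the script names assume a Hadoop 1.x layout; the example jar and paths are illustrative):

    # after setting mapreduce.jobtracker.split.metainfo.maxsize to e.g. 1
    # in mapred-site.xml, restart the JobTracker ...
    bin/stop-mapred.sh && bin/start-mapred.sh
    # ... then run any multi-split job and check that the failure message
    # reports the new limit
    bin/hadoop jar hadoop-examples-*.jar wordcount /input /output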

Not able to place enough replicas

2014-07-14 Thread Bogdan Raducanu
I'm getting this error while writing many files: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to place enough replicas, still in need of 4 to reach 4. I've set logging to DEBUG but still there is no reason printed. There should've been a reason after this line but
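
If the reason line is missing, one thing to check (a hedged sketch, assuming the stock log4j.properties setup) is that DEBUG is enabled for the block placement logger specifically, not just the root logger:

    # in log4j.properties on the NameNode
    log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG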

Re: OIV Compatibility

2014-07-14 Thread Harsh J
The OIV for the 1.x series is available in release 1.2.0 and higher. You can use it via the 'hadoop oiv' command. It is not available in 1.0.x. On Mon, Jul 14, 2014 at 9:49 PM, Ashish Dobhal dobhalashish...@gmail.com wrote: Hey everyone; could anyone tell me how to use the OIV tool in Hadoop 1.0
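
For reference, typical usage of the command Harsh mentions on a 1.2+ release (the fsimage path is a placeholder; -p selects the output processor):

    bin/hadoop oiv -i /path/to/fsimage -o fsimage.txt -p Indented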

Re: OIV Compatibility

2014-07-14 Thread Ashish Dobhal
Thanks, Harsh. On Mon, Jul 14, 2014 at 11:39 PM, Harsh J ha...@cloudera.com wrote: The OIV for the 1.x series is available in release 1.2.0 and higher. You can use it via the 'hadoop oiv' command. It is not available in 1.0.x. On Mon, Jul 14, 2014 at 9:49 PM, Ashish Dobhal

Re: OIV Compatibility

2014-07-14 Thread Ashish Dobhal
Could I download the fsimage of a Hadoop 1.0 cluster using wget and then interpret it in offline mode using the tool from the Hadoop 1.2 or higher distributions? I guess the structure of the fsimage would be the same for both distributions. On Mon, Jul 14, 2014 at 11:53 PM, Ashish Dobhal

Re: OIV Compatibility

2014-07-14 Thread Harsh J
Sure, you could try that. I've not tested that mix though, and OIV relies on some known format support, but it should hopefully work. On Mon, Jul 14, 2014 at 11:56 PM, Ashish Dobhal dobhalashish...@gmail.com wrote: Could I download the fsimage of a Hadoop 1.0 cluster using wget and then interpret it in

Re: OIV Compatibility

2014-07-14 Thread Ashish Dobhal
Sir, I tried it and it works. Are there any issues in downloading the fsimage using wget? On Tue, Jul 15, 2014 at 12:17 AM, Harsh J ha...@cloudera.com wrote: Sure, you could try that. I've not tested that mix though, and OIV relies on some known format support, but it should hopefully work. On

Re: OIV Compatibility

2014-07-14 Thread Harsh J
There shouldn't be any - it basically streams over the existing local fsimage file. On Tue, Jul 15, 2014 at 12:21 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Sir, I tried it and it works. Are there any issues in downloading the fsimage using wget? On Tue, Jul 15, 2014 at 12:17 AM, Harsh J
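
For completeness, the wget fetch discussed in this thread typically targets the NameNode's getimage servlet in Hadoop 1.x (the hostname is a placeholder; 50070 is the default NameNode HTTP port):

    wget -O fsimage 'http://namenode:50070/getimage?getimage=1'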

Re: clarification on HBASE functionality

2014-07-14 Thread Ted Yu
Yes. See http://hbase.apache.org/book.html#arch.hdfs On Mon, Jul 14, 2014 at 2:52 PM, Adaryl Bob Wakefield, MBA adaryl.wakefi...@hotmail.com wrote: HBase uses HDFS to store its data, correct? B.

Re: clarification on HBASE functionality

2014-07-14 Thread Adaryl Bob Wakefield, MBA
Now this is different from Cassandra, which does NOT use HDFS, correct? (Sorry. Don’t know why that needed two emails.) B. From: Ted Yu Sent: Monday, July 14, 2014 4:53 PM To: user@hadoop.apache.org Subject: Re: clarification on HBASE functionality Yes. See

Re: clarification on HBASE functionality

2014-07-14 Thread Ted Yu
Right. HBase is different from Cassandra in this regard. On Mon, Jul 14, 2014 at 2:57 PM, Adaryl Bob Wakefield, MBA adaryl.wakefi...@hotmail.com wrote: Now this is different from Cassandra, which does NOT use HDFS, correct? (Sorry. Don’t know why that needed two emails.) B. From: Ted
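
The HDFS-backed storage being confirmed here is configured via hbase.rootdir in hbase-site.xml; a minimal sketch (the hostname, port, and path are placeholders):

    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://namenode:8020/hbase</value>
    </property>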

Re: Not able to place enough replicas

2014-07-14 Thread Yanbo Liang
Maybe the user 'test' has no write privileges. You can refer to the ERROR log, e.g.: org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:test (auth:SIMPLE) 2014-07-15 2:07 GMT+08:00 Bogdan Raducanu lrd...@gmail.com: I'm getting this error while writing many
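
A sketch of the permission check Yanbo suggests (the paths and ownership below are assumptions for illustration):

    # inspect who owns the target directory and what its mode is
    bin/hadoop fs -ls /user/test
    # if needed, grant the 'test' user ownership / write access
    bin/hadoop fs -chown test:test /user/test
    bin/hadoop fs -chmod 755 /user/test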

default 8 mappers per host?

2014-07-14 Thread Sisu Xi
Hi, all: I configured a Hadoop cluster with 9 hosts, each with 2 VCPUs and 4 GB RAM. I noticed that when I run the example pi program, all hosts are busy only when I configure it with at least 8*9=72 mappers. Which means there is a default of 8 mappers per host? How is this value decided? And where
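
One plausible explanation, as a hedged sketch (assuming Hadoop 2.x with YARN; the values below are illustrative assumptions, not taken from the message): the number of concurrent mappers per host is roughly yarn.nodemanager.resource.memory-mb divided by mapreduce.map.memory.mb, e.g. 8192 / 1024 = 8.

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1024</value>
    </property>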