Re: while doing mapreduce with hbase the following error came even

2013-01-07 Thread Harsh J
Seems to be an issue with your web application and not HBase. Reading over http://stackoverflow.com/questions/1858463/java-error-only-a-type-can-be-imported-xyz-resolves-to-a-package may help you. On Mon, Jan 7, 2013 at 12:12 PM, gopi.l hbigdata.g...@gmail.com wrote: An error occurred at line: 6

Re: datanodes not sending report

2013-01-07 Thread prem yadav
Sorry, I should have sent this to the Hadoop list. We have resolved the issue. The problem was: earlier, Hadoop was picking up dfs.tmp.dir/dfs/data as the dfs dir. Later, when we specified the dfs.data.dir property in the config, Hadoop did not append /dfs/data to the path and the datanode was
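A minimal sketch of the behavior being described, not from the thread and assuming Hadoop 1.x-era property names with hdfs-site.xml on the classpath: an explicit dfs.data.dir is used verbatim, while the default is derived from the tmp dir with /dfs/data appended.

    // Sketch: print the effective DataNode storage directory for a given config.
    import org.apache.hadoop.conf.Configuration;

    public class DataDirCheck {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.addResource("hdfs-site.xml");
            // If dfs.data.dir is unset, the default mirrors <tmp dir>/dfs/data;
            // an explicit value is taken as-is, nothing is appended.
            String dataDir = conf.get("dfs.data.dir",
                    conf.get("hadoop.tmp.dir") + "/dfs/data");
            System.out.println("DataNode storage dir(s): " + dataDir);
        }
    }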

Re: One weird problem of my MR job upon hbase table.

2013-01-07 Thread Doug Meil
Hi there. The HBase RefGuide has a comprehensive case study on a similar situation. This might not be the exact problem, but the diagnostic approach should help. http://hbase.apache.org/book.html#casestudies.slownode On 1/4/13 10:37 PM, Liu, Raymond raymond@intel.com wrote: Hi, I encounter

RE: HBase - Secondary Index

2013-01-07 Thread Anoop Sam John
Hi, it is an inverted index based on column value(s). The indexing is done region-wise. It can work whether or not someone knows the rowkey range. -Anoop- From: Mohit Anchlia [mohitanch...@gmail.com] Sent: Monday, January 07, 2013 9:47 AM To:
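For readers following along, a minimal sketch of the general inverted-index idea (not Anoop's implementation; table, family, and column names are hypothetical): the index row key is built from the column value plus the original row key, so a prefix scan on a value yields the matching user-table rows.

    // Sketch on the 0.92/0.94-era client API: write a data row and its index row.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SecondaryIndexSketch {
        public static void main(String[] args) throws Exception {
            HTable userTable = new HTable(HBaseConfiguration.create(), "user");
            HTable indexTable = new HTable(HBaseConfiguration.create(), "user_idx");

            byte[] rowKey = Bytes.toBytes("row-0001");
            byte[] city = Bytes.toBytes("Bangalore");

            // The data row in the user table.
            Put data = new Put(rowKey);
            data.add(Bytes.toBytes("info"), Bytes.toBytes("city"), city);
            userTable.put(data);

            // Index row: value + separator + original rowkey, so a prefix scan
            // on the value returns every matching user rowkey.
            Put index = new Put(Bytes.add(city, Bytes.toBytes("|"), rowKey));
            index.add(Bytes.toBytes("d"), Bytes.toBytes("rk"), rowKey);
            indexTable.put(index);

            userTable.close();
            indexTable.close();
        }
    }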

long running query

2013-01-07 Thread Nurettin Şimşek
Hi, I'm running a query on the cluster and the result list can be too large. When running the query (for example on node 5), node 5 stopped. I looked at the log files and got this error message: java.lang.OutOfMemoryError: java heap space -XX:OnOutOfMemoryError=Kill -9 %p Executing /bin/sh -c kill -9 28321
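As a minimal sketch (not from the thread; table and family names are hypothetical), the usual first mitigation for a query whose result set is too large is to stream it with bounded scanner caching and batching instead of materializing the whole list.

    // Sketch: bound how much of a large result set is held in memory per RPC.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BoundedScan {
        public static void main(String[] args) throws Exception {
            HTable table = new HTable(HBaseConfiguration.create(), "mytable");
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("cf"));
            scan.setCaching(100);  // rows fetched per RPC instead of the whole result
            scan.setBatch(50);     // cap the number of columns returned per Result
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result r : scanner) {
                    // process one row at a time rather than collecting them in a list
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }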

Re: long running query

2013-01-07 Thread Devaraj Das
Hey Nurettin, it would be good if you could give us some details on the configuration. What is the heap size of the region server set to? Devaraj On Jan 7, 2013 7:39 AM, Nurettin Şimşek nurettinsim...@gmail.com wrote: Hi, I'm running a query on the cluster and the result list can be too large. When

Re: One weird problem of my MR job upon hbase table.

2013-01-07 Thread Michael Segel
Where did he mention that he was attempting to bond the ports? Sorry if I missed it. On Jan 7, 2013, at 7:37 AM, Doug Meil doug.m...@explorysmedical.com wrote: Hi there. The HBase RefGuide has a comprehensive case study on a similar situation. This might not be the exact problem, but the diagnostic

Re: Tune MapReduce over HBase to insert data

2013-01-07 Thread Ted Yu
Have you read through http://hbase.apache.org/book.html#performance? What version of HBase are you using? Cheers On Mon, Jan 7, 2013 at 9:05 PM, Farrokh Shahriari mohandes.zebeleh...@gmail.com wrote: Hi there, I have a cluster with 12 nodes, each of which has 2 CPU cores. Now, I want

Re: Tune MapReduce over HBase to insert data

2013-01-07 Thread Farrokh Shahriari
Hi again, I'm using HBase 0.92.1-cdh4.0.0. I have two server machines, each with 48 GB RAM and 12 physical cores (24 logical cores), which host 12 nodes (6 nodes on each server). Each node has 8 GB RAM and 2 vCPUs. I've set some parameters that give better results, like setting WAL=off on put, but some parameters like
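A minimal sketch of what the client side of such an insert job might look like on the 0.92.x API, with the WAL=off setting mentioned above plus client-side write buffering; table, family, and row-key names are hypothetical, and skipping the WAL trades durability for speed.

    // Sketch: batched puts with the WAL disabled and a larger client write buffer.
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BulkPut {
        public static void main(String[] args) throws Exception {
            HTable table = new HTable(HBaseConfiguration.create(), "mytable");
            table.setAutoFlush(false);                 // batch puts on the client side
            table.setWriteBufferSize(8 * 1024 * 1024); // 8 MB write buffer

            List<Put> batch = new ArrayList<Put>();
            for (int i = 0; i < 10000; i++) {
                Put p = new Put(Bytes.toBytes(String.format("row-%08d", i)));
                p.setWriteToWAL(false);                // "WAL=off", as in the thread
                p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
                batch.add(p);
            }
            table.put(batch);
            table.flushCommits();                      // push the buffered puts
            table.close();
        }
    }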

Re: Tune MapReduce over HBase to insert data

2013-01-07 Thread Ted Yu
Have you tuned the JVM parameters of HBase? If you have Ganglia, did you observe high variation in network latency on the 6 nodes? HBase 0.92.2 has been released. Do you plan to upgrade to 0.92.2 or 0.94.3? Cheers On Mon, Jan 7, 2013 at 9:38 PM, Farrokh Shahriari

Re: Tune MapReduce over HBase to insert data

2013-01-07 Thread Ted Yu
Please take a look at http://hbase.apache.org/book.html#jvm. Section 12.2.3, "JVM Garbage Collection Logs" (http://hbase.apache.org/book.html#trouble.log.gc), should be read as well. There is a more recent effort to reduce GC activity, namely HBASE-7404 "Bucket Cache: A solution about CMS, Heap Fragment
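As a quick aside (not from the thread): before digging into the GC logs described in that section, collector counts and cumulative pause times can also be read over JMX with the standard GarbageCollectorMXBean, which gives a first impression of how much time a JVM such as a region server spends in GC.

    // Sketch: print per-collector counts and total collection time for this JVM.
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }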