Hi,
I was running the AggregationProtocol coprocessor and got a socket
timeout exception. Somebody said to set hbase.rpc.timeout to a larger
value.
I added the following in hbase.site.xml
<property>
  <name>hbase.rpc.timeout</name>
  <value>300</value>
</property>
but it was not working. The
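Two notes on the snippet above: the config file is conventionally named hbase-site.xml (with a hyphen, not hbase.site.xml), and hbase.rpc.timeout is expressed in milliseconds, so 300 is far below the default of 60000 (60 seconds) and would actually make timeouts worse. A sketch of what was presumably intended, using 120000 (two minutes) as an assumed example value:

```xml
<!-- hbase-site.xml: hbase.rpc.timeout is in milliseconds; 120000 is an assumed example -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
```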
Thanks for the responses. I am using 0.90.4-cdh3. I exported the table
using the HBase exporter. Yes, the previous table still exists, but on a
different cluster. My region servers are large, close to 12GB in size.
I want to understand more about HFiles. We export the table as a series of
HFiles and
You must write your own Ruby script to support commands like max and min
after deploying your endpoint.
On Fri, Mar 30, 2012 at 5:29 PM, NNever nnever...@gmail.com wrote:
Can I call an endpoint from the HBase shell? And how?
Thanks~
--
Best wishes!
My Friend~
On 03/30/2012 04:54 AM, Rita wrote:
Thanks for the responses. I am using 0.90.4-cdh3. I exported the table
using the HBase exporter. Yes, the previous table still exists, but on a
different cluster. My region servers are large, close to 12GB in size.
What is the total number of your regions?
I
On Thu, Mar 29, 2012 at 5:07 PM, Juhani Connolly juha...@gmail.com wrote:
Mind posting some regionserver logs while it's under load?
Attached. This is while running 100 threads making autoFlushed writes.
The throughput on this regionserver is about 1,000 writes per second.
They did not come through, Juhani.
The timeout is on the client side, not the server side.
J-D
On Fri, Mar 30, 2012 at 12:11 AM, Balaji k balaji.kan...@gmail.com wrote:
Hi,
I was running the AggregationProtocol coprocessor and got a socket
timeout exception. Somebody said to set hbase.rpc.timeout to a larger
value.
I added
Just as a quick reminder regarding what Todd mentioned, that's exactly
what was happening in this case study...
http://hbase.apache.org/book.html#casestudies.slownode
... although it doesn't appear to be the problem in this particular
situation.
On 3/29/12 8:22 PM, Juhani Connolly
Hello,
On Fri, Mar 30, 2012 at 07:17:27PM +0200, Roberto Alonso wrote:
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
</property>
When I put this in and start the tasktracker:
sudo service hadoop-0.20-tasktracker start
The log
Hi All,
I am using cdh3u2. I ran HBase bulk loading with the property
mapred.reduce.tasks.speculative.execution set to false in
mapred-site.xml. Still, I can see 6 killed tasks in the bulk loading job, and
after a short analysis I realized that these tasks were killed because another
worker node completed the
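For reference, the setting described above would appear in mapred-site.xml like this (both the map and reduce variants exist under the MR1-era property names used by cdh3):

```xml
<!-- mapred-site.xml (MR1 / cdh3 property names) -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```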
This is a client-side configuration so if your mapred-site.xml is
_not_ on your classpath when you start the bulk load, it's not going
to pick it up. So either have that file on your classpath, or put it
in whatever other configuration file you have.
J-D
On Fri, Mar 30, 2012 at 2:52 PM, anil
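To make the classpath point concrete, a minimal sketch, assuming the cluster configuration lives in /etc/hadoop/conf (a typical but assumed location for a cdh3 install):

```shell
# Put the directory containing mapred-site.xml on the client classpath
# before launching the bulk load. "/etc/hadoop/conf" is an assumed path.
export HADOOP_CLASSPATH="/etc/hadoop/conf:${HADOOP_CLASSPATH:-}"
echo "$HADOOP_CLASSPATH"
```

This only affects the client JVM that submits the job, which is the point J-D is making above.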
Well that's not an HBase configuration, that's Hadoop. I'm not sure if
this is listed anywhere, maybe in the book.
BTW usually HBase has "client" somewhere in the name to indicate
it's client-side.
J-D
On Fri, Mar 30, 2012 at 3:08 PM, anil gupta anilg...@buffalo.edu wrote:
Thanks for the quick
Speculative execution is on by default in Hadoop. One of the performance
recommendations in the HBase RefGuide is to turn it off.
On 3/30/12 6:12 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Well that's not an HBase configuration, that's Hadoop. I'm not sure if
this is listed
Hi Doug,
Yes, that's why I had set that property to false in my mapred-site.xml.
But, to my surprise, I didn't know that setting that property would be
useless for Hadoop jobs unless mapred-site.xml is on the classpath. The
idea of a client-side property is a little confusing to me at present since
Hi All,
I am using cdh3u2 and I have 7 worker nodes (VMs spread across two
machines) which are running a Datanode, Tasktracker, and Region Server (1200
MB heap size). I was loading data into HBase using the bulk loader with a
custom mapper. I was loading around 34 million records and I have loaded
the
Anil,
Can you please attach the RS logs from the failure?
On Fri, Mar 30, 2012 at 7:05 PM, anil gupta anilg...@buffalo.edu wrote:
Hi All,
I am using cdh3u2 and I have 7 worker nodes (VMs spread across two
machines) which are running a Datanode, Tasktracker, and Region Server (1200
MB heap
Hi Kevin,
Here is a Dropbox link to the log file of the region server which failed:
http://dl.dropbox.com/u/64149128/hbase-hbase-regionserver-ihub-dn-b1.out
IMHO, the problem starts at line #3009, which says: 12/03/30 15:38:32
FATAL regionserver.HRegionServer: ABORTING region server
Thanks, I'll try.
2012/3/30 shixing paradise...@gmail.com
You must write your own Ruby script to support commands like max and min
after deploying your endpoint.
On Fri, Mar 30, 2012 at 5:29 PM, NNever nnever...@gmail.com wrote:
Can I call an endpoint from the HBase shell? And how?
Thanks~
--
Anil,
You can also disable speculative execution on a per-job basis. See
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapreduce/Job.html#setMapSpeculativeExecution(boolean)
(Which is why it is called a client-side property: it applies
per job.)
If HBase strongly
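Besides the Job API linked above, the per-job switch can also be passed on the command line when the job goes through ToolRunner/GenericOptionsParser. A sketch; "bulkload.jar" and "MyBulkLoad" are hypothetical placeholders, and the property names are the MR1-era ones matching cdh3:

```shell
# Generic options that disable speculative execution for a single job.
# "bulkload.jar" and "MyBulkLoad" are hypothetical placeholders.
SPEC_OPTS="-Dmapred.map.tasks.speculative.execution=false \
-Dmapred.reduce.tasks.speculative.execution=false"
echo hadoop jar bulkload.jar MyBulkLoad $SPEC_OPTS input output
```

This avoids touching mapred-site.xml at all, which sidesteps the classpath issue discussed earlier in the thread.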