In my traces, I did not see any traces from the client to either the master or
the regionserver. I have tried both deployments, pseudo-distributed on one
machine, and fully distributed on three machines (one client, one as HMaster
and ZK, and one as regionserver). It only shows the following four spans in
my
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.solr/solr-core/4.7.1/org/apache/solr/update/processor/StatelessScriptUpdateProcessorFactory.java#StatelessScriptUpdateProcessorFactory.ScriptUpdateProcessor.invokeFunction%28java.lang.String%2Cjava.lang.Object%5B%5D%29
it looks better
I couldn't decide whether it is an HBase question or a Hadoop/YARN one.
In the utility class for MR jobs integrated with HBase,
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil,
in the method:
public static void initTableReducerJob(String table,
    Class&lt;? extends TableReducer&gt;
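For reference, a minimal sketch of how that method is typically wired up, assuming the 0.94/0.96-era API; the table name "mytable" and the reducer class here are hypothetical, not from the original post:

```java
// Hedged sketch of TableMapReduceUtil.initTableReducerJob usage.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ExampleJob {
  // A reducer that sums the values for a key and writes one Put per key.
  static class ExampleReducer
      extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws java.io.IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      Put put = new Put(Bytes.toBytes(key.toString()));
      put.add(Bytes.toBytes("f"), Bytes.toBytes("sum"), Bytes.toBytes(sum));
      ctx.write(null, put);  // row key is taken from the Put itself
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "example");
    // Wires the output table, the reducer class, and the job together,
    // and sets the output format to TableOutputFormat under the hood.
    TableMapReduceUtil.initTableReducerJob("mytable", ExampleReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```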
Hi
We are running HBase 0.94.6 (CDH 4.4) and have a problem with one table not
being assigned to any region. This is the SYSTEM.TABLE in Phoenix, so all
tables are basically non-functional at the moment.
When running hbck repair we get the following...
ERROR: Region { meta =
It seems that the region servers are complaining about wrong phoenix
classes for some reason. We are running 2.2.0 which is the version before
phoenix was moved to apache.
But the regionserver logs are stuck complaining about
org.apache.phoenix.coprocessor.MetaDataEndpointImpl which IS
Hi,
We are running ZK 3.3.4, Cloudera cdh3u3, HBase 0.94.16.
The ZK version is quite old. I can see that ClientCnxn only catches IOException,
and when there is an OOME it will exit the SendThread.
I think that's the reason for the client hanging. A client-side thread dump will
help us to see the liveliness of
Adding Phoenix dev@
On Thu, Aug 14, 2014 at 8:05 AM, Kristoffer Sjögren sto...@gmail.com
wrote:
It seems that the region servers are complaining about wrong phoenix
classes for some reason. We are running 2.2.0 which is the version before
phoenix was moved to apache.
But looking at the
The client-side thread dump in here:
http://pastebin.com/xU4MSq9k
SendThread appears to be active.
-Original Message-
From: Rakesh R [mailto:rake...@huawei.com]
Sent: Thursday, August 14, 2014 7:01 AM
To: d...@zookeeper.apache.org; user@hbase.apache.org
Subject: RE: HBase client hangs
Hi,
We are running CDH 4.5, 6 nodes with hbase cluster (0.94.6).
We configured major compaction to run once a week. During this time we are
getting timeouts while writing to hbase (application level).
When investigating further I noticed that the timeouts are being
caused by GC Stop
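For context, region-server GC flags usually live in hbase-env.sh; a commonly cited CMS starting point from that era looks roughly like this (all values here are illustrative, not this cluster's actual settings):

```shell
# Illustrative CMS-era GC settings for a region server; heap sizes,
# occupancy fraction, and log path are hypothetical placeholders.
export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g -Xmn256m \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
```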
Can you send your JVM command line arguments (specifically how you tune the GC)?
-- Lars
From: yanivG yaniv.yancov...@gmail.com
To: user@hbase.apache.org
Sent: Thursday, August 14, 2014 9:01 AM
Subject: GC peaks during major compaction
Hi,
We are running CDH
Also, can you please provide details on the nodes' configuration? What's the
heap size?
Thanks,
JM
2014-08-14 13:17 GMT-04:00 lars hofhansl la...@apache.org:
Can you send your JVM command line arguments (specifically how you tune the GC)?
-- Lars
From:
Hi there
I have a use case where I need to do a read to check if an HBase entry
is present, then do a put to create the entry when it is not there.
I have a script to get a list of rowkeys from Hive and put them in an
HDFS directory. Then I have an MR job that reads the rowkeys and does
batch
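A hedged sketch of how the read-then-put sequence can be collapsed into a single atomic call, assuming the 0.94/0.96 HTable API; the table, family, and qualifier names are hypothetical:

```java
// Hedged sketch: checkAndPut is atomic on the region server, so there
// is no race window between the existence check and the write.
// Passing null as the expected value means "apply the Put only if the
// cell does not exist yet".
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateIfAbsent {
  public static boolean createIfAbsent(HTable table, String rowKey)
      throws java.io.IOException {
    byte[] row = Bytes.toBytes(rowKey);
    byte[] fam = Bytes.toBytes("f");   // hypothetical column family
    byte[] qual = Bytes.toBytes("q");  // hypothetical qualifier
    Put put = new Put(row);
    put.add(fam, qual, Bytes.toBytes("created"));
    // Returns true if the Put was applied, false if the cell existed.
    return table.checkAndPut(row, fam, qual, null, put);
  }
}
```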
Hi Ted,
I've seen this kind of client hang a few times when the underlying
environment is under heavy swapping and with older versions of ZK, as Rakesh
mentioned, and also when hbase.client.pause is set to 0. Do you know if your
environment is experiencing similar behavior with heavy IO due to swapping?
Hello Thomas,
What version of HBase are you using? Sorting and grouping based on the
regions the rows belong to is going to help for sure. I don't think you should
focus too much on the locality side of the problem unless your HDFS input set
is too large (100s or 1000s of MBs per task); otherwise it might
Hi Esteban,
Thanks for sharing ideas.
We are on HBase 0.96 and Java 1.6. I have enabled short-circuit reads,
and the heap size is around 16G for each region server. We have about 20
of them.
The list of rowkeys that I need to process is about 10M. I am using
batch gets already and the batch size is
Thomas:
Have you set tcpnodelay to true?
See http://hbase.apache.org/book.html for an explanation of
hbase.ipc.client.tcpnodelay
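For reference, the setting goes in the client-side hbase-site.xml; a minimal fragment might look like this (assuming the property name as documented in the book):

```xml
<!-- Client-side hbase-site.xml: disable Nagle's algorithm on RPCs -->
<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>
```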
Cheers
On Thu, Aug 14, 2014 at 11:41 AM, Thomas Kwan thomas.k...@manage.com
wrote:
Hi Esteban,
Thanks for sharing ideas.
We are on Hbase 0.96 and java 1.6. I
Hello Esteban-
At the time of the ZK connection problems the client had an OOM event. However,
the client machine overall was in fine shape looking at Ganglia reports; it
certainly wasn't swapping or spending significant cycles on I/O wait.
Similarly, our ZooKeeper server was real chilled as
I'm not aware of this specific experiment. You might have a look at our
HeapSize interface and its implementations for things like HFileBlock.
On Tue, Aug 12, 2014 at 11:05 PM, abhishek1015 abhishek1...@gmail.com
wrote:
Hello everyone,
I am wondering if someone has experimentally
Hi,
Could you help me find a guideline or recommendation for standing up a
4-node HBase cluster?
I have read the HBase in Action book, and it recommends not having fewer
than 10 nodes in a production cluster. However, due to budget constraints,
we would like to begin with a small cluster and
This is a fine role assignment if HA is not required. For true HBase HA
you'll need at least HA namenode, multiple HBase masters, and a zookeeper
quorum.
On Thursday, August 14, 2014, Dongsu Lee dongsulee2...@gmail.com wrote:
Hi,
Could you help me to find a guideline or recommendation for
Hello All-
It sounds like upgrading our ZooKeeper client would be a good idea. Can anyone
provide some guidelines on the compatibility of HBase 0.94.16 with ZK 3.4.x? How
about compatibility of ZK client 3.4.x with ZK server 3.3.4? I've read a few
contradictory things about ZK client/server
Hello Ted,
ZooKeeper 3.4.5 is the recommended release to use with HBase 0.94.x.
Regarding compatibility across ZooKeeper releases, I don't think there is
any issue, but the ZK devs might be able to confirm.
cheers,
esteban.
--
Cloudera, Inc.
On Thu, Aug 14, 2014 at 3:19 PM, Ted Tuttle
Thanks, Nick!
On Thu, Aug 14, 2014 at 2:50 PM, Nick Dimiduk ndimi...@gmail.com wrote:
This is a fine role assignment if HA is not required. For true HBase HA
you'll need at least HA namenode, multiple HBase masters, and a zookeeper
quorum.
On Thursday, August 14, 2014, Dongsu Lee
On the first connection to the cluster when you've installed Phoenix
2.2.3 and were previously using Phoenix 2.2.2, Phoenix will upgrade
your Phoenix tables to use the new coprocessor names
(org.apache.phoenix.*) instead of the old coprocessor names
(com.salesforce.phoenix.*).
Thanks,
James
On
This is an interesting case. Since the SendThread is running fine, it caught
the error and called cleanup() correctly? So it looks like the packet is neither
in the outgoing queue nor in the pending queue? Perhaps 3.4.5 might have the
issue as well... a timeout could help this?
On Fri, Aug 15, 2014 at 6:30 AM,
Sometimes our users want to upgrade their servers or move to a new
datacenter, and then we have to migrate the data from HBase. Currently we
enable replication from the old cluster to the new cluster, and run
CopyTable to move the older data.
It's a little inefficient; it takes more than one day
What version of HBase? How are you running CopyTable? A day for 1.8T is not
what we would expect.
You can definitely take a snapshot and then export the snapshot to another
cluster, which will move the actual files; but CopyTable should not be so slow.
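The snapshot route could be sketched roughly as follows; the table, snapshot, and destination cluster names are hypothetical, and this needs to run against a live source cluster:

```shell
# Hedged sketch of snapshot-based migration. Take the snapshot first:
echo "snapshot 'mytable', 'mytable-snap'" | hbase shell

# ExportSnapshot copies the HFiles directly via a MapReduce job,
# avoiding the per-row RPCs that make CopyTable slow.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot mytable-snap \
  -copy-to hdfs://new-cluster:8020/hbase \
  -mappers 16
```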
-- Lars
Say I have 100 split files on 10 region servers, and I did a major compaction.
Will these split files be distributed like this:
reg1: [splits 1,2,..,10]
reg2: [splits 11,12,...,20]
...
Or like this:
reg1: [splits: 1, 11, 21, ... , 91]
reg2: [splits: 2, 12, 22, ... , 92]
...
And if I want to