On Wed, Sep 24, 2014 at 10:11 AM, tobe tobeg3oo...@gmail.com wrote:
It seems to be related to replication. When I disable the replication from the other clusters, the problem doesn't occur again. And I print the DEBUG
, 2014 at 7:41 PM, tobe tobeg3oo...@gmail.com wrote:
@qiang I have read about this issue. Is it
https://issues.apache.org/jira/browse/HBASE-4495?
I looked deep into the code and can't find the reason for this. Any suggestion is welcome.
On Mon, Sep 22, 2014 at 6:22 PM, Qiang Tian tian
You can `jstack` to find out the thread which is hanging.
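To make the `jstack` suggestion concrete: a JVM (such as Tomcat) can only exit once every non-daemon thread has finished, so the hanging thread is usually a non-daemon one in the dump. A minimal sketch of filtering for those; the dump below is made up for illustration (on a real box you would pipe in `jstack <tomcat-pid>` instead):

```shell
# Non-daemon threads are what keep a JVM (e.g. Tomcat) alive at shutdown.
# The sample dump below is fabricated for illustration; on a real system
# feed in the output of `jstack <tomcat-pid>` instead.
cat <<'EOF' > /tmp/sample_jstack.txt
"hconnection-shared-pool1-t1" daemon prio=10 tid=0x1 nid=0x2 waiting on condition
"hbase-client-zk-event-thread" prio=10 tid=0x3 nid=0x4 in Object.wait()
"main" prio=10 tid=0x5 nid=0x6 runnable
EOF
# Thread header lines start with a quoted name; daemon threads carry the
# word "daemon" before "prio=". Print only the non-daemon thread names.
grep '^"' /tmp/sample_jstack.txt | grep -v ' daemon ' | cut -d'"' -f2
```

Any non-daemon thread left over from hbase-client (for instance an unclosed HConnection's worker) would show up in that list at shutdown time.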
On Tue, Sep 23, 2014 at 11:33 PM, Mingtao Zhang mail2ming...@gmail.com
wrote:
Hi,
I have the hbase-client jar inside a Tomcat instance doing some work. People in my group report that the Tomcat shutdown hangs in very rare cases (not possible
org.apache.hadoop.ipc.SecureRpcEngine: Call:
close 0
On Tue, Sep 23, 2014 at 2:27 PM, tobe tobeg3oo...@gmail.com wrote:
I mean I have no idea why the RegionServer has so many log entries about establishing and closing sessions. I don't think we have that many clients every second. This happens when
has a jira to remove catalog tracker (forgot the jira number...)
On Sat, Sep 20, 2014 at 7:38 PM, tobe tobeg3oo...@gmail.com wrote:
Here's the detailed log, and you can see it establishes a few sessions every second. It doesn't happen in other clusters, but you can see the similar log
From: tobe tobeg3oo...@gmail.com
To: user@hbase.apache.org user@hbase.apache.org
Sent: Thursday, September 18, 2014 1:50 AM
Subject: HBase establishes session with ZooKeeper and closes the session immediately
I have found that our RegionServers connect to the ZooKeeper
I have seen a similar log in someone's blog, and it's based on 0.94.20. The CatalogTracker seems to be initialized many times.
watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@69d892a1
On Thu, Sep 18, 2014 at 4:50 PM, tobe tobeg3oo
I have found that our RegionServers connect to the ZooKeeper frequently.
They seem to constantly establish a session, close it and reconnect to ZooKeeper. Here is the log for both server and client sides. I have no idea why this happens or how to deal with it. We're using HBase 0.94.11 and
:64128/master-status
Looks like curl doesn't interpret the redirection.
Cheers
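One detail worth noting: curl's -L/--location only follows HTTP 3xx Location redirects, while the standalone web UI here answers 200 with an HTML META refresh, which curl never follows. A small sketch of extracting the refresh target from the body and building the follow-up request by hand (the port 63235 is the ephemeral one seen in this thread):

```shell
# curl -L follows HTTP Location redirects, not HTML META refresh, so the
# refresh target has to be pulled out of the body manually.
# (127.0.0.1:63235 is the ephemeral port from this thread; adjust to yours.)
body='<meta HTTP-EQUIV="REFRESH" content="0;url=/master-status/">'
path=$(printf '%s\n' "$body" | sed -n 's/.*url=\([^">]*\).*/\1/p')
echo "$path"                                # /master-status/
echo "curl http://127.0.0.1:63235${path}"   # the follow-up request to run
```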
On Wed, Sep 3, 2014 at 8:20 PM, tobe tobeg3oo...@gmail.com wrote:
Now I understand the port of Jetty. But why doesn't port 16010 work?
Should I get meta HTTP-EQUIV=REFRESH content=0;url
It's a little weird when I run the standalone HBase cluster from trunk. I notice that the default RegionServer info port is not 16010. And when I explicitly set hbase.regionserver.info.port, it doesn't work. It changes every time I run it.
and it doesn't have this problem.
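If the goal is to pin the web UI ports, the usual route is hbase.master.info.port and hbase.regionserver.info.port in conf/hbase-site.xml; note that local/standalone mode may still randomize ports to avoid clashes between the processes it colocates. A sketch of the fragment, written to /tmp here purely for illustration:

```shell
# Sketch: pin the web UI ports via hbase-site.xml. The property names are
# the standard HBase ones; /tmp is used instead of conf/ for illustration.
# Standalone (local) mode may still override these with random ports.
cat > /tmp/hbase-site-fragment.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
  <property>
    <name>hbase.regionserver.info.port</name>
    <value>16030</value>
  </property>
</configuration>
EOF
grep -c '<property>' /tmp/hbase-site-fragment.xml   # 2
```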
On Wed, Sep 3, 2014 at 10:54 PM, Ted Yu yuzhih...@gmail.com wrote:
Which release are you running ?
If you're running trunk build, from HConstants.java :
public static final int DEFAULT_MASTER_INFOPORT = 16010;
Cheers
On Wed, Sep 3, 2014 at 7:03 AM, tobe
hbase.cluster.distributed is false
Is this what you saw ?
On Wed, Sep 3, 2014 at 5:52 PM, tobe tobeg3oo...@gmail.com wrote:
I cloned the latest code from git://git.apache.org/hbase.git and ran `mvn clean package -DskipTests` to compile. I didn't change anything, and ran `./bin/hbase master start`. I have
3, 2014, at 6:13 PM, tobe tobeg3oo...@gmail.com wrote:
That's weird. I get nothing with `lsof -n -i4TCP:16010`.
root@emacscode:/opt/hbase/bin# lsof -n -i4TCP:16010
root@emacscode:/opt/hbase/bin# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address
'curl 127.0.0.1:63235' would return:
meta HTTP-EQUIV=REFRESH content=0;url=/master-status/
Hope this helps.
On Wed, Sep 3, 2014 at 7:38 PM, tobe tobeg3oo...@gmail.com wrote:
Thanks @ted. But I can only get HMaster info from port 57944. Could you print out all the ports that standalone
Please find attached compile.log for the logs of the command `mvn clean package -DskipTests`, which clearly says "Building HBase 2.0.0-SNAPSHOT". Let me know if I am wrong.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Fri, Aug 22, 2014 at 2:20 PM, tobe tobeg3oo
# directory which is created after extracting hbase-2.0.0-SNAPSHOT-bin.tar.gz
HBASE_HOME=/usr/local/hbase/hbase-2.0.0-SNAPSHOT/
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Tue, Sep 2, 2014 at 2:35 PM, tobe tobeg3oo...@gmail.com wrote:
The default configuration should work well. Check
Looking at the source code, it seems like hbase.bucketcache.ioengine must be set to one of these: file:, offheap, heap. It might help in debugging the issue.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Tue, Sep 2, 2014 at 3:45 PM, @Sanjiv Singh sanjiv.is...@gmail.com
wrote:
Hi Tobe
hansi.kl...@web.de wrote:
Hi tobe,
Yes, we are replicating during the verify. So as I understand it, the problem is that during the verify job a key is updated (with a new timestamp) while that key is being verified. So on one side the key's timestamp is in the verify timerange, but on the other side
It's a bug in VerifyReplication; please refer to HBASE-10153 https://issues.apache.org/jira/browse/HBASE-10153. It needs someone to review and fix it.
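Since the race is with rows updated while they are being verified, one workaround is to bound the job with an explicit --starttime/--stoptime window that ends safely in the past, so in-flight updates fall outside the compared timerange. A sketch that only echoes the command (the peer id "1" and table "usertable" are made-up placeholders):

```shell
# Sketch: bound the verify job to a window that ends safely in the past,
# so rows updated during the job fall outside the compared timerange.
# Peer id "1" and table "usertable" are made up; the job is only echoed here.
stoptime_ms=$((($(date +%s) - 3600) * 1000))   # one hour ago, in ms
starttime_ms=$((stoptime_ms - 86400000))       # a 24h window before that
echo hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication \
  --starttime="$starttime_ms" --stoptime="$stoptime_ms" 1 usertable
```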
On Tue, Aug 26, 2014 at 4:10 PM, Hansi Klose hansi.kl...@web.de wrote:
Hi,
I periodically run verify jobs on our HBase tables. On some
On Tue, Aug 26, 2014 at 6:25 PM, Hansi Klose hansi.kl...@web.de wrote:
Hi Tobe,
OK, this is why there are so many. But this means that in a job with BADROWS there is at least one row missing on one side at that time, right?
Regards Hansi
Sent: Tuesday, 26 August 2014 at 10:20
optimizations around this: we skip reading early if the timestamps of what we're reading are not in the scan range. So we don't know if there is a newer value.
What's the use case you're looking at?
Nicolas
On Tue, Aug 26, 2014 at 3:36 AM, tobe tobeg3oo...@gmail.com wrote
, 2014 at 7:56 PM, tobe tobeg3oo...@gmail.com wrote:
Thanks @nicolas, @andrew and @lars. Problems like this always come down to "by design". It depends on the semantics that HBase provides. As a user, I don't expect different results when I send the same request at the same time. I don't care
We have deployed a large HBase cluster in our production environment. Sometimes a few region servers crash, and we have supervisord to bring them up. Occasionally it shows "address already in use" because the operating system doesn't release the port immediately.
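That "address already in use" right after a crash is typically the old listener's connections sitting in TIME_WAIT. A quick way to confirm is to count socket states; since live output varies, this sketch counts states from a canned netstat-style sample (on a real box, replace the heredoc with `ss -tan` or `netstat -ant` output):

```shell
# Sketch: after a crash, old connections linger in TIME_WAIT for a while,
# and a fast restart of the RegionServer can hit "Address already in use".
# Count states from a fabricated netstat-style sample; on a real box pipe
# in `ss -tan` or `netstat -ant` instead.
cat <<'EOF' > /tmp/sample_netstat.txt
tcp 0 0 10.0.0.1:60020 10.0.0.2:41712 TIME_WAIT
tcp 0 0 10.0.0.1:60020 10.0.0.3:55100 TIME_WAIT
tcp 0 0 10.0.0.1:60020 10.0.0.4:33208 ESTABLISHED
EOF
# Tally the last field (the TCP state) of each line.
awk '{count[$NF]++} END {for (s in count) print s, count[s]}' /tmp/sample_netstat.txt
```

A server that binds its listening socket with SO_REUSEADDR avoids this particular restart failure, which is why supervised restarts usually succeed on the second or third attempt once TIME_WAIT drains.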
Recently I read something about
So far, I have found two problems with this. Firstly, HBASE-11675 https://issues.apache.org/jira/browse/HBASE-11675. It's a little tricky and rarely happens. But it asks users to be careful about compaction, which occurs on the server side. They may get different results before and after the major
to consider mvcc and others, which seems more complex.
On Mon, Aug 25, 2014 at 5:54 PM, tobe tobeg3oo...@gmail.com wrote:
So far, I have found two problems with this. Firstly, HBASE-11675 https://issues.apache.org/jira/browse/HBASE-11675. It's a little tricky and rarely happens. But it asks
when KEEP_DELETED_CELLS is
enabled for the column families.
From: tobe tobeg3oo...@gmail.com
To: hbase-dev d...@hbase.apache.org
Cc: user@hbase.apache.org user@hbase.apache.org
Sent: Monday, August 25, 2014 4:32 AM
Subject: Re: Should scan check
Keep updating. Thanks very much!
On Thu, Aug 21, 2014 at 9:28 AM, iain wright iainw...@gmail.com wrote:
As an admin constantly referring to these docs, Thank you!
--
Iain Wright
Sometimes our users want to upgrade their servers or move to a new datacenter; then we have to migrate the data from HBase. Currently we enable replication from the old cluster to the new cluster, and run CopyTable to move the older data. It's a little inefficient; it takes more than one day
such that
they sort in the same order as the HBase key. Then read through those
results and the HBase scan at the same time, advancing both sides together.
That way you need only a single scan on both sides (per table).
-- Lars
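The single-pass merge Lars describes can be approximated with sorted key dumps and `comm`, which walks two sorted streams once and reports lines unique to each side. A sketch, with two tiny files standing in for the MySQL export and the HBase scan output (both must be sorted in byte order, e.g. with `LC_ALL=C sort`, to match HBase rowkey ordering):

```shell
# Sketch of the merge idea above: export the keys from each side already
# sorted in byte (rowkey) order, then walk both streams once with `comm`.
# The two files below stand in for the MySQL dump and the HBase scan.
printf 'row1\nrow2\nrow4\n' > /tmp/mysql_keys.txt
printf 'row1\nrow3\nrow4\n' > /tmp/hbase_keys.txt
comm -23 /tmp/mysql_keys.txt /tmp/hbase_keys.txt   # only in MySQL: row2
comm -13 /tmp/mysql_keys.txt /tmp/hbase_keys.txt   # only in HBase: row3
```

Compared with running point lookups for every MySQL row, this needs only one sequential pass over each side per table, which is what makes it practical at scale.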
From: tobe tobeg3oo...@gmail.com
@Ravi Do you mean using a key + timestamp as rowkey in HBase shell?
If so, you can `import java.text.SimpleDateFormat` to get the timestamp.
More detail on http://hbase.apache.org/book/shell_tricks.html.
On Wed, Aug 13, 2014 at 11:50 PM, Ted Yu yuzhih...@gmail.com wrote:
rowkey gets involved
://www.slideshare.net/wyaddow/data-verification-in-qa-department-final
Pretty old, but it gives the basic ideas. Nothing has changed since then.
2014-08-12 7:55 GMT+04:00 tobe tobeg3oo...@gmail.com:
Most of our users migrated their data from MySQL to HBase. Before they totally trust HBase, they use MySQL and HBase
it.
On Tue, Aug 12, 2014 at 8:45 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Tobe,
Thing is, your data in HBase might be organized very differently than in MySQL. Have you denormalized some of it? Have you put some Avro containers into an HBase cell? Have you done any cleanup
Hi Colin,
Does your table contain some really large rows?
There were some errors when I copied a table with rows which have 400K columns. I have not checked the content, but I was shocked when you said you were missing data with CopyTable.
On Wed, Aug 13, 2014 at 9:00 AM, Colin Kincaid Williams
Most of our users migrated their data from MySQL to HBase. Before they totally trust HBase, they use MySQL and HBase at the same time. Sometimes the data is inconsistent because they use it incorrectly, or maybe there are bugs in HBase. Anyway, we have to make sure the data from MySQL and HBase is
I can't reproduce this problem when I run CopyTable. You could just use -du to see the sizes of all files.
On Fri, Aug 8, 2014 at 8:50 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi Colin,
Just to make sure.
Is table A from the source cluster and not compressed, and table B in the