016 7:57 PM
To: user@hbase.apache.org
Subject: Re: what causes hbase client to open a large number of connections?
They are 30-40 seconds apart. Can you tell us how you came up with 14K
connections?
-Vlad
On Fri, Sep 9, 2016 at 4:52 PM, Frank Luo <j...@merkleinc.com> wrote:
> I have observed a ver
le the one on the remote connection barely has anything, hence it is
not able to get a value of "baseZNode".
So based on this theory, CopyTable will never work if the remote is a secured
cluster. Is that a correct assessment? Has anyone had luck getting it to work?
-Original Message-
@hbase.apache.org
Subject: Re: CopyTable fails on copying between two secured clusters
Check the peer address you specified in the command line. It does not seem to
match your remote cluster ZK parent node.
Jerry
On Thursday, September 8, 2016, Frank Luo <j...@merkleinc.com> wrote:
> I do
zookeeper.security.auth_to_local.
--
Cloudera, Inc.
On Thu, Sep 8, 2016 at 10:50 AM, Frank Luo <j...@merkleinc.com> wrote:
> Thanks Esteban for replying.
>
> The Kerberos realm is shared between the two clusters.
>
> I searched the ZooKeeper config and couldn't find the rule,
you have ZK to use the proper rules
via the -Dzookeeper.security.auth_to_local flag? If you could share additional
logs, that would be helpful for us.
Thanks!
esteban.
--
Cloudera, Inc.
On Thu, Sep 8, 2016 at 10:32 AM, Frank Luo <j...@merkleinc.com> wrote:
> I couldn’t mana
I couldn’t manage to get the CopyTable to work between two secured clusters and
hope someone can shed some light.
I have a table created on both clusters, and I am running the CopyTable command
from the source cluster.
The parameters are the following:
"--peer.adr=zookeeper1,
It will work, but it is a pretty awkward way to create more mappers.
From: Billy Watson [mailto:williamrwat...@gmail.com]
Sent: Wednesday, July 13, 2016 3:57 PM
To: Frank Luo <j...@merkleinc.com>
Cc: user@hbase.apache.org
Subject: Re: Re:is possible to create multiple TableSplit per
It makes a number of web-service calls.
From: Billy Watson [mailto:williamrwat...@gmail.com]
Sent: Wednesday, July 13, 2016 3:27 PM
To: user@hbase.apache.org
Cc: Frank Luo <j...@merkleinc.com>
Subject: Re: Re:is possible to create multiple TableSplit per region?
What do you mean by "
to that route.
From: 陆巍 [mailto:luwei...@163.com]
Sent: Wednesday, July 13, 2016 11:24 AM
To: user@hbase.apache.org; Frank Luo <j...@merkleinc.com>
Subject: Re:is possible to create multiple TableSplit per region?
here is an archived mail:
http://mail-archives.apache.org/mod_mbox/hbas
We have mapper-only jobs operating on the result of a Scan. Because of heavy work
downstream, the mappers run fairly slowly. So I am wondering if there is a way
to create multiple TableSplits on one region, so that multiple mappers can be
created to work on different pieces of data in the region.
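One approach for this is to subclass TableInputFormat and divide each region's [startKey, endKey) range into several sub-splits. Below is a standalone sketch (plain Java, no HBase dependency; the class and method names are illustrative, not HBase API) of the key-range midpoint computation such a custom input format could use:

```java
import java.math.BigInteger;
import java.util.Arrays;

public class SplitRegionRange {
    // Compute the midpoint between two row keys, treating each key as an
    // unsigned big-endian integer right-padded with zeros to a common width.
    // Right-padding preserves lexicographic (HBase row-key) order.
    static byte[] midpoint(byte[] start, byte[] end) {
        int width = Math.max(start.length, end.length);
        BigInteger lo = new BigInteger(1, Arrays.copyOf(start, width));
        BigInteger hi = new BigInteger(1, Arrays.copyOf(end, width));
        byte[] mid = lo.add(hi).shiftRight(1).toByteArray();
        // BigInteger may emit a leading sign byte; trim to width, then
        // left-pad back so the result has the common width.
        if (mid.length > width) {
            mid = Arrays.copyOfRange(mid, mid.length - width, mid.length);
        }
        byte[] out = new byte[width];
        System.arraycopy(mid, 0, out, width - mid.length, mid.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] mid = midpoint(new byte[]{0x10}, new byte[]{0x30});
        System.out.println(String.format("%02x", mid[0])); // prints "20"
    }
}
```

A custom getSplits() would then emit two TableSplit objects per region, [start, mid) and [mid, end), doubling the number of mappers.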
I
What if you manually trigger a major compaction on that particular region? Does it
run, and are the delete markers removed?
-Original Message-
From: Tianying Chang [mailto:tych...@gmail.com]
Sent: Friday, May 27, 2016 4:33 PM
To: user@hbase.apache.org
Subject: Major compaction cannot remove
. Traffic does fall
some at night but there is no real downtime.
It is not user-facing load though, so I guess we could turn off traffic for a
while as data queues up in Kafka. But not for too long, as then we're playing
catch-up.
Saad
On Friday, April 29, 2016, Frank Luo <j...@merkleinc.com>
:
http://pastebin.com/NfUjva9R
FYI
On Fri, Apr 29, 2016 at 3:29 PM, Frank Luo <j...@merkleinc.com> wrote:
> Saad,
>
> Will all your tables/regions be used 24/7, or, at any time, will just a part
> of the regions be used while others run idle?
>
> If the latter, I developed a tool
Saad,
Will all your tables/regions be used 24/7, or, at any time, will just a part of
the regions be used while others run idle?
If the latter: I developed a tool to launch major compactions in a "smart" way,
because I am facing a similar issue. https://github.com/jinyeluo/smarthbasecompactor.
It looks at
I wrote a small program to do MC in a "smart" way here:
https://github.com/jinyeluo/smarthbasecompactor/
Instead of blindly running MC at the table level, the program finds the non-hot
region with the most store files on each region server, and runs MC on
it. Once done, it finds the next
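The selection idea can be sketched without any HBase dependency. This is a simplified illustration only, not the actual smarthbasecompactor code; the method and region names are made up:

```java
import java.util.Map;
import java.util.Set;

public class PickCompactionTarget {
    // For one region server: given store-file counts per region and the set
    // of currently "hot" regions (actively served or already compacting),
    // pick the non-hot region with the most store files as the next
    // major-compaction target. Returns null when every region is hot.
    static String pickTarget(Map<String, Integer> storeFilesPerRegion,
                             Set<String> hotRegions) {
        String best = null;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : storeFilesPerRegion.entrySet()) {
            if (hotRegions.contains(e.getKey())) continue; // skip hot regions
            if (e.getValue() > bestCount) {
                bestCount = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new java.util.HashMap<>();
        counts.put("regionA", 12);
        counts.put("regionB", 7);
        counts.put("regionC", 9);
        Set<String> hot = java.util.Collections.singleton("regionA");
        System.out.println(pickTarget(counts, hot)); // prints "regionC"
    }
}
```

The real tool would feed this from per-server metrics and then issue the compaction through the HBase admin API, moving on to the next candidate once each compaction finishes.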
3. Your heap size is also too big. Maybe you are also running into GC issues. Have you
checked your GC logs?
4. IMO, writes getting blocked at 9 store files might be too low for a big Region
Server, so you can also consider increasing that.
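For point 4, the threshold at which writes are blocked is controlled by the hbase.hstore.blockingStoreFiles property. A hedged hbase-site.xml fragment follows; the value 20 is purely illustrative, not a recommendation for every workload:

```xml
<property>
  <!-- Block client writes to a store once it accumulates this many
       store files; raise it if writes stall under heavy load. -->
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>20</value>
</property>
```

Raising it trades write availability for more outstanding compaction work, so watch compaction queue metrics after changing it.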
On Fri, Mar 18, 2016 at 10:22 AM, Frank Luo <j...@merkl
,
It might be doable.
What HBase version are you running?
JMS
2016-03-18 12:25 GMT-04:00 Frank Luo <j...@merkleinc.com>:
> No one has experience disabling tables?
>
> -Original Message-
> From: Frank Luo [mailto:j...@merkleinc.com]
> Sent: Thursday, March 17, 20
doing the compactions myself, so that is not an issue.
Another related question: if a region is enabled but sees no active reads/writes, how
many resources does it take on the region server?
Thanks!
Frank Luo
Merkle was named a leader in Customer Insights Services Providers by Forrester
Research
, Frank Luo <j...@merkleinc.com> wrote:
> There are two reasons I am hesitating to go that route.
>
> One is that most of the tables are fairly small. Going to 10GB will force
> tables to shrink onto a few nodes rather than being evenly distributed around the
> cluster, hence discouraging parallelism.
No one has experience disabling tables?
-Original Message-
From: Frank Luo [mailto:j...@merkleinc.com]
Sent: Thursday, March 17, 2016 4:51 PM
To: user@hbase.apache.org
Subject: is it a good idea to disable tables not currently hot?
We have a multi-tenant environment, and each client
2016-03-18 12:32 GMT-04:00 Frank Luo <j...@merkleinc.com>:
> 0.98 on hdp 2.2 currently.
>
> Soon will be on hdp2.3.4, which has HBase 1.1.2.
>
> -Original Message-
> From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org]
> Sent: Friday, March 18, 2016 11:29
2016 at 8:49 AM, Frank Luo <j...@merkleinc.com> wrote:
> Akmal,
>
> We have been suffering from this issue for two years now without a good
> solution. From what I learned, it is not really a good idea to do
> heavy online hbase puts. The first thing you will encounter is
> perfo
compact/split by yourself. Meaning: when the # of files per region reaches a
certain number, stop writing, perform compactions and splits on the large
regions, then resume writing.
I hope it helps.
Frank Luo
From: Akmal Abbasov [mailto:akmal.abba...@icloud.com]
Sent: Tuesday, March 08, 2016 10:29 AM
OK, I understand that due to https://issues.apache.org/jira/browse/HBASE-13788,
t.get "rowId", {COLUMN=>'K:Rec:3A1010343347'}
because there is a colon in the qualifier, “Rec:3A1010343347”. It seems
that I cannot even do “Rec\3A1010343347”.
However, is there a workaround
HBASE-13825
Cheers
On Wed, Feb 3, 2016 at 10:41 AM, Frank Luo <j...@merkleinc.com> wrote:
> I want to capture the result and use excel to do some analysis.
>
> I can use filters if no other choices.
>
> Does the problem exist in the newer version? I have seen a number of
> r
the columns
to be returned.
Not sure what you would do with 3MM columns.
On Wed, Feb 3, 2016 at 10:23 AM, Frank Luo <j...@merkleinc.com> wrote:
> I am trying to “get” a very flat row, meaning one row with 3MM columns,
> from “hbase shell”, and it fails with the message:
>
>
>ERROR: Pro
I am trying to “get” a very flat row, meaning one row with 3MM columns, from
“hbase shell”, and it fails with the message:
ERROR: Protocol message was too large. May be malicious. Use
CodedInputStream.setSizeLimit() to increase the size limit.
Does anyone know how to resolve the issue? I am on
If the table is not pre-split properly and the timeout is not increased, then the region
server will crash when compacting.
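For the pre-splitting part, split keys that evenly divide the key space can be computed up front. A minimal standalone sketch, similar in spirit to HBase's RegionSplitter uniform-split strategy but not its actual code (the class name and single-byte prefix assumption are illustrative):

```java
public class PreSplitKeys {
    // Evenly divide the unsigned single-byte key-prefix space into
    // numRegions regions, returning the numRegions-1 boundary keys.
    // Assumes 2 <= numRegions <= 256.
    static byte[][] splitKeys(int numRegions) {
        byte[][] keys = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            keys[i - 1] = new byte[]{(byte) (i * 256 / numRegions)};
        }
        return keys;
    }

    public static void main(String[] args) {
        for (byte[] k : splitKeys(4)) {
            System.out.printf("%02x%n", k[0] & 0xFF); // prints 40, 80, c0
        }
    }
}
```

The resulting byte[][] could then be passed to the table-creation call that accepts explicit split keys, so regions exist before the write load arrives; this only balances well if row keys are themselves uniform over that prefix space.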
-Original Message-
From: Kumiko Yada [mailto:kumiko.y...@ds-iq.com]
Sent: Tuesday, December 22, 2015 1:01 PM
To: user@hbase.apache.org
Subject: Put performance test
Hello,
I
I am in a very similar situation.
I guess you can try one of these options.
Option one: avoid online inserts by preparing data offline. Do something like
http://hbase.apache.org/0.94/book/ops_mgt.html#importtsv
Option two: if the first option doesn’t work for you, it will be better to
reduce
BinaryComparator c = new BinaryComparator(Bytes.toBytes(200));
System.out.println(c.compareTo(Bytes.toBytes(201))); // returns -1
On Thu, Jul 18, 2013 at 4:28 PM, Frank Luo j...@merkleinc.com wrote:
That requires creating my own ByteArrayComparable class and deploying it to all
servers
I don't think it is possible, but would like to confirm with smart folks out
there.
Suppose I have a cell storing an integer, but in string representation. For
example, here is how I put a value of 200:
put.add(family, qualifier, Bytes.toBytes(200));
Now, I want to scan with a filter
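The follow-up snippet in this thread shows BinaryComparator returning -1 for 200 vs 201. A standalone sketch (plain Java, no HBase dependency) of the unsigned lexicographic comparison BinaryComparator performs under the hood illustrates why fixed-width binary-encoded ints compare in numeric order while ASCII string encodings do not:

```java
import java.nio.ByteBuffer;

public class ByteOrderDemo {
    // Unsigned lexicographic byte-array comparison, i.e. the ordering HBase
    // uses for row keys and BinaryComparator (a sketch, not the HBase code).
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Fixed-width big-endian encoding, like Bytes.toBytes(int).
    static byte[] intBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    public static void main(String[] args) {
        // Fixed-width binary keeps numeric order (for non-negative ints):
        System.out.println(compare(intBytes(200), intBytes(201)) < 0);       // true
        // ASCII strings do not: "1000" sorts before "200" byte-wise,
        // even though 1000 > 200 numerically.
        System.out.println(compare("1000".getBytes(), "200".getBytes()) < 0); // true
    }
}
```

So a filter comparing raw bytes only behaves numerically if the cells were written with a fixed-width encoding; values stored as decimal strings of varying length cannot be range-filtered correctly this way.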
ByteArrayComparable comparator) {
Cheers
On Thu, Jul 18, 2013 at 1:20 PM, Frank Luo j...@merkleinc.com wrote:
I don't think it is possible, but would like to confirm with smart
folks out there.
Suppose I have a cell storing an integer, but in string representation.
For example, here is how I put