Because of some compatibility issues, we decided that this will be done
in 2.0 only. Yeah, as Andy said, it would be great to share the 1.x
backported patches. Is it a mega patch at your end, or issue-by-issue
patches? The latter would be best. Please share the patches somewhere,
along with a list of the issues they cover.
Can you tell us the version of HBase you are using and the new version
which you plan to upgrade to?
A bit more detail on your coprocessor would also help narrow the scope of
search.
Cheers
On Fri, Nov 18, 2016 at 4:28 PM, Albert Shau
wrote:
Hi all,
I'm using coprocessors with my tables and am wondering how I would perform an
HBase rolling upgrade, since it seems like there are no compatibility
guarantees for the coprocessor APIs. I'm guessing I would have to disable the
table, alter the table to use a coprocessor compatible with
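[Editor's note: for what it's worth, the disable/alter/enable sequence being described here is usually done from the HBase shell. A sketch, in which the table name, jar path, and observer class are all made up for illustration:

```shell
# Sketch only: 'mytable', the jar path, and com.example.MyObserver are hypothetical.
disable 'mytable'
# Unset the old coprocessor attribute (its NAME appears in describe output) ...
alter 'mytable', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
# ... then attach the rebuilt, version-compatible jar.
alter 'mytable', 'coprocessor' => 'hdfs:///user/hbase/my-observer-new.jar|com.example.MyObserver|1001|'
enable 'mytable'
```

This requires a running cluster, so treat it as a template rather than something to paste verbatim.]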
Thanks Ted. Yes, that worked great.
On Fri, Nov 18, 2016 at 10:51 AM, Ted Yu wrote:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#getTableDescriptor-org.apache.hadoop.hbase.TableName-
On Fri, Nov 18, 2016 at 10:44 AM, Ganesh Viswanathan
wrote:
+1
Original message
From: 张铎 (Duo Zhang)
To: d...@hbase.apache.org;
user@hbase.apache.org
Sent: Friday, November 18, 2016, 17:19
Subject: Re: Use experience and performance data of offheap from Alibaba online
cluster
Hello,
Is there a Java API for HBase (using Admin or other libraries) to describe
a table and get its columnFamilies? I see the *describe 'tablename'* shell
command in HBase. But I don't see one in the Admin API docs:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html
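[Editor's note: to spell out Ted's pointer below, the shell's describe maps to Admin#getTableDescriptor plus HTableDescriptor#getColumnFamilies. A minimal sketch against the 1.x client API; "mytable" is a placeholder, and this needs a live cluster plus hbase-client on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DescribeTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Equivalent of the shell's: describe 'mytable'
            HTableDescriptor desc = admin.getTableDescriptor(TableName.valueOf("mytable"));
            for (HColumnDescriptor family : desc.getColumnFamilies()) {
                System.out.println(family.getNameAsString());
            }
        }
    }
}
```
]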
On Fri, Nov 18, 2016 at 7:46 AM, Graham Baecher
wrote:
Yes, please, the patches will be useful to the community even if we decide not
to backport into an official 1.x release.
> On Nov 18, 2016, at 12:25 PM, Bryan Beaudreault
> wrote:
Is the backported patch available anywhere? Not seeing it on the referenced
JIRA. If it ends up not getting officially backported to branch-1 because
2.0 is around the corner, some of us who build our own deploys may want to
integrate it into our builds. Thanks! These numbers look great.
On Fri, Nov 18,
Hi Yu Li
Good to see that the offheap work helped you. The perf
numbers look great. So this is a comparison of on-heap L1 cache vs. off-heap
L2 cache (HBASE-11425 enabled). So for 2.0 we should make the off-heap L2
cache ON by default, I believe. Will raise a JIRA for that so we can
Thanks Jeremy.
I ran the regressed workload this morning with FIFO enabled, and though it
was faster, it was only a flat ~1ms faster for both reads and
writes. This doesn't compensate for the write performance dropoff, so even
with FIFO our 5.9 writes are slower than our 5.8 writes.
On Fri, Nov 18,
Sorry guys, let me retry the inline images:
Performance w/o offheap:
Performance w/ offheap:
Peak Get QPS of one single RS during Singles' Day (11/11):
And attaching the files in case inline is still not working:
Performance_without_offheap.png
Hi,
I have a table in HBase which stores multiple versions of data in different rows.
The key is something like . The timestamp
will differ for multiple versions of the same document.
Orgs are skewed: one org may have 1 billion docs while some orgs may have
just 100K docs.
So I decided to do
No worries.
This is the spark version we are using: 1.5.0-cdh5.5.4
I have to use HBaseContext; it is the first parameter of the method I am
using to generate the HFiles (HBaseRDDFunctions.hbaseBulkLoadThinRows).
On Fri, 18 Nov 2016 at 16:06 Nkechi Achara wrote:
>
Use FIFO. Much better in our testing
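[Editor's note: "FIFO" here presumably refers to the FIFO RPC call queue rather than FIFO compaction, since the follow-up reports it affected both reads and writes. If so, it is a server-side hbase-site.xml setting; verify the property and its values against your version before relying on this:

```xml
<!-- Assumption: this is the setting being recommended; check your release docs. -->
<property>
  <name>hbase.ipc.server.callqueue.type</name>
  <value>fifo</value>
</property>
```
]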
On Fri, Nov 18, 2016 at 7:46 AM Graham Baecher wrote:
Sorry, on my way to a flight.
Read permission is required for a keytab to work properly, so that looks
fine in your case.
I do not have my PC with me, but have you tried to use HBase without using
HBaseContext?
Also, which version of Spark are you using?
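[Editor's note: on the permissions question, a common convention is to keep a keytab readable only by its owner. A sketch, where the filename is just a stand-in:

```shell
touch hbase.keytab          # stand-in for a real keytab file
chmod 400 hbase.keytab      # owner read-only is the usual recommendation
stat -c '%a' hbase.keytab   # prints: 400
```
]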
On 18 Nov 2016 16:01, "Abel Fernández"
Hi all,
We're looking to update the HBase version we're running on our servers, but
noticed while comparing performance test results against the previous version
that Puts now seem to be slower.
Specifically, comparing HBase 1.2.0-cdh5.9.0 to 1.2.0-cdh5.8.0 using YCSB,
it looks like mean read latency
Yep, the keytab is also on the driver in the same location.
-rw-r--r-- 1 hbase root 370 Nov 16 17:13 hbase.keytab
Do you know what permissions the keytab should have?
On Fri, 18 Nov 2016 at 14:19 Nkechi Achara wrote:
Sorry just realised you had the submit command in the attached docs.
Can I ask if the keytab is also on the driver in the same location?
The spark option normally requires the keytab to be on the driver so it can
pick it up and pass it to yarn etc to perform the kerberos operations.
On 18 Nov
Hi Nkechi,
Thanks for your quick response.
I am currently specifying the principal and the keytab in the spark-submit;
the keytab is in the same location on every node manager.
SPARK_CONF_DIR=conf-hbase spark-submit --master yarn-cluster \
--executor-memory 6G \
--num-executors 10 \
Can you use the principal and keytab options in Spark submit? These should
circumvent this issue.
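[Editor's note: concretely, that suggestion looks something like the following. The paths, class, and principal are made up, but --principal and --keytab are the standard spark-submit flags for this on YARN:

```shell
# Sketch: let Spark obtain and renew the kerberos tickets itself.
spark-submit --master yarn-cluster \
  --principal hbase/some-host@EXAMPLE.COM \
  --keytab /etc/security/keytabs/hbase.keytab \
  --class com.example.BulkLoadJob \
  bulkload-assembly.jar
```
]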
On 18 Nov 2016 1:01 p.m., "Abel Fernández" wrote:
Hi,
It will all depend on the number of columns, the performance of your
servers, whether or not you are doing the scan in parallel across all the
regions at the same time, the processing you will do, etc. So it is not
possible to give you any estimate. You will have to test it to figure it out.
JMS
Hello,
We are having problems with the delegation of the token in a secure
cluster: Delegation Token can be issued only with kerberos or web
authentication
We have a Spark process which is generating the HFiles to be loaded into
HBase. To generate these HFiles (we are using a back-ported
Yu:
With positive results, more HBase users will be asking for a backport of
the offheap read path patches.
Do you think you or your coworker has the bandwidth to publish a backport
for branch-1?
Thanks
> On Nov 18, 2016, at 12:11 AM, Yu Li wrote:
I cannot see the images either...
Du, Jingcheng wrote on Fri, Nov 18, 2016 at 16:57:
Nope, same here!
Loïc CHANEL
System Big Data engineer
MS - WASABI - Worldline (Villeurbanne, France)
2016-11-18 9:54 GMT+01:00 Du, Jingcheng :
Thanks Yu for sharing, great achievements.
It seems the images cannot be displayed? Maybe it's just me?
Regards,
Jingcheng
From: Yu Li [mailto:car...@gmail.com]
Sent: Friday, November 18, 2016 4:11 PM
To: user@hbase.apache.org; d...@hbase.apache.org
Subject: Use experience and performance data of
Dear all,
We have backported read path offheap (HBASE-11425) to our customized
hbase-1.1.2 (thanks @Anoop for the help/support) and run it online for more
than a month, and would like to share our experience, for what it's worth
(smile).
Generally speaking, we gained a better and more stable