ng for HDFS TDE, though. Pick the most
recent release. Slightly older versions (like 2.6.0) had fatal bugs if used in
conjunction with HBase.
On Thu, Jun 2, 2016 at 5:52 PM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Hi, all,
>
> We are trying to deploy the 'transparen
Hi, all,
We are trying to deploy the 'transparent encryption' feature of HBase,
described in the HBase reference guide:
https://hbase.apache.org/book.html#hbase.encryption.server , in our product.
We had heard from various sources that it was not production ready.
During our tests, we do find
example, we can set up separate storage, like an NFS share, which can
> be mounted to each node of the HBase cluster, and we put the key there; is
> it an acceptable plan
No, this won't provide any additional protection over using local keystore
files.
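For reference, the local-keystore approach that the book's hbase.encryption.server section describes looks roughly like this in hbase-site.xml; this is only a sketch, and the keystore path and password here are placeholders:

```xml
<!-- Sketch of server-side encryption key provider config.
     The jceks path and password are placeholders, not real values. -->
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///etc/hbase/conf/hbase.jks?password=changeme</value>
</property>
```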
> On Jun 6, 2016, at 9:07 AM, Liu, Ming (M
ke advantage of it.
(HBase TDE doesn't support the Hadoop KMS.)
Advice offered for what it's worth (smile)
On Thu, Jun 2, 2016 at 9:16 PM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Thank you Andrew!
>
> What we heard must have been a rumor :-) We are now confident about using this feature.
>
See also http://hbase.apache.org/book.html#arch.bulk.load.complete
FYI
On Sat, Feb 6, 2016 at 12:29 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Hello,
>
> I am trying to find a good way to import a large amount of data into
> HBase from HDFS. I have a csv file about 135G originall
Hello,
I am trying to find a good way to import a large amount of data into HBase from
HDFS. I have a csv file, about 135G originally. I put it into HDFS, then used
HBase's importtsv utility to do a bulk load; for that 135G of original data it
took 40 mins. I have 10 nodes, each has 128G, and all
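For reference, a typical importtsv bulk-load invocation looks something like the following; the table name, column mapping, and HDFS paths here are made up for illustration:

```shell
# Generate HFiles instead of writing through the normal write path.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 \
  -Dimporttsv.bulk.output=hdfs:///tmp/bulkout \
  mytable hdfs:///data/mydata.csv

# Then hand the generated HFiles to the region servers.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  hdfs:///tmp/bulkout mytable
```

Without -Dimporttsv.bulk.output, importtsv writes through the regular Put path instead of producing HFiles, which is much slower for loads of this size.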
e must already exist in HBase\n" +
Thanks Frank, this is something I am looking for. I would like to give it a
try.
Thanks,
Ming
-Original Message-
From: Frank Luo [mailto:j...@merkleinc.com]
Sent: April 5, 2016 1:38
To: user@hbase.apache.org
Cc: Sumit Nigam
Subject: RE: Major compaction
I wrote a small program to
example
- prepare() and process().
On Tue, Aug 9, 2016 at 5:04 PM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Thanks Ted for pointing this out. Can this TableLockManager be used
> from a client? I am fine with migrating if this API changes in each release.
> I am writing a cli
AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Hi, all,
>
> I want to implement a simple 'table lock' in HBase. My current idea is,
> for each table, I choose a special rowkey which is NEVER used in real
> data, and then use this row as a 'table level l
@InterfaceAudience.Private
Meaning the API may change across releases.
Cheers
On Tue, Aug 9, 2016 at 9:58 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> Thanks Ted for the questions.
>
> 1. what if the process of owner of the lock dies?
> I didn't think of this... This is really an issue. I
Hi, all,
I want to implement a simple 'table lock' in HBase. My current idea is, for each
table, I choose a special rowkey which is NEVER used in real data, and then use
this row as a 'table level lock'.
The getLock() will be:
getLock(table, cf, cl)
{
    rowkey = 0; // a rowkey that is never used in real data
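The atomic "create the lock row only if it does not already exist" step that getLock() needs is what HBase's checkAndPut (with a null expected value) provides. Its semantics can be sketched in plain Java, with a map standing in for the lock table; the class and method names here are made up for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java sketch of the atomic check-and-set that getLock() needs.
// In HBase the same effect comes from checkAndPut on the special lock
// row: the put only succeeds if the lock cell does not exist yet.
public class TableLockSketch {
    // One entry per table: table name -> lock owner.
    private static final ConcurrentHashMap<String, String> locks =
            new ConcurrentHashMap<>();

    // Returns true only if this caller atomically created the lock entry.
    public static boolean getLock(String table, String owner) {
        return locks.putIfAbsent(table, owner) == null;
    }

    // Removes the lock only if it is still held by this owner.
    public static boolean releaseLock(String table, String owner) {
        return locks.remove(table, owner);
    }
}
```

The key property is that acquisition and the existence check happen in one atomic step, so two clients can never both believe they hold the lock; a plain get-then-put sequence would race.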
For hbase.hstore.compactionThreshold:
Larger values delay compaction, but when compaction does occur, it takes
longer to complete.
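For reference, that knob is set in hbase-site.xml; a sketch, where the value 6 is only an example (the default is 3):

```xml
<!-- Example only: raising the minor-compaction trigger above its
     default of 3 store files delays compactions but makes each one
     bigger when it finally runs. -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>6</value>
</property>
```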
On Sat, Aug 26, 2017 at 6:48 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:
> hi, all,
>
> We have a system with 17 nodes, with a big table about
hi, all,
We have a system with 17 nodes, with a big table about 28T in size. We use the
native HBase bulk loader (importtsv) to load data, and it generated a lot of
HFiles; each RegionServer has about 800 HFiles. We turned off major
compaction, but minor compactions are still running due to so
> By "controlling the number of hfiles" I mean reducing the amount of data for
> each bulk load.
>
> If the regions for this table are not evenly distributed, some region
> server(s) may receive more data than the other servers.
>
> Cheers
>
> On Sat, Aug 26, 2017 at 7
Hi, all,
We have an HBase table which has 1 billion rows, and we want to randomly get 1M
rows from that table. We are now trying the RandomRowFilter, but it is still
very slow. If I understand it correctly, on the server side, RandomRowFilter
still needs to read all 1 billion rows but returns randomly 1%
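That understanding matches the filter's contract: the server still visits every row and each one passes independently with the configured chance, so the scan cost stays proportional to the total row count no matter how few rows come back. A plain-Java sketch of that behavior (the names here are illustrative, not the HBase API):

```java
import java.util.Random;

// Sketch of RandomRowFilter's server-side behavior: every row is read,
// and each row is included independently with probability `chance`, so
// the work is O(totalRows) even for a tiny sample.
public class RandomSampleSketch {
    public static int sample(int totalRows, float chance, long seed) {
        Random rnd = new Random(seed);
        int selected = 0;
        for (int row = 0; row < totalRows; row++) {
            // The server reads the row regardless of whether it is returned.
            if (rnd.nextFloat() < chance) {
                selected++; // row passes the filter, sent to the client
            }
        }
        return selected;
    }
}
```

With chance = 0.001f over 1,000,000 rows the expected sample is about 1,000, yet all 1,000,000 rows are still visited, which is why the scan feels slow regardless of how small the sample is.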
Thank you Yu, I am using a commercial distro of Hadoop/HBase and am not able to
change the source code, so I have to use the HBase/HDFS user APIs to achieve the
goal. Since there will be this feature, I will temporarily guess the Path and
invoke the HDFS API directly. When HBase's new release is integrated with my