how to get the last row inserted into a hbase table

2018-07-11 Thread Ming
purpose is to quickly check the last modified timestamp for a given HBase table. Thanks, Ming

RE: how to get the last row inserted into a hbase table

2018-07-11 Thread Ming
the biggest timestamp without a full scan, or at least by scanning the memstore, since the last row must be in the memstore. But there is no such API a client can invoke. Let me think about it again. thanks, Ming -Original Message- From: Josh Elser Sent: Thursday, July 12, 2018 12:55 AM To: user@hbase.apache.org Subject

RE: can we set a table to use a HDFS specific HSM Storage policy?

2018-04-17 Thread Ming
family. Each CF can have a different policy. Pls see the setStoragePolicy(String) API in HColumnDescriptor. -Anoop- On Tue, Apr 17, 2018 at 7:16 AM, Ming <ovis_p...@sina.com> wrote: > Hi, all, > > > > HDFS support HSM, one can set a file or dir storage policy to use diffe

RE: can we set a table to use a HDFS specific HSM Storage policy?

2018-04-17 Thread Ming
Hi, Anoop, In which release is this API supported? From the JIRA https://issues.apache.org/jira/browse/HBASE-14061, it seems this is only available in HBase 2.0? Thanks, Ming -Original Message- From: Anoop John <anoop.hb...@gmail.com> Sent: Tuesday, April 17, 2018 1:42 PM To

How to get the HDFS path for a given HBase table?

2018-04-20 Thread Ming
to get that Path string. Any help will be very appreciated! Thanks, Ming

RE: How to get the HDFS path for a given HBase table?

2018-04-20 Thread Ming
are using HBase 1.2.0, so I want to directly use the HDFS API to set the storage policy for a given HBase table, but I have to know its path. Ming -Original Message- From: Sean Busbey <bus...@apache.org> Sent: Friday, April 20, 2018 8:49 PM To: user@hbase.apache.org Subject: Re: How
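
The layout this thread is asking about is fixed in the 1.x line: table data lives under `<hbase.rootdir>/data/<namespace>/<table>`. A minimal sketch of building that path without a cluster follows; the root dir and table name are made-up examples, and real code should prefer HBase's own `FSUtils.getTableDir()` so layout changes are handled for you.

```java
// Sketch: build the HDFS directory for a table under the HBase 1.x layout
// <hbase.rootdir>/data/<namespace>/<table>. The names below are examples;
// in real code prefer org.apache.hadoop.hbase.util.FSUtils.getTableDir().
public class TablePath {
    static String tableDir(String rootDir, String namespace, String table) {
        return rootDir + "/data/" + namespace + "/" + table;
    }

    public static void main(String[] args) {
        // A table created without an explicit namespace lands in "default".
        System.out.println(tableDir("/hbase", "default", "usertable"));
        // prints: /hbase/data/default/usertable
        // The storage policy could then be applied with the HDFS CLI, e.g.:
        //   hdfs storagepolicies -setStoragePolicy \
        //       -path /hbase/data/default/usertable -policy ONE_SSD
    }
}
```
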

can we set a table to use a HDFS specific HSM Storage policy?

2018-04-16 Thread Ming
manual, but not find related topics, so asked help here. Thanks, Ming

RE: can we set a table to use a HDFS specific HSM Storage policy?

2018-04-17 Thread Ming
Thank you Anoop for the answer, this is very helpful. Ming -Original Message- From: Anoop John <anoop.hb...@gmail.com> Sent: Wednesday, April 18, 2018 12:50 AM To: user@hbase.apache.org Subject: Re: can we set a table to use a HDFS specific HSM Storage policy? Oh ya seems ye

RE: hbase 'transparent encryption' feature is production ready or not?

2016-06-02 Thread Liu, Ming (Ming)
Thank you Andrew! What we heard must be a rumor :-) We are now confident to use this feature. HSM is a good option; I am new to it, but will look at it. Thanks, Ming -Original Message- From: Andrew Purtell [mailto:apurt...@apache.org] Sent: June 3, 2016 8:59 AM To: user@hbase.apache.org Cc: Zhang, Yi

hbase 'transparent encryption' feature is production ready or not?

2016-06-02 Thread Liu, Ming (Ming)
about how to manage the key? Thanks, Ming

RE: hbase 'transparent encryption' feature is production ready or not?

2016-06-06 Thread Liu, Ming (Ming)
. The major goal of encryption for me is that when the data is physically lost, one cannot read it without the key. So unless the NFS and the data disk are lost to the same person, it is safe. But I should really start to read about HSM. I very much appreciate your help. Ming -Original Message- From: Andrew Purtell

Re: hbase 'transparent encryption' feature is production ready or not?

2016-06-06 Thread Liu, Ming (Ming)
there, is it an acceptable plan? Thanks, Ming -Original Message- From: Andrew Purtell [mailto:apurt...@apache.org] Sent: June 3, 2016 12:27 PM To: user@hbase.apache.org Cc: Zhang, Yi (Eason) <yi.zh...@esgyn.cn> Subject: Re: RE: hbase 'transparent encryption' feature is production ready or not? > We are now confide

RE: what is a good way to bulkload a large amount of data into an HBase table

2016-02-06 Thread Liu, Ming (Ming)
. For now, 14 mins loading 135G raw data is not bad for me, about 600G/hr on a 10-node cluster. Not very good, but acceptable, and I am counting on the scalability of HBase and MapReduce :-) Thanks Ted for sharing the info. Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: 2016

what is a good way to bulkload a large amount of data into an HBase table

2016-02-06 Thread Liu, Ming (Ming)
some better ideas to do bulkload in HBase? Or is importtsv already the best tool to do bulkload in the HBase world? If I have real big data (say > 50T), this doesn't seem like a practical loading speed, does it? Or is it? In practice, how do people load data into HBase normally? Thanks in advance, Ming

RE: what is a good way to bulkload a large amount of data into an HBase table

2016-02-06 Thread Liu, Ming (Ming)
he/she did it, and what the average loading speed was. As a developer, I don't have any real project experience, just doing my experiments in our lab. It looks too slow to me, but maybe that is a normal loading speed... So I want to hear from experts in this community. Thanks, Ming -Original Message

RE: Major compaction

2016-04-04 Thread Liu, Ming (Ming)
Thanks Frank, this is something I am looking for. I would like to give it a try. Thanks, Ming -Original Message- From: Frank Luo [mailto:j...@merkleinc.com] Sent: April 5, 2016 1:38 AM To: user@hbase.apache.org Cc: Sumit Nigam <sumit_o...@yahoo.com> Subject: RE: Major compaction I wrote a small p

RE: help, try to use HBase's checkAndPut() to implement a distributed lock

2016-08-10 Thread Liu, Ming (Ming)
example - prepare() and process(). On Tue, Aug 9, 2016 at 5:04 PM, Liu, Ming (Ming) <ming@esgyn.cn> wrote: > Thanks Ted for pointing out this. Can this TableLockManager be used > from a client? I am fine to migrate if this API change for each release. > I am writing a cli

RE: help, try to use HBase's checkAndPut() to implement a distributed lock

2016-08-09 Thread Liu, Ming (Ming)
the tablename and invoke getLock(); it checks the row 0 value in an atomic check-and-put operation. So if the 'table lock' is free, anyone should be able to get it, I think. Maybe I have to study ZooKeeper's distributed lock recipes? Thanks, Ming -Original Message- From: Ted Yu

RE: help, try to use HBase's checkAndPut() to implement a distributed lock

2016-08-09 Thread Liu, Ming (Ming)
Thanks Ted for pointing out this. Can this TableLockManager be used from a client? I am fine to migrate if this API change for each release. I am writing a client application, and need to lock a hbase table, if this can be used directly, that will be super great! Thanks, Ming -Original

help, try to use HBase's checkAndPut() to implement a distributed lock

2016-08-09 Thread Liu, Ming (Ming)
implementation, so I fear there are basic problems in this 'design'. So please help me review whether there are any big issues with this idea. Any help will be very appreciated. Thanks a lot, Ming
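
As a cluster-free illustration of the design sketched in this thread (one well-known row per table, acquired with an atomic check-and-put), here is the same compare-and-set semantics modeled on a `ConcurrentHashMap`. This is explicitly a stand-in, not the HBase client API: the real client would call `Table.checkAndPut` with a null expected value to acquire, and a checked delete to release. All names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;

// In-memory model of the checkAndPut-based table lock discussed in the
// thread. ConcurrentHashMap.putIfAbsent stands in for HBase's atomic
// checkAndPut(row, cf, qual, /*expected=*/null, put): both succeed only
// if no value exists yet.
public class TableLockSketch {
    private final ConcurrentHashMap<String, String> lockRows = new ConcurrentHashMap<>();

    /** Try to acquire the lock for a table; true on success. */
    boolean tryLock(String tableName, String owner) {
        return lockRows.putIfAbsent(tableName, owner) == null;
    }

    /** Release only if we still own it (mirrors a checked delete). */
    boolean unlock(String tableName, String owner) {
        return lockRows.remove(tableName, owner);
    }

    public static void main(String[] args) {
        TableLockSketch locks = new TableLockSketch();
        System.out.println(locks.tryLock("mytable", "client-1")); // true
        System.out.println(locks.tryLock("mytable", "client-2")); // false
    }
}
```

As the poster suspects, ZooKeeper's distributed lock recipes handle the case this scheme does not: a crashed holder's ephemeral node disappears with its session, whereas here (and in the HBase-row version) a crashed client leaves the lock held forever unless some expiry is added.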

RE: [Help] minor compaction is continuously consuming disk space until we run out of space?

2017-08-26 Thread Liu, Ming (Ming)
understand why it is using so much extra disk space? Is anything wrong in our system? Thanks, Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Saturday, August 26, 2017 9:54 PM To: user@hbase.apache.org Subject: Re: [Help] minor compact is continuously consuming the disk

[Help] minor compaction is continuously consuming disk space until we run out of space?

2017-08-26 Thread Liu, Ming (Ming)
, but the minor compaction is still triggered. The system is CDH 5.7, HBase is 1.2. Could anyone help to give us some suggestions? We are really stuck. Thanks in advance. Thanks, Ming -Original Message- From: Andrzej [mailto:borucki_andr...@wp.pl] Sent: Friday, August 25, 2017 11:55 PM

RE: [Help] minor compaction is continuously consuming disk space until we run out of space?

2017-08-28 Thread Liu, Ming (Ming)
Thank you Anoop, Is there any rule we can use to calculate the disk space a minor compaction will use for a given table? Thanks, Ming -Original Message- From: Anoop John [mailto:anoop.hb...@gmail.com] Sent: Monday, August 28, 2017 2:22 PM To: user@hbase.apache.org Subject: Re

how to get random rows from a big hbase table faster

2018-04-12 Thread Liu, Ming (Ming)
% for them. But reading 1 billion rows is very slow. Is this true? So is there any better way to randomly get 1% of the rows from a given table? Any idea will be very appreciated. We don't know the distribution of the 1 billion rows in advance. Thanks, Ming
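
One option worth noting here is HBase's server-side `RandomRowFilter`, which keeps each row with a fixed probability (e.g. `new RandomRowFilter(0.01f)` attached to a `Scan`). The per-row Bernoulli decision it makes can be sketched without a cluster; the names below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of per-row Bernoulli sampling -- the same decision HBase's
// server-side RandomRowFilter makes -- shown over an in-memory list so
// it runs without a cluster.
public class SampleSketch {
    static <T> List<T> sampleFraction(List<T> rows, double p, Random rng) {
        List<T> out = new ArrayList<>();
        for (T row : rows) {
            if (rng.nextDouble() < p) {  // keep each row independently with probability p
                out.add(row);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 1000; i++) rows.add(i);
        System.out.println(sampleFraction(rows, 0.01, new Random()).size() + " rows sampled");
    }
}
```

The caveat raised in the thread still holds either way: every row is still scanned server-side; a server-side filter only avoids shipping the other 99% over the network to the client.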

RE: How to get the HDFS path for a given HBase table?

2018-04-21 Thread Liu, Ming (Ming)
used distro, I can change my code to use new HBase API, which is much more elegant. Thanks, Ming -Original Message- From: Yu Li <car...@gmail.com> Sent: Saturday, April 21, 2018 6:15 PM To: Hbase-User <user@hbase.apache.org> Subject: Re: How to get the HDFS path for a given

0.92.0 availability

2011-06-08 Thread Ma, Ming
Hi, Where can I find the targeted release date of 0.92.0? Thanks. Ming

RE: Does Put support 'don't put if row exists'?

2011-06-09 Thread Ma, Ming
It looks like there is an HBase API called checkAndPut. By setting the expected value to null, you can achieve a put only when the row + column family + column qualifier doesn't exist. Nice feature. _ From: Ma, Ming Sent: Wednesday, June 08, 2011 9:54 PM To: user

RE: Number of map jobs per region

2011-08-28 Thread Ma, Ming
. Ming -Original Message- From: Dhaval Makawana [mailto:dhaval.makaw...@gmail.com] Sent: Sunday, August 28, 2011 2:06 AM To: user@hbase.apache.org Subject: Number of map jobs per region Hi, We have 31 regions for a table in our HBase system and hence while scanning the table via

(BUG)Failed to read block error when enable replication

2016-08-02 Thread Ming Yang
When replication was enabled, we found a large number of error logs. Is the cluster configuration incorrect? 2016-08-03 10:46:21,721 DEBUG org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening log for replication dn7%2C60020%2C1470136216957.1470192327030 at 16999670 2016-08-03

(BUG)ShortCircuitLocalReads Failed when enabled replication

2016-08-11 Thread Ming Yang
The cluster has short-circuit local reads enabled (dfs.client.read.shortcircuit = true). When replication was enabled, we found a large number of error logs: 1. shortCircuitLocalReads (fails every time). 2. Trying to read via the datanode on targetAddr (succeeds). How to make shortCircuitLocalReads successfully
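
For reference, short-circuit local reads are switched on in hdfs-site.xml on both the DataNodes and the HBase RegionServers. A minimal sketch follows; the socket path shown is a common choice, not universal, so adjust it for the installation.

```xml
<!-- hdfs-site.xml: enable short-circuit local reads (sketch; the
     socket path is a typical choice, adjust for your installation) -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
```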

Why does hbase need manual split?

2014-08-06 Thread Liu, Ming (HPIT-GADSC)
. Regards, Ming

RE: Why does hbase need manual split?

2014-08-06 Thread Liu, Ming (HPIT-GADSC)
split. Is this true? Can HBase do split in other ways? Thanks, Ming -Original Message- From: john guthrie [mailto:graf...@gmail.com] Sent: Wednesday, August 06, 2014 6:01 PM To: user@hbase.apache.org Subject: Re: Why hbase need manual split? i had a customer with a sequence-based key (yes

RE: Why does hbase need manual split?

2014-08-06 Thread Liu, Ming (HPIT-GADSC)
split in the middle of the key range? Or does some other algorithm exist here? Any help will be very appreciated! Best Regards, Ming -Original Message- From: john guthrie [mailto:graf...@gmail.com] Sent: Wednesday, August 06, 2014 6:35 PM To: user@hbase.apache.org Subject: Re: Why hbase need manual split

when will hbase create the zookeeper znode 'root-region-server'? HBase 0.94

2014-10-16 Thread Liu, Ming (HPIT-GADSC)
help me if you have any idea. Thanks very much in advance. Thanks, Ming

RE: when will hbase create the zookeeper znode 'root-region-server'? HBase 0.94

2014-10-16 Thread Liu, Ming (HPIT-GADSC)
and 0.94.24 as well. Thank you all for the help. Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Thursday, October 16, 2014 10:29 PM To: user@hbase.apache.org Subject: Re: when will hbase create the zookeeper znode 'root-region-server’ is created? Hbase 0.94 Ming

is there a HBase 0.98 hdfs directory structure introduction?

2014-11-02 Thread Liu, Ming (HPIT-GADSC)
to briefly introduce the new directory structure or give me a link? It will be good to know what each directory is for. Thanks, Ming

RE: is there a HBase 0.98 hdfs directory structure introduction?

2014-11-05 Thread Liu, Ming (HPIT-GADSC)
would find your table under the following directory: $rootdir/{namespace}/table If you don't specify a namespace at table creation time, the 'default' namespace would be used. Cheers On Sun, Nov 2, 2014 at 7:16 PM, Liu, Ming (HPIT-GADSC) ming.l...@hp.com wrote: Hi, all, I have a program to calculate

Is it possible that HBase update performance is much better than read in YCSB test?

2014-11-11 Thread Liu, Ming (HPIT-GADSC)
=com.yahoo.ycsb.workloads.CoreWorkload readallfields=true readproportion=0 updateproportion=1 scanproportion=0 insertproportion=0 requestdistribution=zipfian Thanks, Ming

RE: Is it possible that HBase update performance is much better than read in YCSB test?

2014-11-12 Thread Liu, Ming (HPIT-GADSC)
Thank you Andrew, this is an excellent answer, I get it now. I will try your hbase client for a 'fair' test :-) Best Regards, Ming -Original Message- From: Andrew Purtell [mailto:apurt...@apache.org] Sent: Thursday, November 13, 2014 2:08 AM To: user@hbase.apache.org Cc: DeRoo, John

how to explain read/write performance change after modifying the hfile.block.cache.size?

2014-11-20 Thread Liu, Ming (HPIT-GADSC)
operation, or is it possible that the write to the WAL is still buffered somewhere when hbase puts the data into the memstore? Reading the src code may cost me months, so a kind reply will help me a lot... Thanks very much! Best Regards, Ming

RE: how to explain read/write performance change after modifying the hfile.block.cache.size?

2014-11-20 Thread Liu, Ming (HPIT-GADSC)
Thank you Ted, It is a great explanation. You are always very helpful ^_^ I will study the link carefully. Thanks, Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Friday, November 21, 2014 1:32 AM To: user@hbase.apache.org Subject: Re: how to explain read/write

RE: how to explain read/write performance change after modifying the hfile.block.cache.size?

2014-11-20 Thread Liu, Ming (HPIT-GADSC)
: 32K L2 cache: 256K L3 cache: 12288K NUMA node0 CPU(s): 0,2,4,6,8,10 NUMA node1 CPU(s): 1,3,5,7,9,11 Thanks, Ming -Original Message- From: lars hofhansl [mailto:la...@apache.org] Sent: Friday, November 21, 2014 4:31 AM To: user@hbase.apache.org Subject: Re

RE: how to explain read/write performance change after modifying the hfile.block.cache.size?

2014-11-22 Thread Liu, Ming (HPIT-GADSC)
at least make sense even on a shared env. The heap configuration is something I really need to check, thank you. Best Regards, Ming -Original Message- From: Nick Dimiduk [mailto:ndimi...@gmail.com] Sent: Saturday, November 22, 2014 5:57 AM To: user@hbase.apache.org Cc: lars hofhansl Subject

RE: What is the proper way to make an hbase connection? Using HTable(conf, tbl) or createConnection? Zookeeper sessions run out.

2014-11-24 Thread Liu, Ming (HPIT-GADSC)
Thank you Bharath, This is a very helpful reply! I will share the connection between the two threads. Simply put, HTable is not safe for multi-threaded use, is this true? In multi-threaded code, one must use HConnectionManager. Thanks, Ming -Original Message- From: Bharath Vissapragada [mailto:bhara

RE: YCSB load failed because hbase region too busy

2014-11-25 Thread Liu, Ming (HPIT-GADSC)
to know how you solve the issue later. And as Ram and Qiang Tian mentioned, you can only 'alleviate' the issue by increasing the knob, but if you give hbase too much pressure it will not work well sooner or later. Everything has its own limitations :-) Thanks, Ming -Original Message- From

how to tell there is an OOM in the regionserver

2014-12-01 Thread Liu, Ming (HPIT-GADSC)
there is an OOM issue in the HBase region server? Thank you, Ming

RE: how to tell there is an OOM in the regionserver

2014-12-01 Thread Liu, Ming (HPIT-GADSC)
Thank you both! Yes, I can see there is the '.out' file with clear proof that the process was 'killed'. So we can prove this issue now! And it is also true that we must rely on the JVM itself for proof that the kill was due to OOM. Thank you both, this is a very good learning. Thanks, Ming

Given a Put object, is there any way to change the timestamp of it?

2015-01-20 Thread Liu, Ming (HPIT-GADSC)
() for it. Is there any reason it is not provided? I hope a future release of Mutation can expose a setTimestamp() method; is that possible? If so, my job will get much easier... Thanks, Ming

RE: Given a Put object, is there any way to change the timestamp of it?

2015-01-21 Thread Liu, Ming (HPIT-GADSC)
Thanks Ted! This is exactly what I need. It will be a memory copy, but it solves my problem. Hope HBase can provide a setTimestamp() method in a future release. Best Regards, Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Wednesday, January 21, 2015 11:30 AM
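
The "memory copy" workaround the reply refers to can be modeled without hbase-client: since the object cannot be mutated in place, rebuild each cell with the desired timestamp. The class below is a plain stand-in, not the HBase Cell API; with hbase-client the analogous move is copying each cell of the old Put into a new Put via `addColumn(family, qualifier, newTs, value)`.

```java
// Model of the copy-instead-of-mutate workaround from the thread, using
// a plain immutable cell class. This is NOT the HBase API -- just the
// pattern: build a fresh object carrying the new timestamp.
public class CellCopy {
    static final class Cell {
        final String qualifier;
        final String value;
        final long timestamp;
        Cell(String qualifier, String value, long timestamp) {
            this.qualifier = qualifier;
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    static Cell withTimestamp(Cell c, long newTs) {
        return new Cell(c.qualifier, c.value, newTs);  // copy, don't mutate
    }

    public static void main(String[] args) {
        Cell before = new Cell("q", "v", 100L);
        Cell after = withTimestamp(before, 200L);
        System.out.println(before.timestamp + " -> " + after.timestamp); // prints: 100 -> 200
    }
}
```
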

RE: managing HConnection

2015-02-16 Thread Liu, Ming (HPIT-GADSC)
? Also, as David Chen asked, if all threads share the same HConnection, it may have limitations supporting high throughput, so a pool of connections may be better? Thanks, Ming -Original Message- From: Serega Sheypak [mailto:serega.shey...@gmail.com] Sent: Wednesday, February 04, 2015 1:02 AM

Durability of in-memory column family

2015-01-05 Thread Liu, Ming (HPIT-GADSC)
? Or in other words, is the data in an in-memory CF as safe as in an ordinary CF? No difference? I could test it myself, but it needs some time, so I would like to be lazy and ask for help here :) If someone happens to know the answer, thanks in advance! Thanks, Ming

HTable or HConnectionManager, how a client connect to HBase?

2015-02-14 Thread Liu, Ming (HPIT-GADSC)
more and more. Could someone kindly help me with this newbie question? Thanks in advance. Thanks, Ming

RE: HTable or HConnectionManager, how a client connect to HBase?

2015-02-23 Thread Liu, Ming (HPIT-GADSC)
Thanks, Enis, Your reply is very clear, I finally understand it now. Best Regards, Ming -Original Message- From: Enis Söztutar [mailto:enis@gmail.com] Sent: Thursday, February 19, 2015 10:41 AM To: hbase-user Subject: Re: HTable or HConnectionManager, how a client connect to HBase

RE: how to use RegionCoprocessorEnvironment getSharedData() to share data among coprocessors?

2015-04-15 Thread Liu, Ming (HPIT-GADSC)
on ZooKeeper too much, no idea why. Is there any other good way I can use to share data among different coprocessors? Thanks, Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Wednesday, April 15, 2015 8:25 PM To: user@hbase.apache.org Subject: Re: how to use

RE: how to use RegionCoprocessorEnvironment getSharedData() to share data among coprocessors?

2015-04-15 Thread Liu, Ming (HPIT-GADSC)
to further understand, and will feed back here if I have more findings or any other good method to share data in a simple way. Thanks, Ming -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Thursday, April 16, 2015 5:16 AM To: user@hbase.apache.org Subject: Re: how

how to use RegionCoprocessorEnvironment getSharedData() to share data among coprocessors?

2015-04-14 Thread Liu, Ming (HPIT-GADSC)
confirmed by the hbase shell status 'detailed' command. There are not many examples I can find about how to use getSharedData(); could someone help me here? What is missing in my simple code? Thanks very much in advance! Thanks, Ming

What does Apache HBase do?

2022-05-18 Thread Turritopsis Dohrnii Teo En Ming
Subject: What does Apache HBase do? Good day from Singapore, I notice that my company/organization uses Apache HBase. What does it do? Just being curious. Regards, Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore 18 May 2022 Wed