Congratulations!!
2016-11-30 8:32 GMT+08:00 Stephen Jiang :
> Congratulations, Phil!
>
> On Tue, Nov 29, 2016 at 2:42 PM, Andrew Purtell wrote:
>
>> Congratulations and welcome, Phil!
>>
>>
>> On Tue, Nov 29, 2016 at 1:49 AM, Duo Zhang
The performance looks great!
2016-11-19 18:03 GMT+08:00 Ted Yu :
> Opening a JIRA would be fine.
> This makes it easier for people to obtain the patch(es).
>
> Cheers
>
>> On Nov 18, 2016, at 11:35 PM, Anoop John wrote:
>>
>> Because of some
Congrats! :)
2016-10-16 8:19 GMT+08:00 Jerry He :
> Congratulations, Stephen.
>
> Jerry
>
> On Fri, Oct 14, 2016 at 12:56 PM, Dima Spivak wrote:
>
>> Congrats, Stephen!
>>
>> -Dima
>>
>> On Fri, Oct 14, 2016 at 11:27 AM, Enis Söztutar
Not sure which version of community HBase "hbase m7" corresponds to.
Is your batch load job some kind of bulk load, or does it just call the
HTable API to write data to HBase?
2016-09-22 14:30 GMT+08:00 Dima Spivak :
> Hey Deepak,
>
> Assuming I understand your question, I think you'd be better
OH! congrats, DUO!
2016-09-07 12:26 GMT+08:00 Stack :
> On behalf of the Apache HBase PMC I am pleased to announce that 张铎
> has accepted our invitation to become a PMC member on the Apache
> HBase project. Duo has healthy notions on where the project should be
> headed and
AND heap
> -Xmx32G,-Xms32G, -Xmn4G
>
> Thanks,
> lujinhong
>
> > On 2016-06-22, at 15:53, Heng Chen <heng.chen.1...@gmail.com> wrote:
> >
> > How many regions do you have for the table? 8000 qps for one RS or for
> the
> > whole table? What's yo
hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
> On Wed, Jun 22, 20
How many regions do you have for the table? Is the 8000 qps for one RS or for
the whole table? What's your Java heap size now? And what's your HBase
version?
2016-06-22 12:39 GMT+08:00 jinhong lu :
> I got a cluster of 200 regionserver, and one of the tables is about 3T and
>
Could you paste the whole jstack and the related RS log? It seems the row
write lock was held by some thread. We need more information to find it.
2016-06-22 13:48 GMT+08:00 vishnu rao :
> need some help. this has happened for 2 of my servers
> -
>
>
conf to
> false
>
> At 2016-06-16 18:18:44, "Heng Chen" <heng.chen.1...@gmail.com> wrote:
> >bq. if we do not set any user tables IN_MEMORY to true, then the whole
> >hbase just need to cache hbase:meta data to in_memory LruBlockCach
bq. if we do not set any user tables IN_MEMORY to true, then the whole
hbase just need to cache hbase:meta data to in_memory LruBlockCache.
Did you set blockcache to false for the other tables?
2016-06-16 16:21 GMT+08:00 WangYQ :
> in hbase 0.98.10, if we use
Can you please
> give us a little more of details? like which version of HBase are you
> using and a stack dump from the RS?
>
> cheers,
> esteban.
>
>
> --
> Cloudera, Inc.
>
>
> On Mon, Jun 13, 2016 at 8:35 PM, Heng Chen <heng.chen.1...@gmail.com>
>
Currently, we have found that sometimes our RS handlers are occupied by some
big requests. For example, when handlers read the same big block from HDFS
simultaneously, all handlers wait except the one handler that actually reads
the block from HDFS and puts it in the cache; the other handlers then read the block from
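The "one handler reads, the others wait" behavior described above can be sketched with the classic Future-based memoizer pattern. This is an illustrative model only, not HBase's actual block cache code; `readFromHdfs` and the read counter are stand-ins I made up for the sketch.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.FutureTask;

// Sketch: the first caller for a block key performs the (simulated) HDFS
// read; concurrent callers block on the same Future, so a hot block is
// loaded exactly once and everyone else reads it from the cache.
public class BlockCacheSketch {
    static final ConcurrentHashMap<String, FutureTask<byte[]>> cache = new ConcurrentHashMap<>();
    static int hdfsReads = 0; // instrumentation for the sketch only

    static byte[] readFromHdfs(String key) { // stand-in for the real HDFS read
        hdfsReads++;
        return key.getBytes();
    }

    static byte[] getBlock(String key) throws Exception {
        FutureTask<byte[]> task = cache.get(key);
        if (task == null) {
            FutureTask<byte[]> created = new FutureTask<>(() -> readFromHdfs(key));
            task = cache.putIfAbsent(key, created);
            if (task == null) { // we won the race: do the read ourselves
                task = created;
                task.run();
            }
        }
        return task.get(); // losers wait here until the single read finishes
    }
}
```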
They have different default values, and according to the contract of
HSTORE_OPEN_AND_CLOSE_THREADS_MAX, it should be OK. It just represents the
maximum number of threads in the pool.
/**
* The default number for the max number of threads used for opening and
* closing stores or store files in parallel
*/
Is something wrong in the Snappy library?
Have you tried not using compression?
2016-06-03 11:13 GMT+08:00 吴国泉wgq :
> HI STACK:
>
> 1. The log is very large, so I picked part of it, but it doesn't seem
> to provide valuable info. Here is the region named
>
+ " Prepared empty flush with seqId:" +
> flush.getFlushSequenceNumber());
>
> I searched for them in the log you attached to HBASE-15900 but didn't find
> any occurrence.
>
> FYI
>
> On Mon, May 30, 2016 at 2:59 AM, Heng Chen <heng.chen.1...@gmail.com>
bled before set writestate.flushing to be true.
So if region.close wakes up in writestate.wait but the lock has been acquired
by HRegion.replayWALFlushStartMarker, flushing will be set to true
again, and region.close will be stuck in writestate.wait forever.
Could this happen in practice?
2016-05-27 10:44
And there is another question about the failed-close state: does it mean a
region in this state can be read and written normally?
2016-05-26 12:48 GMT+08:00 Heng Chen <heng.chen.1...@gmail.com>:
>
> On master web UI, i could see region (c371fb20c372b8edbf54735409ab5c4a)
> always
On the master web UI, I could see region (c371fb20c372b8edbf54735409ab5c4a)
always in failed-close state, so the balancer could not run.
I checked the region on the RS and found these logs about the region:
2016-05-26 12:42:10,490 INFO [MemStoreFlusher.1]
regionserver.MemStoreFlusher: Waited 90447ms on a
In my company, we calculate UV/PV offline in batches and update every day.
If you do it online, url + timestamp could be the rowkey.
2016-05-16 18:13 GMT+08:00 齐忠 <cente...@gmail.com>:
> Yes, like google analytics.
>
> 2016-05-16 17:48 GMT+08:00 Heng Chen <heng.chen.1...@gmail.
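A minimal sketch of the url + timestamp rowkey idea from the reply above, assuming a reversed, zero-padded timestamp is appended so that the newest events for a URL sort first. The `|` separator and the padding width are my assumptions, not something stated in the thread.

```java
// Sketch: rowkey = url + "|" + (Long.MAX_VALUE - timestamp), zero-padded
// so rowkeys sort lexicographically. Newer events get a smaller suffix and
// therefore sort first within the same URL.
public class UvRowKey {
    static String rowKey(String url, long epochSeconds) {
        return url + "|" + String.format("%019d", Long.MAX_VALUE - epochSeconds);
    }
}
```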
You want to calculate UV/PV online?
2016-05-16 16:46 GMT+08:00 齐忠 :
> I have very large log(50T per day),
>
> My log event as follows
>
> url,visitid,requesttime
>
> http://www.aaa.com?a=b=d=f, 1, 1463387380
> http://www.aaa.com?a=b=d=fa, 1, 1463387280
>
That's great! We are ready to use SSD to improve read performance now.
2016-04-23 8:25 GMT+08:00 Stack :
> It is well worth the read. It goes deep so is a bit long and I had to cut
> it up to do Apache Blog sized bits. Start reading here:
>
BUG level.
>
> Was 9151f75eaa7d00a81e5001f4744b8b6a among the regions which didn't finish
> split ?
>
> Can you pastebin more of the master log during this period ?
>
> Any other symptom that you observed ?
>
> Cheers
>
> On Thu, Apr 7, 2016 at 12:59 AM, Heng Chen <heng.chen.1..
/apolo_pdf/9151f75eaa7d00a81e5001f4744b8b6a/m
2016-04-07 12:25:54,033 DEBUG [region-location-4]
regionserver.HRegionFileSystem: No StoreFiles for:
hdfs://hdfs-master:8020/hbase/data/default/apolo_pdf/9151f75eaa7d00a81e5001f4744b8b6a/m
2016-04-07 15:50 GMT+08:00 Heng Chen <heng.chen.1...@gmail.
Hi, guys:
I upgraded our cluster recently. After the upgrade, I found some weird
problems:
In master.log, there are a lot of logs like the ones below:
2016-04-07 11:57:00,597 DEBUG [region-location-0]
regionserver.HRegionFileSystem: No StoreFiles for:
OpenTSDB +1 :)
2016-03-23 11:49 GMT+08:00 Wojciech Indyk :
> Hi Prem!
> Look at OpenTSDB http://opentsdb.net/
> --
> Kind regards/ Pozdrawiam,
> Wojciech Indyk
> http://datacentric.pl
>
>
> 2016-03-07 11:26 GMT+01:00 Prem Yadav :
> > Hi,
> > we
bq. the table I created by default having only one region
Why not pre-split the table into more regions when creating it?
2016-03-16 11:38 GMT+08:00 Ted Yu :
> When one region is split into two, both daughter regions are opened on the
> same server where parent region was opened.
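One way to act on the pre-splitting suggestion, sketched under an assumed salting scheme: if each rowkey starts with a single hex character, 15 split points yield 16 evenly loaded regions. The salt prefix is my assumption; the resulting `byte[][]` is what you would hand to `Admin.createTable(descriptor, splits)` in the HBase client API.

```java
import java.nio.charset.StandardCharsets;

// Sketch: split points "1".."f" pre-split a table into 16 regions, assuming
// rowkeys are prefixed with a one-hex-char salt so writes spread across
// region servers from the start.
public class HexSplits {
    static byte[][] hexSplits() {
        byte[][] splits = new byte[15][];
        for (int i = 1; i < 16; i++) {
            splits[i - 1] = Integer.toHexString(i).getBytes(StandardCharsets.UTF_8);
        }
        return splits;
    }
}
```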
What is your HLog file count during the test? Is it always at the max number
(IIRC, the default is 34?).
How many DNs are in your HDFS?
2016-03-09 1:31 GMT+08:00 Frank Luo :
> 0.98
>
> "Light" means not enough to trigger compacts during actively write.
>
> -Original Message-
>
:41 GMT+08:00 Stack <st...@duboce.net>:
> On Wed, Feb 24, 2016 at 3:31 PM, Heng Chen <heng.chen.1...@gmail.com>
> wrote:
>
> > The story is I run one MR job on my production cluster (0.98.6), it
> needs
> > to scan one table during map procedure.
> >
Thanks @ted, your suggestions about #2 and #3 are exactly what I need!
2016-02-25 10:39 GMT+08:00 Heng Chen <heng.chen.1...@gmail.com>:
> I pick up some logs in master.log about one region
> "ad283942aff2bba6c0b94ff98a904d1a"
>
>
> 2016-02-24 16:24:35,610
u pastebin related server logs w.r.t. these two regions so that we
> can have more clue ?
>
> For #2, please see http://hbase.apache.org/book.html#big.cluster.config
>
> For #3, please see
>
> http://hbase.apache.org/book.html#_running_multiple_workloads_on_a_single_cluster
The story is that I ran one MR job on my production cluster (0.98.6); it
needed to scan one table during the map procedure.
Because of the heavy load from the job, all my RSes crashed due to OOM.
After I restarted all the RSes, I found one problem:
all regions were reopened on one RS, and the balancer could not
rt batch size is 50. You mean to say that 50 is also a
> lot?
> >
> > However, AsyncProcess is complaining about 2000 actions.
> >
> > I tried with upsert batch size of 5 also. But it didnt help.
> >
> >
> > On Sun, Feb 14, 2016 at 6:4
2016-02-14 12:34:23,593 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish
It means you have too many writes in flight; please decrease the batch size
of your puts and balance your requests across the RSes.
2016-02-15 4:53 GMT+08:00 anil gupta
I changed the Phoenix lib from 4.6.0 to 4.5.1, and the logs came back...
2016-02-14 15:27 GMT+08:00 Heng Chen <heng.chen.1...@gmail.com>:
> I found some hints. The logs seem to disappear after I installed
> Phoenix; some suspicious logs I found are below:
>
> SLF4J: Class path conta
I am not sure why this happens, this is my command
maintain 11444 66.9 1.1 10386988 1485888 pts/0 Sl 12:33 6:30
/usr/java/jdk/bin/java -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9
%p -Xmx8000m -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCDateStamps
This happened after I upgraded my cluster from 0.98 to 1.1.
2016-02-14 12:47 GMT+08:00 Heng Chen <heng.chen.1...@gmail.com>:
> I am not sure why this happens, this is my command
>
> maintain 11444 66.9 1.1 10386988 1485888 pts/0 Sl 12:33 6:30
> /usr/java/jdk/bin/java -
didn't go through.
>
> Consider using third party image hosting site.
>
> Pastebinning server log would help.
>
> Cheers
>
> On Mon, Jan 11, 2016 at 6:28 PM, Heng Chen <heng.chen.1...@gmail.com>
> wrote:
>
> > [image: inline image 1]
> >
> >
> > HBASE
> Cheers
>
> On Mon, Jan 11, 2016 at 6:52 PM, Heng Chen <heng.chen.1...@gmail.com>
> wrote:
>
> > Some relates region log on RS
> >
> >
> > 2016-01-12 10:45:01,570 INFO
> > [PriorityRpcServer.handler=14,queue=0,port=16020]
> > reg
[image: inline image 1]
HBASE-1.1.1 hadoop-2.5.0
I want to recover these regions. How? Asking for help.
@tedyu, should we add something like 'list server table' to list all
regions of one table on a given RS?
In my practice, I have found it is often needed.
2015-12-04 4:48 GMT+08:00 Ted Yu :
> There is get_splits command but it only shows the splits.
>
> status 'detailed' would
So, maybe we can use 1212 + customerId as the rowKey.
BTW, what is 1212 used for?
2015-12-01 17:49 GMT+08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
> Hi chen,
>
> yes I have customerid column to represent each customers
>
>
>
> On Tue, Dec 1, 2015 at 3:11
Yeah, if you want to get all records about 1212, just scan rows with
prefix 1212
2015-12-01 16:27 GMT+08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
> so you want me to design row-key value by appending name column value to
> the rowkey
>
> On Tue, Dec 1, 2015 at
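The prefix scan suggested above can be understood as a plain range scan. Here is a sketch of computing the exclusive stop row for a prefix; it mirrors what I understand HBase's `Scan.setRowPrefixFilter` does internally, so treat it as illustrative rather than the library's actual code.

```java
import java.util.Arrays;

// Sketch: scanning all rows with a given prefix is equivalent to a range
// scan [prefix, stopRow), where stopRow is the prefix with its last
// non-0xFF byte incremented and any trailing 0xFF bytes dropped.
public class PrefixScan {
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1); // truncate after the bump
            }
        }
        return new byte[0]; // prefix was all 0xFF: scan to end of table
    }
}
```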
why not
1212 | 10, 11, 12, 13, 14, 15, 16, 27, 28 ?
2015-12-01 14:29 GMT+08:00 Rajeshkumar J :
> Hi Ted,
>
> This is my use case. I have to store values like this is it possible?
>
> RowKey | Values
>
> 1212 | 10,11,12
>
> 1212 | 13, 14, 15
>
> 1212 |
20
> 1212 | | 21
> 1212 | | 22
>
> On Tue, Dec 1, 2015 at 12:03 PM, Heng Chen <heng.chen.1...@gmail.com>
> wrote:
>
> > why not
> >
> > 1212 | 10, 11, 12, 13, 14, 15, 16, 27, 28 ?
> >
> > 2015-12-01 14:29 GMT+08:00 Rajeshkumar J <
org.apache.pig.backend.hadoop.hbase.HBaseStorage is in the Pig project.
*ERROR: pig script failed to validate: java.lang.RuntimeException: could not
instantiate 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with
arguments.*
This message means the arguments are not correct.
Please check your
It caused the regionserver to go down? Oh, could you post some regionserver logs?
2015-11-18 16:22 GMT+08:00 聪聪 <175998...@qq.com>:
> We recently found that a regionserver went down. Later, we found that it was
> because the client and server versions are not compatible. The client version is
> 1.0, the server version is
Oh, that will never happen.
Each put acquires a row lock to guarantee consistency.
2015-11-17 16:20 GMT+08:00 hongbin ma :
> i found a good article
> https://blogs.apache.org/hbase/entry/apache_hbase_internals_locking_and
> which seems to have answered my question.
>
> so
How about this way?

rowkey: Parent1
cf-children:
  col1-key: Child1Name, col1-value: Child1 information
  col2-key: Child2Name, col2-value: Child2 information
  ...

You can get one child's information
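A pure-Java model of this wide-row layout, illustrative only: in HBase each map entry would be a qualifier/value pair in the `cf-children` family of row `Parent1`, so one Get fetches the whole family and a single qualifier lookup fetches one child.

```java
import java.util.TreeMap;

// Sketch: one row per parent, one column qualifier per child. A sorted map
// models the column family's qualifier -> value layout.
public class ParentRow {
    static final TreeMap<String, String> cfChildren = new TreeMap<>();
    static {
        cfChildren.put("Child1Name", "Child1 information");
        cfChildren.put("Child2Name", "Child2 information");
    }
}
```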