Re: Incremental snapshot export

2018-05-29 Thread Ted Yu
See previous reply.

HBASE-14123 has been integrated into the master branch, which corresponds to
HBase 3.0.

FYI

On Tue, May 29, 2018 at 9:02 PM, Manjeet Singh wrote:

> Hi All
>
> Is incremental snapshot export from a source cluster to a destination
> cluster available?
> Can anyone suggest the best way to export/import an entire table from a
> source cluster to a destination cluster (it's a live system)?
>
> Thanks
> Manjeet singh
>
> On Thu, 19 Jan 2017, 06:31 Neelesh,  wrote:
>
> > Thanks Ted!
> >
> > On Wed, Jan 18, 2017 at 9:11 AM, Ted Yu  wrote:
> >
> > > Currently ExportSnapshot utility doesn't support incremental export.
> > > Here is the help message for overwrite:
> > >
> > > static final Option OVERWRITE = new Option(null, "overwrite", false,
> > >     "Rewrite the snapshot manifest if already exists.");
> > >
> > > Managing dependencies across snapshots may not be trivial (considering
> > > region split / merge in between snapshots).
> > >
> > > If you are interested, you can watch HBASE-14123, where incremental
> > > backup / restore has solved this problem.
> > >
> > > Cheers
> > >
> > > On Wed, Jan 18, 2017 at 9:04 AM, Neelesh  wrote:
> > >
> > > > Hi,
> > > >   Does the ExportSnapshot utility incrementally export HFiles? From
> > > > the code, it looks like I can specify an overwrite flag to delete and
> > > > recreate the output dir, but there is no control over individual
> > > > HFiles. This is on HBase 1.1.2.
> > > >
> > > > I was wondering how difficult it would be to extend ExportSnapshot to
> > > > optionally skip copying HFiles that already exist, given that HFiles
> > > > in the context of snapshots are immutable.
> > > >
> > > >
> > > > Thanks!
> > > > -neelesh
> > > >
> > >
> >
>
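The skip-if-present idea discussed in this thread can be sketched as follows, assuming snapshot HFiles are immutable. This is an illustrative sketch only, using plain java.nio.file in place of Hadoop's FileSystem API; the names IncrementalCopy and copyIfAbsent are hypothetical and not part of ExportSnapshot.

```java
// A minimal sketch of "skip the copy if the file is already present",
// relying on the fact that HFiles referenced by a snapshot are immutable.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class IncrementalCopy {
    // Copy src into destDir unless a file with the same name and size is
    // already there; returns true only when a copy actually happened.
    static boolean copyIfAbsent(Path src, Path destDir) throws IOException {
        Path dest = destDir.resolve(src.getFileName());
        if (Files.exists(dest) && Files.size(dest) == Files.size(src)) {
            return false; // immutable HFile already exported; skip it
        }
        Files.copy(src, dest);
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path srcDir = Files.createTempDirectory("snapshot-src");
        Path destDir = Files.createTempDirectory("snapshot-dest");
        Path hfile = Files.write(srcDir.resolve("hfile1"), new byte[]{1, 2, 3});
        System.out.println(copyIfAbsent(hfile, destDir)); // first export: copies
        System.out.println(copyIfAbsent(hfile, destDir)); // re-export: skips
    }
}
```

A real extension of ExportSnapshot would also need to verify checksums and handle region split / merge between snapshots, which is the dependency-tracking problem Ted mentions.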


Re: Incremental snapshot export

2018-05-29 Thread Manjeet Singh
Hi All

Is incremental snapshot export from a source cluster to a destination
cluster available?
Can anyone suggest the best way to export/import an entire table from a
source cluster to a destination cluster (it's a live system)?

Thanks
Manjeet singh

On Thu, 19 Jan 2017, 06:31 Neelesh,  wrote:

> Thanks Ted!
>
> On Wed, Jan 18, 2017 at 9:11 AM, Ted Yu  wrote:
>
> > Currently ExportSnapshot utility doesn't support incremental export.
> > Here is the help message for overwrite:
> >
> > static final Option OVERWRITE = new Option(null, "overwrite", false,
> >     "Rewrite the snapshot manifest if already exists.");
> >
> > Managing dependencies across snapshots may not be trivial (considering
> > region split / merge in between snapshots).
> >
> > If you are interested, you can watch HBASE-14123, where incremental
> > backup / restore has solved this problem.
> >
> > Cheers
> >
> > On Wed, Jan 18, 2017 at 9:04 AM, Neelesh  wrote:
> >
> > > Hi,
> > >   Does the ExportSnapshot utility incrementally export HFiles? From
> > > the code, it looks like I can specify an overwrite flag to delete and
> > > recreate the output dir, but there is no control over individual
> > > HFiles. This is on HBase 1.1.2.
> > >
> > > I was wondering how difficult it would be to extend ExportSnapshot to
> > > optionally skip copying HFiles that already exist, given that HFiles
> > > in the context of snapshots are immutable.
> > >
> > >
> > > Thanks!
> > > -neelesh
> > >
> >
>


Re: can not write to HBase

2018-05-29 Thread Kang Minwoo
I left a comment on JIRA.

( https://issues.apache.org/jira/browse/HBASE-15871 )

Best regards,
Minwoo Kang


From: Sean Busbey
Sent: Tuesday, May 29, 2018, 23:12
To: user@hbase.apache.org
Subject: Re: can not write to HBase

On Tue, May 29, 2018 at 1:25 AM, Kang Minwoo  wrote:
> Yes, I use reverse scan at that time.
> The situation you have shared exactly matches our situation.
>
> Thank you for sharing the good material!
>
> And I think HBASE-15871 should be backported to the 1.2 branch.
>

Let's discuss suitability for backport of HBASE-15871 either on a
backport-specific JIRA or the dev@hbase mailing list.

(For context, I'm the release manager for the 1.2 release line.)


Re: AntsDB 18.05.27 is released with TPC-C benchmark and LGPL

2018-05-29 Thread Stack
On Mon, May 28, 2018 at 12:19 AM, Water Guo  wrote:

> 
>
> I also conducted a TPC-C benchmark using BenchmarkSQL 4.1.1 from
> PostgreSQL community. It shows that AntsDB can handle row lock and
> transaction commit/rollback very efficiently. Result is published at
> http://www.antsdb.com/?p=207.
>
> As always your feedback is welcome and please follow the project on GitHub
> at https://github.com/waterguo/antsdb


I love it.

Looking at the blog post, it would have been handy to have a list of steps
for running the TPC-C benchmark. I want to try it -- it would be cool to
have TPC-C as a loading tool -- but I'm too lazy to dig around looking for
how to load the schema, etc. I'll do it, but maybe you can dump your bash
history for the setup and the run at the tail of the log.

Thank you,
S



>
>
> ~water
>


Looking for folks to test out HBase bindings for upcoming YCSB 0.14.0 release

2018-05-29 Thread Sean Busbey
Hi HBase!

The YCSB project is currently testing out release candidates for our 0.14.0 
release.

This release will contain a number of changes to our HBase-related client
options, namely the addition of bindings that use the HBase shaded client
libraries specific to 1.2, 1.4, and 2.0.

If anyone has a spare 30 or so minutes, we could use help testing out things so 
that the HBase clients can stay in our "tested in a release and supported" 
category. For details on what's involved in testing and where to get the 
release candidate, please see the following issue:

https://github.com/brianfrankcooper/YCSB/issues/1117

-
busbey


Re: can not write to HBase

2018-05-29 Thread Sean Busbey
On Tue, May 29, 2018 at 1:25 AM, Kang Minwoo  wrote:
> Yes, I use reverse scan at that time.
> The situation you have shared exactly matches our situation.
>
> Thank you for sharing the good material!
>
> And I think HBASE-15871 should be backported to the 1.2 branch.
>

Let's discuss suitability for backport of HBASE-15871 either on a
backport-specific JIRA or the dev@hbase mailing list.

(For context, I'm the release manager for the 1.2 release line.)


Re: can not write to HBase

2018-05-29 Thread Kang Minwoo
Yes, I use reverse scan at that time.
The situation you have shared exactly matches our situation.

Thank you for sharing the good material!

And I think HBASE-15871 should be backported to the 1.2 branch.



(Yes, I am Korean.)

Best regards,
Minwoo Kang


From: Jungdae Kim
Sent: Tuesday, May 29, 2018, 11:21
To: user@hbase.apache.org
Subject: Re: can not write to HBase

The memstore flusher waits until all current store scanners' operations
(next()) are done.
I think you have to check why scanner.next() is so slow.

Do your applications (HBase clients) use reverse scans?

If so, I think you are suffering from the issue related to HBASE-15871
(https://issues.apache.org/jira/browse/HBASE-15871).
If you want more details about this issue or the memstore flusher, please
check the following link; the notes are written in Korean (case #4: pages 48-55).
(I think you are Korean. ^^)

 -
https://www.evernote.com/shard/s167/sh/39eb6b44-25e7-4e61-ad2a-a0d1b076c7d1/159db49e3e49b189

Best regards,
Jeongdae Kim


Sincerely, Jungdae Kim.
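
The contention described above, the flusher blocking until every scanner's next() call returns, can be sketched with a plain ReentrantLock. This is an illustrative sketch only, not HBase code; FlusherBlocked and the latch names are made up, and the real counterparts are StoreScanner.updateReaders() waiting on an in-flight next().

```java
// Sketch: a "flusher" thread cannot take the lock while a slow "scanner"
// holds it, mirroring the memstore flusher waiting on a slow scan.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class FlusherBlocked {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock scannerLock = new ReentrantLock();
        CountDownLatch lockHeld = new CountDownLatch(1);
        CountDownLatch scanDone = new CountDownLatch(1);

        Thread scanner = new Thread(() -> {
            scannerLock.lock();   // a slow (e.g. reverse) scan holds the lock
            lockHeld.countDown();
            try {
                scanDone.await(); // simulate a long-running next()
            } catch (InterruptedException ignored) {
            } finally {
                scannerLock.unlock();
            }
        });
        scanner.start();
        lockHeld.await();

        // While the scan is in flight, the flusher cannot acquire the lock.
        System.out.println("flusher can flush while scan in flight: "
            + scannerLock.tryLock());

        scanDone.countDown(); // the scan finishes and releases the lock
        scanner.join();
        scannerLock.lock();   // now the flusher proceeds, like updateReaders()
        System.out.println("flusher acquired lock after scan finished: true");
        scannerLock.unlock();
    }
}
```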


On Thu, May 24, 2018 at 1:22 PM, Kang Minwoo wrote:

> I have a same error on today.
> thread dump is here.
>
> 
>
> Thread 286 (MemStoreFlusher.1):
>   State: WAITING
>   Blocked count: 10704
>   Waited count: 10936
>   Waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@2afc16fd
>   Stack:
>     sun.misc.Unsafe.park(Native Method)
>     java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>     java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>     java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>     java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>     org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:693)
>     org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1093)
>     org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1072)
>     org.apache.hadoop.hbase.regionserver.HStore.access$700(HStore.java:118)
>     org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2310)
>     org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2386)
>     org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2108)
>     org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2070)
>     org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1961)
>     org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1887)
>     org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:514)
>     org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:475)
>     org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:75)
>
>
> 
>
> I deleted many rows these days.
> I think that was a factor.
>
> Best regards,
> Minwoo Kang
>
> 
> From: Kang Minwoo
> Sent: Wednesday, May 23, 2018, 16:53
> To: Hbase-User
> Subject: Re: can not write to HBase
>
> In HRegion#internalFlushCacheAndCommit there is the following code.
>
> synchronized (this) {
>   notifyAll(); // FindBugs NN_NAKED_NOTIFY
> }
>
> One question: where is the lock acquired?
>
> Best regards,
> Minwoo Kang
>
> 
> From: Kang Minwoo
> Sent: Wednesday, May 23, 2018, 16:37
> To: Hbase-User
> Subject: Re: can not write to HBase
>
> Next time, if I have the same problem, I will save the jstack of the RS.
> (I could not think of saving the jstack this time.)
>
> I did not see any special logs.
> There was only a WARN log saying that the HBase scan was slow.
>
> Best regards,
> Minwoo Kang
>
> 
> From: Yu Li
> Sent: Wednesday, May 23, 2018, 15:53
> To: Hbase-User
> Subject: Re: can not write to HBase
>
> Please save the jstack of the RS when a slow flush is ongoing and confirm
> whether it is stuck in the HDFS writing phase. If so, check the log of the
> local DataNode co-located with the RegionServer to see whether there are
> any noticeable exceptions.
>
> Best Regards,
> Yu
>
> On 23 May 2018 at 14:19, Kang Minwoo  wrote:
>
> > @Duo Zhang
> > This means that you're writing too fast and the memstore has reached its
> > upper limit. Are the flush and compaction fine on the RS side?
> >
> > -> No, the flush took a very long time.
> > I attach the code that took a long time to run (about 30 min).
> >
> > https://github.com/apache/hbase/blob/branch-1.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L2424-L2508
> >
> > Best regards,
> > Minwoo Kang
> >
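
To close the loop on the question quoted above ("Where is the lock acquired?"): in Java, the synchronized (this) block itself acquires the object's intrinsic monitor on entry and releases it on exit, and notifyAll() is only legal while that monitor is held. A minimal, self-contained sketch with illustrative names (this is not HBase code):

```java
// Sketch: notifyAll() inside synchronized succeeds because the block
// acquired the monitor; outside it, the JVM throws
// IllegalMonitorStateException.
public class NakedNotify {
    public static void main(String[] args) {
        Object region = new Object();
        synchronized (region) { // the monitor is acquired right here...
            region.notifyAll(); // ...so this call is legal
        }
        try {
            region.notifyAll(); // no monitor held, so this must fail
        } catch (IllegalMonitorStateException e) {
            System.out.println("notifyAll without the lock: "
                + e.getClass().getSimpleName());
        }
    }
}
```

This is also why FindBugs tags the HRegion snippet as NN_NAKED_NOTIFY: the pattern flags a notify with no state change inside the synchronized block, not a missing lock.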