Re: Bloom for SEEK_NEXT_USING_HINT (SKIP_SCAN in Phoenix)

2020-07-25 Thread Alexander Batyrshin
Looks like HBase-1.4 uses the ROWCOL bloom filter and not ROW on requestSeek(). Any ideas why it does not use the ROW bloom? > On 28 Jun 2020, at 14:37, Alexander Batyrshin <0x62...@gmail.com> wrote: > > Hello all, > Please tell me is bloom filters used for SCAN with SEEK_NEXT_USIN

Bloom for SEEK_NEXT_USING_HINT (SKIP_SCAN in Phoenix)

2020-06-28 Thread Alexander Batyrshin
Hello all, please tell me: are bloom filters used for SCAN with SEEK_NEXT_USING_HINT in the HBase-1.4 branch? If yes, do you have any public benchmarks?
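
A conceptual sketch of the distinction raised later in this thread (ROW vs ROWCOL blooms), not HBase internals: a ROW bloom filter is keyed on the row key alone, so a skip-scan seek that knows only its target row can probe it, while a ROWCOL bloom is keyed on row plus column and cannot answer a row-only probe. Plain Python sets stand in for the (probabilistic) bloom filters; all names here are illustrative.

```python
# Stand-ins for per-storefile bloom filters (real blooms are probabilistic;
# sets give exact answers, which is enough to show the keying difference).
row_bloom = {b"row1", b"row7"}               # ROW bloom: keyed on row only
rowcol_bloom = {b"row1/f:a", b"row7/f:b"}    # ROWCOL bloom: keyed on row+column

def may_contain_row(target_row):
    # A row-only seek hint can consult a ROW bloom directly.
    return target_row in row_bloom

def may_contain_rowcol(target_row, column):
    # A ROWCOL bloom needs the column too, so a row-only probe can't use it.
    return target_row + b"/" + column in rowcol_bloom

print(may_contain_row(b"row5"))             # False -> the seek can skip this file
print(may_contain_rowcol(b"row1", b"f:a"))  # True
```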

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-14 Thread Alexander Batyrshin
> On 14 May 2020, at 20:21, Bharath Vissapragada wrote: > >> Maybe the TS corruption issue is somehow linked with another issue that we got > - https://issues.apache.org/jira/browse/HBASE-22862 > > > We are running into this too. Our current

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-14 Thread Alexander Batyrshin
can to make sure the delete marker is inserted and nothing funky is >> happening --- >> >> hbase(main):003:0> scan 't1', {RAW=>true} >> ROW COLUMN+CELL >> >> >> row1col

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-14 Thread Alexander Batyrshin
:0> scan 't1', {RAW=>true} > ROW COLUMN+CELL > row1 column=f:a, timestamp=9223372036854775807, type=Delete > row1 column=f:a, timestamp=9223372036854775807, value=val >

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Alexander Batyrshin
.4.10? > > > On Tue, 12 May 2020 at 13:49, Alexander Batyrshin <0x62...@gmail.com > <mailto:0x62...@gmail.com>> > wrote: > >> Any ideas how to delete these rows? >> >> I see only this way: >> - back up data from the region that contain

Re: How to delete row with Long.MAX_VALUE timestamp

2020-05-12 Thread Alexander Batyrshin
Any ideas how to delete these rows? I see only this way: - back up data from the region that contains the “damaged” rows - close the region - remove the region files from HDFS - assign the region - copy the needed rows from the backup to the recreated region > On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gma

Re: How to delete row with Long.MAX_VALUE timestamp

2020-04-30 Thread Alexander Batyrshin
Can you afford to delete the whole CF for those rows? > > On Wed, 29 Apr 2020 at 14:41, junhyeok park > wrote: > >> I've been through the same thing. I use 2.2.0 >> >> On Wed, 29 Apr 2020 at 10:32 PM, Alexander Batyrshin <0x62...@gmail.com> wrote: >>

Re: How to delete row with Long.MAX_VALUE timestamp

2020-04-29 Thread Alexander Batyrshin
apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98 > > On Wed, 29 Apr 2020 at 08:57, Alexander Batyrshin > <0x62...@gmail.com > wrote: > > Hello all, > We have faced a strange situation: a table has rows

Re: How to delete row with Long.MAX_VALUE timestamp

2020-04-29 Thread Alexander Batyrshin
t; wouldn't filter it. > > [1] https://hbase.apache.org/book.html#version.delete > [2] > https://github.com/apache/hbase/blob/branch-1.4/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java#L98 > > On Wed, 29 Apr 2020 at 08:57, Alexander Batyrshin <0x62.

How to delete row with Long.MAX_VALUE timestamp

2020-04-29 Thread Alexander Batyrshin
Hello all, we have faced a strange situation: a table has rows with a Long.MAX_VALUE timestamp. These rows are impossible to delete, because the DELETE mutation uses a System.currentTimeMillis() timestamp. Is there any way to delete these rows? We use HBase-1.4.10. Example: hbase(main):037:0> scan
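
A minimal sketch of why the default Delete cannot remove these rows, based on the masking rule cited later in this thread (HBase book, section "version.delete"): a delete marker masks cells whose timestamp is less than or equal to the marker's timestamp. This is a simplified model of the semantics, not HBase code; the timestamp values are illustrative.

```python
LONG_MAX = 2**63 - 1  # java.lang.Long.MAX_VALUE

def visible(cells, delete_marker_ts):
    """Return the cells NOT masked by a delete marker at delete_marker_ts.

    A marker masks every cell with timestamp <= the marker's timestamp.
    """
    return [(ts, v) for ts, v in cells if ts > delete_marker_ts]

cells = [(LONG_MAX, "val")]  # the "damaged" cell at Long.MAX_VALUE
now_ms = 1588170000000       # System.currentTimeMillis() at delete time (example)

# A default Delete stamps the marker with the current time, which is far
# below Long.MAX_VALUE, so the cell survives:
print(visible(cells, now_ms))      # [(9223372036854775807, 'val')]

# A Delete issued with an explicit Long.MAX_VALUE timestamp masks the cell:
print(visible(cells, LONG_MAX))    # []
```

This is only the masking model; whether the marker itself survives compaction (as discussed further in the thread) is a separate question.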

Re: Replication current progress at position -1

2019-11-08 Thread Alexander Batyrshin
https://issues.apache.org/jira/browse/HBASE-23205 (not merged yet) > > I hope this will be helpful to you. > > On Thu, Nov 7, 2019 at 12:30 AM Alexander Batyrshin <0x62...@gmail.com> > wrote: > >> Hello all, >> Sometimes we observe that replication

Replication current progress at position -1

2019-11-06 Thread Alexander Batyrshin
Hello all, sometimes we observe that replication is not working on HBase-1.4.10: hbase07.prod.hbcluster: SOURCE: PeerID=lp_analytics, AgeOfLastShippedOp=0, SizeOfLogQueue=1, TimeStampsOfLastShippedOp=Thu Jan 01 03:00:00 MSK 1970, Replication Lag=1573052815347 SINK :
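
A short note on the numbers in that status line, under the assumption that the lag metric is computed as the current time minus TimeStampsOfLastShippedOp: when nothing has been shipped yet, the timestamp defaults to epoch zero (rendered as "Thu Jan 01 03:00:00 MSK 1970", since MSK is UTC+3), so the reported lag is effectively "now" in milliseconds.

```python
# Assumption: Replication Lag = now - TimeStampsOfLastShippedOp (both in ms).
last_shipped_ms = 0        # epoch zero: Jan 1 1970, 03:00 MSK (UTC+3)
now_ms = 1573052815347     # ~Nov 6 2019, matching the report date

lag_ms = now_ms - last_shipped_ms
print(lag_ms)  # 1573052815347 -- exactly the "Replication Lag" in the output
```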

Merge with snapshots

2019-11-06 Thread Alexander Batyrshin
I stumbled upon this warning in the Cloudera 6.x docs: Do not use merge in combination with snapshots. Merging two regions can cause data loss if snapshots or cloned tables exist for this table. The merge is likely to corrupt the snapshot and any tables cloned from the snapshot. If the table has been

Re: Snapshots TTL doesn't work in HBase-1.4.10

2019-11-04 Thread Alexander Batyrshin
On Mon, 4 Nov 2019 at 5:51 AM, Alexander Batyrshin <0x62...@gmail.com > <mailto:0x62...@gmail.com>> wrote: > Hello, > I observe that the snapshot TTL option has no effect in HBase-1.4.10. > Our snapshots are created every night: > > > snapshot 'VIRT_STORAGE', 'snapshot-

Snapshots TTL doesn't work in HBase-1.4.10

2019-11-03 Thread Alexander Batyrshin
Hello, I observe that the snapshot TTL option has no effect in HBase-1.4.10. Our snapshots are created every night: > snapshot 'VIRT_STORAGE', 'snapshot-VIRT_STORAGE-${SNAP_DATE}', {TTL => > 1209600} Right now I see: snapshot-VIRT_STORAGE-20191016 VIRT_STORAGE (Wed Oct
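
A sketch of the expected TTL behavior, under the assumptions that TTL is given in seconds and a snapshot becomes eligible for cleanup once its creation time plus TTL has passed. With TTL => 1209600 (14 days), a snapshot created on Oct 16 should already be gone by the Nov 3 report date; the dates below are taken from the message.

```python
from datetime import datetime, timezone

ttl_seconds = 1209600  # 14 days, as passed to the snapshot command
created = datetime(2019, 10, 16, tzinfo=timezone.utc)   # snapshot-VIRT_STORAGE-20191016
now = datetime(2019, 11, 3, tzinfo=timezone.utc)        # date of this report

# Expected cleanup condition: age exceeds the TTL.
expired = (now - created).total_seconds() > ttl_seconds
print(expired)  # True -- so the snapshot should already have been removed
```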

Re: ExportSnapshot to another cluster fails with: Error: java.io.FileNotFoundException: File does not exist: /hbase/archive/data/default/TABLE/...

2019-09-17 Thread Alexander Batyrshin
I’ve checked the logs from the HBase master and didn’t find any entries about cleaning these files > On 17 Sep 2019, at 02:05, 张铎(Duo Zhang) wrote: > > Try disabling the hfile cleaner at the destination cluster when exporting > the snapshot? > > Alexander Batyrshin <0x62...@gmail.co

Re: ExportSnapshot to another cluster fails with: Error: java.io.FileNotFoundException: File does not exist: /hbase/archive/data/default/TABLE/...

2019-09-16 Thread Alexander Batyrshin
Any ideas how to export the snapshot? Looks like -mapper 1 fixes this issue, but I can’t confirm 100% because it takes too long > On 16 Sep 2019, at 11:45, Alexander Batyrshin <0x62...@gmail.com> wrote: > > HBase version 1.4.10 > > Export command: sudo -u hadoop

Re: ExportSnapshot to another cluster fails with: Error: java.io.FileNotFoundException: File does not exist: /hbase/archive/data/default/TABLE/...

2019-09-16 Thread Alexander Batyrshin
HBase version 1.4.10 Export command: sudo -u hadoop /opt/hbase/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snapshot-TABLE -copy-to hdfs://hbase-cluster-2.datahouse/hbase > On 16 Sep 2019, at 11:32, Alexander Batyrshin <0x62...@gmail.com> wrote: > > I

Re: ExportSnapshot to another cluster fails with: Error: java.io.FileNotFoundException: File does not exist: /hbase/archive/data/default/TABLE/...

2019-09-16 Thread Alexander Batyrshin
I can’t find any logs from the HBase Master cleaner about deleting these archive files. > On 16 Sep 2019, at 03:02, Alexander Batyrshin <0x62...@gmail.com> wrote: > > Looks like the snapshot files somehow disappear at the destination cluster > > Complete stack trace: > > 2

ExportSnapshot to another cluster fails with: Error: java.io.FileNotFoundException: File does not exist: /hbase/archive/data/default/TABLE/...

2019-09-15 Thread Alexander Batyrshin
Looks like the snapshot files somehow disappear at the destination cluster. Complete stack trace: 2019-09-16 03:00:18,291 INFO [main] mapreduce.Job: Task Id : attempt_1563449245683_0180_m_43_2, Status : FAILED Error: java.io.FileNotFoundException: File does not exist:

Re: Region-server crash: Added a key not lexically larger than previous

2019-08-17 Thread Alexander Batyrshin
> S > > On Thu, Aug 15, 2019 at 8:57 AM Alexander Batyrshin <0x62...@gmail.com> > wrote: > >> >> Hello all, >> >> We observer error "Added a key not lexically larger than previous” that >> cause most of our region-servers to crash i

Re: Region-server crash: Added a key not lexically larger than previous

2019-08-15 Thread Alexander Batyrshin
Size ~ 4 000M, Store File Size ~ 1TB (FAST_DIFF) 23 columns in 1 column family > On 16 Aug 2019, at 04:14, Sean Kennedy wrote: > > Alex, > > How large is the table and how many columns? > > Thx > > Sean > > On Thursday, August 15, 2019, Alexander Ba

Region-server crash: Added a key not lexically larger than previous

2019-08-15 Thread Alexander Batyrshin
Hello all, we observe the error "Added a key not lexically larger than previous" that causes most of the region servers in our cluster to crash. HBase-1.4.10 2019-08-15 18:02:10,554 INFO [MemStoreFlusher.0] regionserver.HRegion: Flushing 1/1 column families, memstore=56.08 MB 2019-08-15
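
A simplified model of the invariant behind that error message, not HBase code: when a flush writes an HFile, keys must be appended in non-decreasing lexicographic byte order, and the writer aborts if an appended key sorts before the previous one. The function name and exact ordering check here are illustrative.

```python
def check_append_order(keys):
    """Reject an append sequence whose keys go lexicographically backwards.

    Models (loosely) the check an HFile writer performs during a flush.
    """
    prev = None
    for k in keys:
        # Python bytes compare lexicographically, like HBase byte[] keys.
        if prev is not None and k < prev:
            raise ValueError(
                f"Added a key not lexically larger than previous: {k!r} < {prev!r}")
        prev = k

check_append_order([b"row1", b"row2", b"row2a"])  # in order: fine
try:
    check_append_order([b"row2", b"row1"])        # out of order: flush aborts
except ValueError as e:
    print(e)
```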

Re: hbase:meta not online

2019-08-14 Thread Alexander Batyrshin
te the hbase:meta location. (in 1.4.8) >> Let me have a check. >> >> On Wed, Aug 14, 2019 at 1:11 AM Alexander Batyrshin <0x62...@gmail.com> >> wrote: >> >>> Hello, >>> We’ve got problem with hbase:meta that cause several servers to die. >>&

hbase:meta not online

2019-08-13 Thread Alexander Batyrshin
Hello, we’ve got a problem with hbase:meta that caused several servers to die. Our version is HBase-1.4.8 prod006 - flush hbase:meta and move it: Aug 13 15:46:23 prod006 hbase[63435]: 2019-08-13 15:46:23,019 INFO [prod006,60020,1564942121270_ChoreService_1] regionserver.HRegionServer:

Re: Speedup region transition

2019-08-08 Thread Alexander Batyrshin
RS is the part of a transition > that's going slow for you.) > > > On Thu, Aug 8, 2019 at 8:54 AM Alexander Batyrshin <0x62...@gmail.com> wrote: >> >> Hello all, >> Is there any way to make regions transition faster by configuration options?

Speedup region transition

2019-08-08 Thread Alexander Batyrshin
Hello all, is there any way to make region transitions faster via configuration options?
