Interesting behavior. Out of curiosity, I just tried it out on my local setup
(master/HEAD) to check whether we can trick HBase into deleting this bad row,
and the following worked for me. I don't know how you ended up with that row,
though (a bad bulk load? Just guessing).
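(The actual commands didn't survive into this digest. As a sketch of the same
idea, assuming the bad cell carries a far-future timestamp such as
Long.MAX_VALUE: place a delete marker at that exact timestamp rather than at
the current time. All table/row/column names below are hypothetical.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteBadCell {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      Delete d = new Delete(Bytes.toBytes("bad-row"));
      // Put the delete marker at the bad cell's exact timestamp. A plain
      // delete is stamped with the current time, which sorts below a
      // far-future cell and therefore never masks it.
      d.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Long.MAX_VALUE);
      table.delete(d);
    }
  }
}

The HBase shell equivalent is: delete 'my_table', 'bad-row', 'cf:q', <timestamp>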
The table is ~10 TB of SNAPPY-compressed data. I don't have such a big time
window on production for re-inserting all the data.
I don't know how we got those cells. I can only assume it was Phoenix and/or
WAL replay after a region server crash.
> On 12 May 2020, at 18:25, Wellington Chevreuil wrote:
How large is this table? Can you afford to re-insert all current data into a
new, temp table? If so, you could write a MapReduce job that scans this table
and rewrites all its cells to the new, temp table. I have verified that 1.4.10
does have the timestamp-replacing logic here:
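(Not the exact job referenced above, just a minimal single-process sketch of
the rewrite idea; table names are hypothetical, and a real run at 10 TB would
wrap the same per-cell logic in a MapReduce job, e.g. via TableMapReduceUtil.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class RewriteCells {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table src = conn.getTable(TableName.valueOf("my_table"));
         Table dst = conn.getTable(TableName.valueOf("my_table_tmp"));
         // Default Scan returns only the newest version of each cell,
         // which is the one carrying the bad far-future timestamp.
         ResultScanner scanner = src.getScanner(new Scan())) {
      for (Result r : scanner) {
        Put put = new Put(r.getRow());
        for (Cell c : r.rawCells()) {
          // No timestamp passed: the Put carries LATEST_TIMESTAMP and the
          // region server replaces it with the current time on write, so
          // the bad timestamps are dropped in the copy.
          put.addColumn(CellUtil.cloneFamily(c), CellUtil.cloneQualifier(c),
                        CellUtil.cloneValue(c));
        }
        dst.put(put);
      }
    }
  }
}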
Any ideas on how to delete these rows?
I see only this way (a rough sketch follows the list):
- back up the data from the region that contains the “damaged” rows
- close the region
- remove the region files from HDFS
- assign the region
- copy the needed rows from the backup into the recreated region
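(A sketch of steps 1-4 with the 1.x Admin and Hadoop FileSystem APIs; the
region name, paths, and column family are hypothetical, step 5 (re-inserting
the saved rows) is not shown, and in practice the same steps are often done
from the HBase shell plus hdfs dfs instead.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class RebuildRegion {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical full region name and its directory under the HBase root.
    byte[] region = Bytes.toBytes(
        "my_table,,1588000000000.d41d8cd98f00b204e9800998ecf8427e.");
    Path regionDir = new Path(
        "/hbase/data/default/my_table/d41d8cd98f00b204e9800998ecf8427e");
    Path backupDir = new Path(
        "/tmp/region-backup/d41d8cd98f00b204e9800998ecf8427e");

    FileSystem fs = FileSystem.get(conf);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // 1) back up the region's files
      FileUtil.copy(fs, regionDir, fs, backupDir, false, conf);
      // 2) close the region (null server name: looked up in hbase:meta)
      admin.closeRegion(region, null);
      // 3) remove the store files of the (hypothetical) column family "cf",
      //    leaving .regioninfo and the rest of the region dir intact
      fs.delete(new Path(regionDir, "cf"), true);
      // 4) bring the now-empty region back online
      admin.assign(region);
      // 5) re-insert the needed rows from the backup (not shown)
    }
  }
}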
> On 30 Apr 2020, at 21:00, Alexander Batyrshin <0x62...@gmail.com> wrote:
Hi,
I will be presenting on HBase to one of the major European banks this
Friday, 15th May.
Does anyone have the latest bullet points on new features of HBase, so I can
add them to my presentation material?
Many thanks,
Dr Mich Talebzadeh