Re: java.io.IOException: Added a key not lexically larger than previous

2019-08-26 Thread Alexander Batyrshin
Looks like the problem began with 4.14.2.
Maybe https://issues.apache.org/jira/browse/PHOENIX-5266 can somehow re-apply mutations to the main table with bugs?
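
For reference, the flush is failing because the HFile writer requires cells in strictly increasing key order, and in HBase's ordering a DeleteColumn marker sorts before a Put for the same row/column/timestamp. A minimal sketch of that ordering (HBase 1.x client API, with simplified stand-ins for the row and column from the log below):

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FlushOrderSketch {
        public static void main(String[] args) {
            // Simplified stand-ins for the row/column/timestamp seen in the log.
            byte[] row = Bytes.toBytes("100395583733");
            byte[] family = Bytes.toBytes("d");
            byte[] qualifier = Bytes.toBytes("p");
            long ts = 1560882798036L;

            KeyValue put = new KeyValue(row, family, qualifier, ts, KeyValue.Type.Put);
            KeyValue del = new KeyValue(row, family, qualifier, ts, KeyValue.Type.DeleteColumn);

            // Prints a negative number: the DeleteColumn sorts before the Put, so a
            // memstore snapshot that emits the DeleteColumn after the Put fails the
            // "not lexically larger than previous" check during flush.
            System.out.println(KeyValue.COMPARATOR.compare(del, put));
        }
    }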

> On 20 Jun 2019, at 04:16, Alexander Batyrshin <0x62...@gmail.com> wrote:
> 
> Hello,
> Are there any ideas where this problem comes from and how to fix it?
> 
> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,348 WARN  
> [MemStoreFlusher.0] regionserver.HStore: Failed flushing store file, retrying 
> num=9
> Jun 18 21:38:05 prod022 hbase[148581]: java.io.IOException: Added a key not 
> lexically larger than previous. Current cell = 
> \x0D100395583733fW+,WQ/d:p/1560882798036/DeleteColumn/vlen=0/seqid=30023231, 
> lastCell = \x0D100395583733fW+,WQ/d:p/1560882798036/Put/vlen=29/seqid=30023591
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:279)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1053)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:139)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:969)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2484)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2622)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> java.lang.Thread.run(Thread.java:748)
> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,373 FATAL 
> [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server 
> prod022,60020,1560521871613: Replay of WAL required. Forcing server shutdown
> Jun 18 21:38:05 prod022 hbase[148581]: 
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> TBL_C,\x0D04606203096428+jaVbx.,1558885224779.b4633aee06956663b05e8322ce34b0a3.
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2675)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Jun 18 21:38:05 prod022 hbase[148581]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Jun 18 21:38:05 prod022 

Re: On duplicate key update

2019-08-26 Thread Josh Elser
Out of the box, Phoenix will provide the same semantics that HBase does 
for concurrent updates to a (data) table.


https://hbase.apache.org/acid-semantics.html

If you're also asking about how index tables remain in sync, the answer 
is a bit more complicated (and has changed in recent versions).
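
To make that concrete, here is a rough JDBC sketch of Phoenix's atomic upsert (the table, column names, and ZooKeeper quorum are made up; assumes Phoenix 4.9+ where ON DUPLICATE KEY is available). The clause is evaluated on the region server under the row lock, so two clients that both start from the "no existing record" state cannot clobber each other: one creates the row, and the other's ON DUPLICATE KEY clause is applied to the row it finds.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AtomicUpsertSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical ZooKeeper quorum; point this at your own cluster.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181");
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(true); // send each upsert immediately instead of batching until commit()

                // Creates the row if absent, otherwise increments it server-side
                // under the row lock, so concurrent callers serialize per row.
                stmt.executeUpdate(
                    "UPSERT INTO COUNTERS (ID, HITS) VALUES ('page-1', 1) " +
                    "ON DUPLICATE KEY UPDATE HITS = HITS + 1");

                // First writer wins; later writers leave the existing row untouched.
                stmt.executeUpdate(
                    "UPSERT INTO COUNTERS (ID, HITS) VALUES ('page-1', 0) " +
                    "ON DUPLICATE KEY IGNORE");
            }
        }
    }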


On 8/26/19 2:51 PM, Jon Strayer wrote:
How does atomic update work with multiple clients?  Assuming that there 
is no matching record to begin with, the access won’t be locked.  It 
seems like two threads could write conflicting data since they both see 
no existing record (NER).  Is that correct? Or is there something that 
will serialize the writes so that only one of them sees the NER state?




On duplicate key update

2019-08-26 Thread Jon Strayer
How does atomic update work with multiple clients?  Assuming that there is no 
matching record to begin with, the access won’t be locked.  It seems like two 
threads could write conflicting data since they both see no existing record 
(NER).  Is that correct? Or is there something that will serialize the writes 
so that only one of them sees the NER state?


Blog post on load balancing Phoenix Query Server with sticky sessions

2019-08-26 Thread Dushyant Dixit
Hello,

We have published a blog post on how we have achieved load balancing for
Apache Phoenix Query Server with sticky sessions.

https://medium.com/helpshift-engineering/smart-sticky-sessions-using-haproxy-for-apache-phoenix-911bdca7e2c
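
For anyone trying this out, the reason stickiness matters is that PQS (Avatica) keeps connection and statement state in memory on the server, so every HTTP request for a given client connection has to reach the same PQS instance. A minimal thin-client sketch (the load-balancer hostname is hypothetical; the URL format is the standard thin-driver one):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThinClientViaLoadBalancer {
        public static void main(String[] args) throws Exception {
            // Thin-client driver; usually auto-registered, loaded explicitly here for clarity.
            Class.forName("org.apache.phoenix.queryserver.client.Driver");

            // Hypothetical HAProxy front end for the PQS pool; sticky sessions keep
            // all Avatica HTTP calls for this connection on the same PQS instance.
            String url = "jdbc:phoenix:thin:url=http://pqs-lb.example.com:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }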