Phoenix performs full scan and ignores covered global index

2018-12-23 Thread Batyrshin Alexander
Examples:

1. The index is ignored if "*" is used in the SELECT, even when the index
includes all columns from the source table:

0: jdbc:phoenix:127.0.0.1> explain select * from table where "p" = '123123123';
+-------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
|                                            PLAN                                           | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
+-------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
| CLIENT 1608-CHUNK 237983037 ROWS 160746749821 BYTES PARALLEL 30-WAY FULL SCAN OVER table  | 160746749821    | 237983037      | 1545484493647  |
| SERVER FILTER BY d."p" = '123123123'                                                      | 160746749821    | 237983037      | 1545484493647  |
| CLIENT MERGE SORT                                                                         | 160746749821    | 237983037      | 1545484493647  |
+-------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
3 rows selected (0.05 seconds)


2. The index is used if only one column is selected:

0: jdbc:phoenix:127.0.0.1> explain select "c" from table where "p" = 
'123123123';
+-+-+++
|   
 PLAN   
  | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
+-+-+++
| CLIENT 30-CHUNK 3569628 ROWS 3145729398 BYTES PARALLEL 30-WAY RANGE SCAN OVER 
table_idx_p [0,'123123123'] - [29,'123123123']  | 3145729398  | 3569628 
   | 1545484508039  |
| SERVER FILTER BY FIRST KEY ONLY   

  | 3145729398  | 3569628| 1545484508039  |
| CLIENT MERGE SORT 

  | 3145729398  | 3569628| 1545484508039  |
+-+-+++
3 rows selected (0.038 seconds)


3. With an explicit index hint, the index is used only through a skip-scan join back to the data table:

0: jdbc:phoenix:127.0.0.1> explain select /*+ INDEX(table table_idx_p) */ * from table where "p" = '123123123';
+---------------------------------------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
|                                                                 PLAN                                                                   | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
+---------------------------------------------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
| CLIENT 1608-CHUNK 237983037 ROWS 160746749821 BYTES PARALLEL 30-WAY FULL SCAN OVER table                                               | 3145729398      | 3569628        | 1545484508039  |
| CLIENT MERGE SORT                                                                                                                      | 3145729398      | 3569628        | 1545484508039  |
|     SKIP-SCAN-JOIN TABLE 0                                                                                                             | 3145729398      | 3569628        | 1545484508039  |
|         CLIENT 30-CHUNK 3569628 ROWS 3145729398 BYTES PARALLEL 30-WAY RANGE SCAN OVER table_idx_p [0,'123123123'] - [29,'123123123']   | 3145729398      | 3569628        | 1545484508039  |
|         SERVER FILTER BY FIRST KEY ONLY                                                                                                | 3145729398      | 3569628        | 1545484508039  |
|     CLIENT MERGE SORT
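
A workaround consistent with example 2 is to name the covered columns explicitly instead of using "*". A minimal SQL sketch, assuming the index was declared to cover "c" via an INCLUDE list (the actual CREATE INDEX statement was not posted in this thread, so this DDL is an assumption):

-- Hypothetical covered global index; the real DDL is not shown above
CREATE INDEX table_idx_p ON table ("p") INCLUDE ("c");

-- Projecting the covered columns by name (not "*") lets the optimizer
-- choose the index, as in example 2:
SELECT "c" FROM table WHERE "p" = '123123123';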
 

Re: [ANNOUNCE] Apache Phoenix 4.14.1 released

2018-11-21 Thread Batyrshin Alexander
Looks like the binary tarball was packed with some non-standard tar options.

Extracting on Ubuntu-16.04 looks like this:
$ tar -xzf apache-phoenix-4.14.1-HBase-1.4-bin.tar.gz
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
...
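
These warnings come from pax extended headers (written by a different tar implementation) that GNU tar does not recognize; extraction still succeeds. With GNU tar 1.23+ they can be silenced, e.g.:

$ tar --warning=no-unknown-keyword -xzf apache-phoenix-4.14.1-HBase-1.4-bin.tar.gz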

> On 15 Nov 2018, at 00:39, Vincent Poon  wrote:
> 
> The Apache Phoenix team is pleased to announce the immediate availability
> of the 4.14.1 patch release. Apache Phoenix enables SQL-based OLTP and
> operational analytics for Apache Hadoop using Apache HBase as its backing
> store and providing integration with other projects in the Apache ecosystem
> such as Spark, Hive, Pig, Flume, and MapReduce.
> 
> This patch release has feature parity with supported HBase versions and 
> includes critical bug fixes for secondary indexes.
> 
> Download source and binaries here [1].
> 
> Thanks,
> Vincent (on behalf of the Apache Phoenix team)
> 
> [1] http://phoenix.apache.org/download.html 
> 



Re: Connection Pooling?

2018-10-20 Thread Batyrshin Alexander
Statement caching was the question that started this thread, but in the end I
have the same question about the connection-pooling FAQ.
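
For reference, a minimal Java sketch of what the FAQ recommends instead of pooling: a fresh, short-lived Phoenix connection per unit of work. The JDBC URL, table, and column names here are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PerRequestConnection {
    public static void main(String[] args) throws Exception {
        // Open a fresh Phoenix connection per unit of work; creating and
        // closing it is cheap because the underlying HBase connection is
        // shared and cached by the driver.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT \"c\" FROM my_table WHERE \"p\" = ?")) {  // placeholder query
            ps.setString(1, "123123123");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}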

> On 18 Oct 2018, at 21:06, Josh Elser  wrote:
> 
> Batyrshin, you asked about statement caching which is different than 
> connection pooling.
> 
> @JMS, yes, the FAQ is accurate (as is the majority of the rest of the 
> documentation ;))
> 
> On 10/18/18 1:14 PM, Batyrshin Alexander wrote:
>> I've already asked the same question in this thread - 
>> http://apache-phoenix-user-list.1124778.n5.nabble.com/Statements-caching-td4674.html
>>> On 18 Oct 2018, at 19:44, Jean-Marc Spaggiari <jean-m...@spaggiari.org> wrote:
>>> 
>>> Hi,
>>> 
>>> Is this statement in the FAQ still valid?
>>> 
>>> "If Phoenix Connections are reused, it is possible that the underlying 
>>> HBase connection is not always left in a healthy state by the previous 
>>> user. It is better to create new Phoenix Connections to ensure that you 
>>> avoid any potential issues."
>>> https://phoenix.apache.org/faq.html#Should_I_pool_Phoenix_JDBC_Connections 
>>> 
>>> Thanks,
>>> 
>>> JMS



Re: Connection Pooling?

2018-10-18 Thread Batyrshin Alexander
I've already asked the same question in this thread - 
http://apache-phoenix-user-list.1124778.n5.nabble.com/Statements-caching-td4674.html

> On 18 Oct 2018, at 19:44, Jean-Marc Spaggiari  wrote:
> 
> Hi,
> 
> Is this statement in the FAQ still valid?
> 
> "If Phoenix Connections are reused, it is possible that the underlying HBase 
> connection is not always left in a healthy state by the previous user. It is 
> better to create new Phoenix Connections to ensure that you avoid any 
> potential issues."
>  https://phoenix.apache.org/faq.html#Should_I_pool_Phoenix_JDBC_Connections 
> 
> 
> Thanks,
> 
> JMS



Re: Concurrent phoenix queries throw unable to create new native thread error

2018-10-12 Thread Batyrshin Alexander
Yes - phoenix.query.threadPoolSize is a client-side property.
And yes - I suggest setting it in hbase-site.xml and providing that config (via
the CLASSPATH) to the client Phoenix JDBC driver.

Yep, you should provide hbase-site.xml to the client-side fat JDBC driver.
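
A minimal client-side hbase-site.xml sketch of this (the property name and the 2000 value come from this thread; the file just needs to be on the application's CLASSPATH):

<configuration>
  <property>
    <name>phoenix.query.threadPoolSize</name>
    <value>2000</value>
  </property>
</configuration>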

> On 11 Oct 2018, at 16:37, Hemal Parekh  wrote:
> 
> I have already set phoenix.query.threadPoolSize in hbase-site.xml which 
> resides on server side. I think phoenix.query.threadPoolSize is a client side 
> property and so should be set from the client application making phoenix 
> connection. Are you suggesting that there should also be a copy of 
> hbase-site.xml on the client machine? 
> 
> On Thu, Oct 11, 2018 at 5:20 AM Batyrshin Alexander <0x62...@gmail.com> wrote:
> Problem comes from that "phoenix.query.threadPoolSize" can not be set via 
> connection settings. You must set this props via hbase-site.xml.
> Looks like thread pool initialized before any connections inside JDBC driver.
> 
> I think that this moment must be clarified in documentation 
> http://phoenix.apache.org/tuning.html
>  
> 
>> On 10 Oct 2018, at 18:59, Hemal Parekh <he...@bitscopic.com> wrote:
>> 
>> We have an analytical application running concurrent phoenix queries against 
>> Hortonworks HDP 2.6 cluster. Application uses phoenix JDBC connection to run 
>> queries. Often times, concurrent queries fail with 
>> "java.lang.OutOfMemoryError: unable to create new native thread" error. JDBC 
>> connection sets following phoenix properties. 
>> 
>> connectionProps.setProperty("phoenix.query.threadPoolSize", "2000")
>> connectionProps.setProperty("phoenix.query.querySize", "4")
>> 
>> Phoenix version is 4.7 and Hbase version is 1.1.2, The HDP cluster has six 
>> regionservers on six data nodes. Concurrent queries run against different 
>> phoenix tables, some are small having few million records and some are big 
>> having few billions records. Most of the queries do not have joins,  where 
>> clause includes conditions on rowkey and few nonkey columns. Queries with 
>> joins (which are on small tables) have used USE_SORT_MERGE_JOIN hint. 
>> 
>> Are there other phoenix properties which need to be set on JDBC connection? 
>> Are above values for phoenix.query.threadPoolSize and 
>> phoenix.query.querySize enough to handle concurrent query use case? We have 
>> changed these two properties couple of times to increase their values but 
>> the error still remains the same.
>> 
>> 
>> Thanks,
>> 
>> Hemal Parekh
>> 
>> 
>> 
> 
> 
> 
> -- 
> 
> Hemal Parekh
> Senior Data Warehouse Architect
> m. 240.449.4396
>  <http://bitscopic.com/>



Re: Concurrent phoenix queries throw unable to create new native thread error

2018-10-11 Thread Batyrshin Alexander
The problem comes from the fact that "phoenix.query.threadPoolSize" cannot be
set via connection settings. You must set this property via hbase-site.xml.
It looks like the thread pool is initialized inside the JDBC driver before any
connection is made.

I think this point should be clarified in the documentation:
http://phoenix.apache.org/tuning.html
 

> On 10 Oct 2018, at 18:59, Hemal Parekh  wrote:
> 
> We have an analytical application running concurrent phoenix queries against 
> Hortonworks HDP 2.6 cluster. Application uses phoenix JDBC connection to run 
> queries. Often times, concurrent queries fail with 
> "java.lang.OutOfMemoryError: unable to create new native thread" error. JDBC 
> connection sets following phoenix properties. 
> 
> connectionProps.setProperty("phoenix.query.threadPoolSize", "2000")
> connectionProps.setProperty("phoenix.query.querySize", "4")
> 
> Phoenix version is 4.7 and Hbase version is 1.1.2, The HDP cluster has six 
> regionservers on six data nodes. Concurrent queries run against different 
> phoenix tables, some are small having few million records and some are big 
> having few billions records. Most of the queries do not have joins,  where 
> clause includes conditions on rowkey and few nonkey columns. Queries with 
> joins (which are on small tables) have used USE_SORT_MERGE_JOIN hint. 
> 
> Are there other phoenix properties which need to be set on JDBC connection? 
> Are above values for phoenix.query.threadPoolSize and phoenix.query.querySize 
> enough to handle concurrent query use case? We have changed these two 
> properties couple of times to increase their values but the error still 
> remains the same.
> 
> 
> Thanks,
> 
> Hemal Parekh
> 
> 
> 



Re: ON DUPLICATE KEY with Global Index

2018-10-10 Thread Batyrshin Alexander
Thank you. Now it's clear that the documentation on the website is outdated.

> On 9 Oct 2018, at 23:12, Vincent Poon  wrote:
> 
> We do need to update the docs after PHOENIX-3925, which changed the behavior 
> from 'recommended' to 'mandatory'.
> I'll update the docs now.
> 
> On Tue, Oct 9, 2018 at 1:08 PM Ankit Singhal <ankitsingha...@gmail.com> wrote:
> We do not allow atomic upsert and throw the corresponding exception in the 
> cases documented under the limitations section of
> http://phoenix.apache.org/atomic_upsert.html. Probably a documentation
> needs a little touch to convey this clearly.
> 
> On Tue, Oct 9, 2018 at 10:05 AM Josh Elser <els...@apache.org> wrote:
> Can you elaborate on what is unclear about the documentation? This 
> exception and the related documentation read as being in support of each 
> other to me.
> 
> On 10/9/18 5:39 AM, Batyrshin Alexander wrote:
> >   Hello all,
> > Documentations (http://phoenix.apache.org/atomic_upsert.html) say:
> > 
> > "Although global indexes on columns being atomically updated are supported, 
> > it’s not recommended as a potentially a separate RPC across the wire would 
> > be made while the row is under lock to maintain the secondary index."
> > 
> > But in practice we get:
> > CANNOT_USE_ON_DUP_KEY_WITH_GLOBAL_IDX(1224, "42Z24", "The ON DUPLICATE KEY 
> > clause may not be used when a table has a global index." )
> > 
> > Is this bug or documentation is outdated?
> > 



ON DUPLICATE KEY with Global Index

2018-10-09 Thread Batyrshin Alexander
 Hello all,
The documentation (http://phoenix.apache.org/atomic_upsert.html) says:

"Although global indexes on columns being atomically updated are supported, 
it’s not recommended as a potentially a separate RPC across the wire would be 
made while the row is under lock to maintain the secondary index."

But in practice we get:
CANNOT_USE_ON_DUP_KEY_WITH_GLOBAL_IDX(1224, "42Z24", "The ON DUPLICATE KEY 
clause may not be used when a table has a global index." )

Is this a bug, or is the documentation outdated?
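
For reference, a minimal sketch of the combination that triggers the exception; the table and index names are illustrative only:

-- Hypothetical table with a global secondary index
CREATE TABLE t (k VARCHAR PRIMARY KEY, v BIGINT);
CREATE INDEX t_v_idx ON t (v);

-- Rejected with ERROR 1224 (42Z24) because t has a global index:
UPSERT INTO t (k, v) VALUES ('a', 1) ON DUPLICATE KEY UPDATE v = v + 1;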

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-10-09 Thread Batyrshin Alexander
I've created a bug with reproduction steps: 
https://issues.apache.org/jira/browse/PHOENIX-4960

> On 3 Oct 2018, at 21:06, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> But we see that Phoenix commit() in our cases fails with "ERROR 1120 (XCL20): 
> Writes to table blocked until index can be updated" because of 
> org.apache.hadoop.hbase.NotServingRegionException.
> Expected that there must be retry and success for commit()
> 
>> On 2 Oct 2018, at 22:02, Josh Elser <els...@apache.org> wrote:
>> 
>> HBase will invalidate the location of a Region on seeing certain exceptions 
>> (including NotServingRegionException). After it sees the exception you have 
>> copied below, it should re-fetch the location of the Region.
>> 
>> If HBase keeps trying to access a Region on a RS that isn't hosting it, 
>> either hbase:meta is wrong or the HBase client has a bug.
>> 
>> However, to the point here, if that region was split successfully, clients 
>> should not be reading from that region anymore -- they would read from the 
>> daughters of that split region.
>> 
>> On 10/2/18 2:34 PM, Batyrshin Alexander wrote:
>>> We tried branch 4.14-HBase-1.4 at commit
>>> https://github.com/apache/phoenix/commit/52893c240e4f24e2bfac0834d35205f866c16ed8
>>> Is there any way to invalidate meta-cache on event of index regions split? 
>>> Maybe there is some option to set max time to live for cache?
>>> Watching this on regions servers:
>>> At 09:34 regions *96c3ede1c40c98959e60bd6fc0e07269* split on prod019
>>> Oct 02 09:34:39 prod019 hbase[152127]: 2018-10-02 09:34:39,719 INFO   
>>> [regionserver/prod019/10.0.0.19:60020-splits-1538462079117] 
>>> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
>>> master. Parent=IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x
>>> 01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.,
>>>  new regions: 
>>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1538462079161.80fc2516619d8665789b0c5a2bca8a8b.,
>>>  IDX_MARK_O,\x0BON_SCHFDOPPR_2AL-5602
>>> 2B7D-2F90-4AA5-8125-4F4001B5BE0D-0_2AL-C0D76C01-EE7E-496B-BCD6-F6488956F75A-0_20180228_7E372181-F23D-4EBE-9CAD-5F5218C9798I\x46186195_5.UHQ=\x00\x02\x80\x00\x01a\xD3\xEA@\x80\x00\x00\x00\x00,1538462079161.24b6675d9e51067a21e58f294a9f816b..
>>>  Split took 0sec
>>> Fail at 11:51 prod018
>>> Oct 02 11:51:13 prod018 hbase[108476]: 2018-10-02 11:51:13,752 WARN   
>>> [hconnection-0x4131af19-shared--pool24-t26652] client.AsyncProcess: #164, 
>>> table=IDX_MARK_O, attempt=1/1 failed=1ops, last exception: 
>>> org.apache.hadoop.hbase.NotServingRegionException: 
>>> org.apache.hadoop.hbase.NotServingRegionException: Region 
>>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.
>>>  is not online on prod019,60020,1538417663874
>>> Fail at 13:38 on prod005
>>> Oct 02 13:38:06 prod005 hbase[197079]: 2018-10-02 13:38:06,040 WARN   
>>> [hconnection-0x5e744e65-shared--pool8-t31214] client.AsyncProcess: #53, 
>>> table=IDX_MARK_O, attempt=1/1 failed=11ops, last exception: 
>>> org.apache.hadoop.hbase.NotServingRegionException: 
>>> org.apache.hadoop.hbase.NotServingRegionException: Region 
>>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.
>>>  is not online on prod019,60020,1538417663874
>>>> On 27 Sep 2018, at 01:04, Ankit Singhal <ankitsingha...@gmail.com> wrote:
>>>> 
>>>> You might be hitting PHOENIX-4785
>>>> <https://jira.apache.org/jira/browse/PHOENIX-4785>, you can apply the
>>>> patch on top of 4.14 and see if it fixes your problem.
>>>> 
>>>> Regards,
>>>> Ankit Singhal
>>>> 
>>>> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>> 
>>>>Any advices? Helps?
>>>>I can repr

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-10-03 Thread Batyrshin Alexander
But we see that in our case Phoenix commit() fails with "ERROR 1120 (XCL20):
Writes to table blocked until index can be updated" because of
org.apache.hadoop.hbase.NotServingRegionException.
We expected commit() to retry and eventually succeed.
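
In the meantime the retry can be done on the client side. A minimal Java sketch of the behavior we expected from commit() itself (1120 is the XCL20 vendor code above; the retry count and backoff are illustrative assumptions):

import java.sql.Connection;
import java.sql.SQLException;

final class CommitRetry {
    // Retry commit() while Phoenix reports ERROR 1120 (XCL20),
    // "Writes to table blocked until index can be updated".
    static void commitWithRetry(Connection conn) throws SQLException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.commit();
                return;
            } catch (SQLException e) {
                if (e.getErrorCode() != 1120 || attempt >= 5) {
                    throw e; // a different error, or out of retries
                }
                Thread.sleep(1000L * attempt); // simple linear backoff
            }
        }
    }
}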

> On 2 Oct 2018, at 22:02, Josh Elser  wrote:
> 
> HBase will invalidate the location of a Region on seeing certain exceptions 
> (including NotServingRegionException). After it sees the exception you have 
> copied below, it should re-fetch the location of the Region.
> 
> If HBase keeps trying to access a Region on a RS that isn't hosting it, 
> either hbase:meta is wrong or the HBase client has a bug.
> 
> However, to the point here, if that region was split successfully, clients 
> should not be reading from that region anymore -- they would read from the 
> daughters of that split region.
> 
> On 10/2/18 2:34 PM, Batyrshin Alexander wrote:
>> We tried branch 4.14-HBase-1.4 at commit 
>> https://github.com/apache/phoenix/commit/52893c240e4f24e2bfac0834d35205f866c16ed8
>> Is there any way to invalidate meta-cache on event of index regions split? 
>> Maybe there is some option to set max time to live for cache?
>> Watching this on regions servers:
>> At 09:34 regions *96c3ede1c40c98959e60bd6fc0e07269* split on prod019
>> Oct 02 09:34:39 prod019 hbase[152127]: 2018-10-02 09:34:39,719 INFO   
>> [regionserver/prod019/10.0.0.19:60020-splits-1538462079117] 
>> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
>> master. Parent=IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x
>> 01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.,
>>  new regions: 
>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1538462079161.80fc2516619d8665789b0c5a2bca8a8b.,
>>  IDX_MARK_O,\x0BON_SCHFDOPPR_2AL-5602
>> 2B7D-2F90-4AA5-8125-4F4001B5BE0D-0_2AL-C0D76C01-EE7E-496B-BCD6-F6488956F75A-0_20180228_7E372181-F23D-4EBE-9CAD-5F5218C9798I\x46186195_5.UHQ=\x00\x02\x80\x00\x01a\xD3\xEA@\x80\x00\x00\x00\x00,1538462079161.24b6675d9e51067a21e58f294a9f816b..
>>  Split took 0sec
>> Fail at 11:51 prod018
>> Oct 02 11:51:13 prod018 hbase[108476]: 2018-10-02 11:51:13,752 WARN   
>> [hconnection-0x4131af19-shared--pool24-t26652] client.AsyncProcess: #164, 
>> table=IDX_MARK_O, attempt=1/1 failed=1ops, last exception: 
>> org.apache.hadoop.hbase.NotServingRegionException: 
>> org.apache.hadoop.hbase.NotServingRegionException: Region 
>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.
>>  is not online on prod019,60020,1538417663874
>> Fail at 13:38 on prod005
>> Oct 02 13:38:06 prod005 hbase[197079]: 2018-10-02 13:38:06,040 WARN   
>> [hconnection-0x5e744e65-shared--pool8-t31214] client.AsyncProcess: #53, 
>> table=IDX_MARK_O, attempt=1/1 failed=11ops, last exception: 
>> org.apache.hadoop.hbase.NotServingRegionException: 
>> org.apache.hadoop.hbase.NotServingRegionException: Region 
>> IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.*96c3ede1c40c98959e60bd6fc0e07269*.
>>  is not online on prod019,60020,1538417663874
>>> On 27 Sep 2018, at 01:04, Ankit Singhal <ankitsingha...@gmail.com> wrote:
>>> 
>>> You might be hitting PHOENIX-4785
>>> <https://jira.apache.org/jira/browse/PHOENIX-4785>, you can apply the
>>> patch on top of 4.14 and see if it fixes your problem.
>>> 
>>> Regards,
>>> Ankit Singhal
>>> 
>>> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> 
>>>Any advices? Helps?
>>>I can reproduce problem and capture more logs if needed.
>>> 
>>>> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>> 
>>>>Looks like lock goes away 30 minutes after index region split.
>>>>So i can assume that this issue comes from cache that configured
>>>>by this option:*phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs*
>>>> 
>>>> 
>>>

Re: ABORTING region server and following HBase cluster "crash"

2018-10-02 Thread Batyrshin Alexander
Oct 02 03:25:12 prod002 hbase[195373]: java.io.IOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Attempt to disable KM_IDX1 
failed.
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:212)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:244)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:153)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:620)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:595)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:578)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1048)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1711)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1745)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1044)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3646)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3108)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3050)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:916)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:844)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2405)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2359)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Oct 02 03:25:12 prod002 hbase[195373]: Caused by: 
org.apache.hadoop.hbase.DoNotRetryIOException: Attempt to disable KM_IDX1 
failed.
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy$2.run(PhoenixIndexFailurePolicy.java:280)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy$2.run(PhoenixIndexFailurePolicy.java:244)
Oct 02 03:25:12 prod002 hbase[195373]: at 
java.security.AccessController.doPrivileged(Native Method)
Oct 02 03:25:12 prod002 hbase[195373]: at 
javax.security.auth.Subject.doAs(Subject.java:422)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
Oct 02 03:25:12 prod002 hbase[195373]: at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
Oct 02 03:25:12 prod002 hbase[195373]: at 
sun.reflect.GeneratedMethodAccessor160.invoke(Unknown Source)
Oct 02 03:25:12 prod002 hbase[195373]: at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 02 03:25:12 prod002 hbase[195373]: at 
java.lang.reflect.Method.invoke(Method.java:498)
Oct 02 03:25:12 prod002 hbase[195373]:     at 
org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
Oct 02 03:25:12 prod002 hba

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-10-02 Thread Batyrshin Alexander
We tried branch 4.14-HBase-1.4 at commit
https://github.com/apache/phoenix/commit/52893c240e4f24e2bfac0834d35205f866c16ed8

Is there any way to invalidate the meta-cache when an index region splits?
Maybe there is some option to set a max time-to-live for the cache?
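
A sketch of the TTL idea, using the option named in the quoted 21 Sep message below; the 600000 ms value is only illustrative, not a recommendation:

<property>
  <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
  <value>600000</value>
</property>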

Watching this on regions servers:

At 09:34 regions 96c3ede1c40c98959e60bd6fc0e07269 split on prod019 

Oct 02 09:34:39 prod019 hbase[152127]: 2018-10-02 09:34:39,719 INFO  
[regionserver/prod019/10.0.0.19:60020-splits-1538462079117] 
regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
master. Parent=IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x
01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.96c3ede1c40c98959e60bd6fc0e07269.,
 new regions: 
IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1538462079161.80fc2516619d8665789b0c5a2bca8a8b.,
 IDX_MARK_O,\x0BON_SCHFDOPPR_2AL-5602
2B7D-2F90-4AA5-8125-4F4001B5BE0D-0_2AL-C0D76C01-EE7E-496B-BCD6-F6488956F75A-0_20180228_7E372181-F23D-4EBE-9CAD-5F5218C9798I\x46186195_5.UHQ=\x00\x02\x80\x00\x01a\xD3\xEA@\x80\x00\x00\x00\x00,1538462079161.24b6675d9e51067a21e58f294a9f816b..
 Split took 0sec

Fail at 11:51 prod018

Oct 02 11:51:13 prod018 hbase[108476]: 2018-10-02 11:51:13,752 WARN  
[hconnection-0x4131af19-shared--pool24-t26652] client.AsyncProcess: #164, 
table=IDX_MARK_O, attempt=1/1 failed=1ops, last exception: 
org.apache.hadoop.hbase.NotServingRegionException: 
org.apache.hadoop.hbase.NotServingRegionException: Region 
IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.96c3ede1c40c98959e60bd6fc0e07269.
 is not online on prod019,60020,1538417663874

Fail at 13:38 on prod005

Oct 02 13:38:06 prod005 hbase[197079]: 2018-10-02 13:38:06,040 WARN  
[hconnection-0x5e744e65-shared--pool8-t31214] client.AsyncProcess: #53, 
table=IDX_MARK_O, attempt=1/1 failed=11ops, last exception: 
org.apache.hadoop.hbase.NotServingRegionException: 
org.apache.hadoop.hbase.NotServingRegionException: Region 
IDX_MARK_O,\x0B\x46200020qC8kovh\x00\x01\x80\x00\x01e\x89\x8B\x99@\x00\x00\x00\x00,1537400033958.96c3ede1c40c98959e60bd6fc0e07269.
 is not online on prod019,60020,1538417663874

> On 27 Sep 2018, at 01:04, Ankit Singhal  wrote:
> 
> You might be hitting PHOENIX-4785 
> <https://jira.apache.org/jira/browse/PHOENIX-4785>,  you can apply the patch 
> on top of 4.14 and see if it fixes your problem.
> 
> Regards,
> Ankit Singhal
> 
> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
> Any advices? Helps?
> I can reproduce problem and capture more logs if needed.
> 
>> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
>> 
>> Looks like lock goes away 30 minutes after index region split.
>> So i can assume that this issue comes from cache that configured by this 
>> option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs
>> 
>> 
>> 
>>> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> 
>>> And how this split looks at Master logs:
>>> 
>>> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
>>> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
>>> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>> state=SPLITTING, ts=1537461904888, server=prod
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>> state=SPLIT, ts=1537461905340, server=pro
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
>>> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
>>> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
>>> state=OPEN, ts=1537461905341, server=
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
&

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-10-01 Thread Batyrshin Alexander
ETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 
'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.3340 seconds

Per HBase config:

<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
  <source>hbase-site.xml</source>
</property>
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
  <source>hbase-default.xml</source>
</property>

> On 30 Sep 2018, at 06:09, Jaanai Zhang  wrote:
> 
> Did you restart the cluster and you should set 'hbase.hregion.max.filesize' 
> to a safeguard value which less than RS's capabilities.
> 
> ----
>Jaanai Zhang
>Best regards!
> 
> 
> 
> Batyrshin Alexander <0x62...@gmail.com> wrote on Sat, 29 Sep 2018 at 17:28:
> Meanwhile we tried to disable regions split via per index table options
> 'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'  and 
> hbase.hregion.max.filesize = 10737418240
> Looks like this options set doesn't. Some regions splits at size < 2GB
> 
> Then we tried to disable all splits via hbase shell: splitormerge_switch 
> 'SPLIT', false
> Seems that this also doesn't work.
> 
> Any ideas why we can't disable regions split?
> 
>> On 27 Sep 2018, at 02:52, Vincent Poon <vincent.poon...@gmail.com> wrote:
>> 
>> We are planning a Phoenix 4.14.1 release which will have this fix
>> 
>> On Wed, Sep 26, 2018 at 3:36 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
>> Thank you. We will try somehow...
>> Is there any chance that this fix will be included in next release for 
>> HBASE-1.4 (not 2.0)?
>> 
>>> On 27 Sep 2018, at 01:04, Ankit Singhal <ankitsingha...@gmail.com> wrote:
>>> 
>>> You might be hitting PHOENIX-4785 
>>> <https://jira.apache.org/jira/browse/PHOENIX-4785>,  you can apply the 
>>> patch on top of 4.14 and see if it fixes your problem.
>>> 
>>> Regards,
>>> Ankit Singhal
>>> 
>>> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> Any advices? Helps?
>>> I can reproduce problem and capture more logs if needed.
>>> 
>>>> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>> 
>>>> Looks like lock goes away 30 minutes after index region split.
>>>> So i can assume that this issue comes from cache that configured by this 
>>>> option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs
>>>> 
>>>> 
>>>> 
>>>>> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>>> 
>>>>> And how this split looks at Master logs:
>>>>> 
>>>>> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
>>>>> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
>>>>> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
>>>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>>>> state=SPLITTING, ts=1537461904888, server=prod
>>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>>> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
>>>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>>>> state=SPLIT, ts=1537461905340, server=pro
>>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
>>>>> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
>>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>>> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
>>>>> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
>>>>> state=OPEN, ts=1537461905341, server=
>>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>>> {acb

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-29 Thread Batyrshin Alexander
Meanwhile we tried to disable region splits via the per-index-table option
'SPLIT_POLICY' =>
'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy' and
hbase.hregion.max.filesize = 10737418240.
Looks like this option set doesn't work: some regions split at sizes < 2GB.

Then we tried to disable all splits via the hbase shell: splitormerge_switch
'SPLIT', false
Seems that this also doesn't work.

Any ideas why we can't disable region splits?
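
One more thing we could try is HBase's DisabledRegionSplitPolicy. A minimal hbase shell sketch, assuming the 1.4 shell accepts SPLIT_POLICY as a table attribute (the syntax may need verifying on your version):

hbase> alter 'IDX_MARK_O', SPLIT_POLICY => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'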

> On 27 Sep 2018, at 02:52, Vincent Poon  wrote:
> 
> We are planning a Phoenix 4.14.1 release which will have this fix
> 
> On Wed, Sep 26, 2018 at 3:36 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
> Thank you. We will try somehow...
> Is there any chance that this fix will be included in next release for 
> HBASE-1.4 (not 2.0)?
> 
>> On 27 Sep 2018, at 01:04, Ankit Singhal <ankitsingha...@gmail.com> wrote:
>> 
>> You might be hitting PHOENIX-4785 
>> <https://jira.apache.org/jira/browse/PHOENIX-4785>,  you can apply the patch 
>> on top of 4.14 and see if it fixes your problem.
>> 
>> Regards,
>> Ankit Singhal
>> 
>> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
>> Any advices? Helps?
>> I can reproduce problem and capture more logs if needed.
>> 
>>> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> 
>>> Looks like lock goes away 30 minutes after index region split.
>>> So i can assume that this issue comes from cache that configured by this 
>>> option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs
>>> 
>>> 
>>> 
>>>> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>> 
>>>> And how this split looks at Master logs:
>>>> 
>>>> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
>>>> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
>>>> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
>>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>>> state=SPLITTING, ts=1537461904888, server=prod
>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
>>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>>> state=SPLIT, ts=1537461905340, server=pro
>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
>>>> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
>>>> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
>>>> state=OPEN, ts=1537461905341, server=
>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>>> {acb8f16a004a894c8706f6e12cd26144 state=SPLITTING_NEW, ts=1537461905340, 
>>>> server=prod013,60020,1537304282885} to {acb8f16a004a894c8706f6e12cd26144 
>>>> state=OPEN, ts=1537461905341, server=
>>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,343 INFO  
>>>> [AM.ZK.Worker-pool5-t284] master.AssignmentManager: Handled SPLIT event; 
>>>> parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>>>>  daughter a=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1
>>>> Sep 20 19:47:41 prod001 hbase[10838]: 2018-09-20 19:47:41,972 INFO  
>>>> [prod001,6,1537304851459_ChoreService_2] 
>>>> balancer.StochasticLoadBalancer: Skipping load balancing because balanced 
>>>> cluster; total cost is 17.82282205608522, sum multiplier is 1102.0 min 
>>>> cost which need balance is 0.05
>>>> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,021 INFO  
>>>> [prod001,6,1537304851459_ChoreService_1] hbase.MetaTableAccessor: 
>>>> Deleted 
>>>> IDX_MARK_O,\x107834005168\x00

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-26 Thread Batyrshin Alexander
Thank you. We will try somehow...
Is there any chance that this fix will be included in the next release for
HBase 1.4 (not 2.0)?

> On 27 Sep 2018, at 01:04, Ankit Singhal  wrote:
> 
> You might be hitting PHOENIX-4785 
> <https://jira.apache.org/jira/browse/PHOENIX-4785>,  you can apply the patch 
> on top of 4.14 and see if it fixes your problem.
> 
> Regards,
> Ankit Singhal
> 
> On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote:
> Any advices? Helps?
> I can reproduce problem and capture more logs if needed.
> 
>> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
>> 
>> Looks like lock goes away 30 minutes after index region split.
>> So i can assume that this issue comes from cache that configured by this 
>> option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs
>> 
>> 
>> 
>>> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> 
>>> And how this split looks at Master logs:
>>> 
>>> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
>>> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
>>> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>> state=SPLITTING, ts=1537461904888, server=prod
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
>>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>>> state=SPLIT, ts=1537461905340, server=pro
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
>>> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
>>> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
>>> state=OPEN, ts=1537461905341, server=
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>>> {acb8f16a004a894c8706f6e12cd26144 state=SPLITTING_NEW, ts=1537461905340, 
>>> server=prod013,60020,1537304282885} to {acb8f16a004a894c8706f6e12cd26144 
>>> state=OPEN, ts=1537461905341, server=
>>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,343 INFO  
>>> [AM.ZK.Worker-pool5-t284] master.AssignmentManager: Handled SPLIT event; 
>>> parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>>>  daughter a=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1
>>> Sep 20 19:47:41 prod001 hbase[10838]: 2018-09-20 19:47:41,972 INFO  
>>> [prod001,6,1537304851459_ChoreService_2] 
>>> balancer.StochasticLoadBalancer: Skipping load balancing because balanced 
>>> cluster; total cost is 17.82282205608522, sum multiplier is 1102.0 min cost 
>>> which need balance is 0.05
>>> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,021 INFO  
>>> [prod001,6,1537304851459_ChoreService_1] hbase.MetaTableAccessor: 
>>> Deleted 
>>> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.
>>> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,022 INFO  
>>> [prod001,6,1537304851459_ChoreService_1] master.CatalogJanitor: Scanned 
>>> 779 catalog row(s), gc'd 0 unreferenced merged region(s) and 1 unreferenced 
>>> parent region(s)
>>> 
>>>> On 20 Sep 2018, at 21:43, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>>> 
>>>> Looks like problem was because of index region split
>>>> 
>>>> Index region split at prod013:
>>>> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
>>>> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
>>>> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
>>>> master. 
>>>> Parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,15366379

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-26 Thread Batyrshin Alexander
Any advice or help?
I can reproduce the problem and capture more logs if needed.

> On 21 Sep 2018, at 02:13, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> Looks like lock goes away 30 minutes after index region split.
> So i can assume that this issue comes from cache that configured by this 
> option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs
> 
> 
> 
>> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
>> 
>> And how this split looks at Master logs:
>> 
>> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
>> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
>> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>> state=SPLITTING, ts=1537461904888, server=prod
>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
>> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
>> state=SPLIT, ts=1537461905340, server=pro
>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
>> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
>> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
>> state=OPEN, ts=1537461905341, server=
>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
>> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
>> {acb8f16a004a894c8706f6e12cd26144 state=SPLITTING_NEW, ts=1537461905340, 
>> server=prod013,60020,1537304282885} to {acb8f16a004a894c8706f6e12cd26144 
>> state=OPEN, ts=1537461905341, server=
>> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,343 INFO  
>> [AM.ZK.Worker-pool5-t284] master.AssignmentManager: Handled SPLIT event; 
>> parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>>  daughter a=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1
>> Sep 20 19:47:41 prod001 hbase[10838]: 2018-09-20 19:47:41,972 INFO  
>> [prod001,6,1537304851459_ChoreService_2] 
>> balancer.StochasticLoadBalancer: Skipping load balancing because balanced 
>> cluster; total cost is 17.82282205608522, sum multiplier is 1102.0 min cost 
>> which need balance is 0.05
>> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,021 INFO  
>> [prod001,6,1537304851459_ChoreService_1] hbase.MetaTableAccessor: 
>> Deleted 
>> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.
>> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,022 INFO  
>> [prod001,6,1537304851459_ChoreService_1] master.CatalogJanitor: Scanned 
>> 779 catalog row(s), gc'd 0 unreferenced merged region(s) and 1 unreferenced 
>> parent region(s)
>> 
>>> On 20 Sep 2018, at 21:43, Batyrshin Alexander <0x62...@gmail.com> wrote:
>>> 
>>> Looks like problem was because of index region split
>>> 
>>> Index region split at prod013:
>>> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
>>> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
>>> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
>>> master. 
>>> Parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>>>  new regions: 
>>> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1537461904877.33cba925c7acb347ac3f5e70e839c3cb.,
>>>  
>>> IDX_MARK_O,\x107834005168\x46200068=4YF!YI,1537461904877.acb8f16a004a894c8706f6e12cd26144..
>>>  Split took 0sec
>>> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
>>> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
>>> regionserver.SplitRequest: Split transaction journal:
>>> Sep 20 19:45:05 prod013 hbase[193055]: STARTED at 1537461904853
>>> Sep 20 19:45:05 prod013 hbase[193055]: PREPARED at 1537461904877
>>> Sep 20 19:45:05 prod013 hbase[193055]: BEFORE_PRE_SPLIT_HOOK at 
>>> 1537461904877
>>> Se

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-20 Thread Batyrshin Alexander
Looks like the lock goes away 30 minutes after the index region split.
So I can assume that this issue comes from the cache configured by this
option: phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs



> On 21 Sep 2018, at 00:15, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> And how this split looks at Master logs:
> 
> Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
> [AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
> {3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
> state=SPLITTING, ts=1537461904888, server=prod
> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
> {3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
> server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
> state=SPLIT, ts=1537461905340, server=pro
> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
> [AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
> 3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
> {33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
> server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
> state=OPEN, ts=1537461905341, server=
> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
> [AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
> {acb8f16a004a894c8706f6e12cd26144 state=SPLITTING_NEW, ts=1537461905340, 
> server=prod013,60020,1537304282885} to {acb8f16a004a894c8706f6e12cd26144 
> state=OPEN, ts=1537461905341, server=
> Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,343 INFO  
> [AM.ZK.Worker-pool5-t284] master.AssignmentManager: Handled SPLIT event; 
> parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>  daughter a=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1
> Sep 20 19:47:41 prod001 hbase[10838]: 2018-09-20 19:47:41,972 INFO  
> [prod001,6,1537304851459_ChoreService_2] balancer.StochasticLoadBalancer: 
> Skipping load balancing because balanced cluster; total cost is 
> 17.82282205608522, sum multiplier is 1102.0 min cost which need balance is 
> 0.05
> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,021 INFO  
> [prod001,6,1537304851459_ChoreService_1] hbase.MetaTableAccessor: Deleted 
> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.
> Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,022 INFO  
> [prod001,6,1537304851459_ChoreService_1] master.CatalogJanitor: Scanned 
> 779 catalog row(s), gc'd 0 unreferenced merged region(s) and 1 unreferenced 
> parent region(s)
> 
>> On 20 Sep 2018, at 21:43, Batyrshin Alexander <0x62...@gmail.com> wrote:
>> 
>> Looks like problem was because of index region split
>> 
>> Index region split at prod013:
>> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
>> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
>> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
>> master. 
>> Parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>>  new regions: 
>> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1537461904877.33cba925c7acb347ac3f5e70e839c3cb.,
>>  
>> IDX_MARK_O,\x107834005168\x46200068=4YF!YI,1537461904877.acb8f16a004a894c8706f6e12cd26144..
>>  Split took 0sec
>> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
>> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
>> regionserver.SplitRequest: Split transaction journal:
>> Sep 20 19:45:05 prod013 hbase[193055]: STARTED at 1537461904853
>> Sep 20 19:45:05 prod013 hbase[193055]: PREPARED at 1537461904877
>> Sep 20 19:45:05 prod013 hbase[193055]: BEFORE_PRE_SPLIT_HOOK at 
>> 1537461904877
>> Sep 20 19:45:05 prod013 hbase[193055]: AFTER_PRE_SPLIT_HOOK at 
>> 1537461904877
>> Sep 20 19:45:05 prod013 hbase[193055]: SET_SPLITTING at 1537461904880
>> Sep 20 19:45:05 prod013 hbase[193055]: CREATE_SPLIT_DIR at 
>> 1537461904987
>> Sep 20 19:45:05 prod013 hbase[193055]: CLOSED_PARENT_REGION at 
>> 1537461905002
>> Sep 20 19:45:05 prod013 hbase[193055]: OFFLINED_PARENT at 
>> 1537461905002
>> Sep 20 19:45:05 prod013 hbase[193055]: STARTE

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-20 Thread Batyrshin Alexander
And here is how this split looks in the Master logs:

Sep 20 19:45:04 prod001 hbase[10838]: 2018-09-20 19:45:04,888 INFO  
[AM.ZK.Worker-pool5-t282] master.RegionStates: Transition 
{3e44b85ddf407da831dbb9a871496986 state=OPEN, ts=1537304859509, 
server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
state=SPLITTING, ts=1537461904888, server=prod
Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
[AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
{3e44b85ddf407da831dbb9a871496986 state=SPLITTING, ts=1537461905340, 
server=prod013,60020,1537304282885} to {3e44b85ddf407da831dbb9a871496986 
state=SPLIT, ts=1537461905340, server=pro
Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,340 INFO  
[AM.ZK.Worker-pool5-t284] master.RegionStates: Offlined 
3e44b85ddf407da831dbb9a871496986 from prod013,60020,1537304282885
Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
[AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
{33cba925c7acb347ac3f5e70e839c3cb state=SPLITTING_NEW, ts=1537461905340, 
server=prod013,60020,1537304282885} to {33cba925c7acb347ac3f5e70e839c3cb 
state=OPEN, ts=1537461905341, server=
Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,341 INFO  
[AM.ZK.Worker-pool5-t284] master.RegionStates: Transition 
{acb8f16a004a894c8706f6e12cd26144 state=SPLITTING_NEW, ts=1537461905340, 
server=prod013,60020,1537304282885} to {acb8f16a004a894c8706f6e12cd26144 
state=OPEN, ts=1537461905341, server=
Sep 20 19:45:05 prod001 hbase[10838]: 2018-09-20 19:45:05,343 INFO  
[AM.ZK.Worker-pool5-t284] master.AssignmentManager: Handled SPLIT event; 
parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
 daughter a=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1
Sep 20 19:47:41 prod001 hbase[10838]: 2018-09-20 19:47:41,972 INFO  
[prod001,6,1537304851459_ChoreService_2] balancer.StochasticLoadBalancer: 
Skipping load balancing because balanced cluster; total cost is 
17.82282205608522, sum multiplier is 1102.0 min cost which need balance is 0.05
Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,021 INFO  
[prod001,6,1537304851459_ChoreService_1] hbase.MetaTableAccessor: Deleted 
IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.
Sep 20 19:47:42 prod001 hbase[10838]: 2018-09-20 19:47:42,022 INFO  
[prod001,6,1537304851459_ChoreService_1] master.CatalogJanitor: Scanned 779 
catalog row(s), gc'd 0 unreferenced merged region(s) and 1 unreferenced parent 
region(s)

> On 20 Sep 2018, at 21:43, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> Looks like problem was because of index region split
> 
> Index region split at prod013:
> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
> regionserver.SplitRequest: Region split, hbase:meta updated, and report to 
> master. 
> Parent=IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1536637905252.3e44b85ddf407da831dbb9a871496986.,
>  new regions: 
> IDX_MARK_O,\x107834005168\x46200020LWfBS4c,1537461904877.33cba925c7acb347ac3f5e70e839c3cb.,
>  
> IDX_MARK_O,\x107834005168\x46200068=4YF!YI,1537461904877.acb8f16a004a894c8706f6e12cd26144..
>  Split took 0sec
> Sep 20 19:45:05 prod013 hbase[193055]: 2018-09-20 19:45:05,441 INFO  
> [regionserver/prod013/10.0.0.13:60020-splits-1537400010677] 
> regionserver.SplitRequest: Split transaction journal:
> Sep 20 19:45:05 prod013 hbase[193055]: STARTED at 1537461904853
> Sep 20 19:45:05 prod013 hbase[193055]: PREPARED at 1537461904877
> Sep 20 19:45:05 prod013 hbase[193055]: BEFORE_PRE_SPLIT_HOOK at 
> 1537461904877
> Sep 20 19:45:05 prod013 hbase[193055]: AFTER_PRE_SPLIT_HOOK at 
> 1537461904877
> Sep 20 19:45:05 prod013 hbase[193055]: SET_SPLITTING at 1537461904880
> Sep 20 19:45:05 prod013 hbase[193055]: CREATE_SPLIT_DIR at 
> 1537461904987
> Sep 20 19:45:05 prod013 hbase[193055]: CLOSED_PARENT_REGION at 
> 1537461905002
> Sep 20 19:45:05 prod013 hbase[193055]: OFFLINED_PARENT at 
> 1537461905002
> Sep 20 19:45:05 prod013 hbase[193055]: STARTED_REGION_A_CREATION at 
> 1537461905056
> Sep 20 19:45:05 prod013 hbase[193055]: STARTED_REGION_B_CREATION at 
> 1537461905131
> Sep 20 19:45:05 prod013 hbase[193055]: PONR at 1537461905192
> Sep 20 19:45:05 prod013 hbase[193055]: OPENED_REGION_A at 
> 1537461905249
> Sep 20 19:45:05 prod013 hbase[193055]: OPENED_REGION_B at 
> 1537461905252
> Sep 20 19:45:05 prod013 hbase[193055]: BEFORE_POST_SPLIT_HOOK at 
> 1537461905439
> Sep 20 19:45:05 prod013 hbase[193055]: AFTER_POST_SPLIT_HOOK at 
> 1537461905439
> Sep 20 19:45:05 prod013 hbase[193055]:

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-20 Thread Batyrshin Alexander
 20 20:09:24 prod002 hbase[97285]: 2018-09-20 20:09:24,572 INFO  
[RpcServer.default.FPBQ.Fifo.handler=98,queue=8,port=60020-SendThread(10.0.0.3:2181)]
 zookeeper.ClientCnxn: Session establishment complete on server 
10.0.0.3/10.0.0.3:2181, sessionid = 0x3e039e01c7f, negotiated timeout = 
4
Sep 20 20:09:24 prod002 hbase[97285]: 2018-09-20 20:09:24,628 INFO  
[RpcServer.default.FPBQ.Fifo.handler=98,queue=8,port=60020] 
index.PhoenixIndexFailurePolicy: Successfully update INDEX_DISABLE_TIMESTAMP 
for IDX_MARK_O due to an exception while writing updates. 
indexState=PENDING_DISABLE
Sep 20 20:09:24 prod002 hbase[97285]: 
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
disableIndexOnFailure=true, Failed to write to multiple index tables: 
[IDX_MARK_O]
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:235)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:620)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:595)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:578)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1048)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1711)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1745)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1044)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3646)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3108)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3050)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:916)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:844)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2405)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2359)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
Sep 20 20:09:24 prod002 hbase[97285]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Sep 20 20:09:24 prod002 hbase[97285]: 2018-09-20 20:09:24,632 INFO  
[RpcServer.default.FPBQ.Fifo.handler=98,queue=8,port=60020] 
util.IndexManagementUtil: Rethrowing 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the 
index failed.  disableIndexOnFailure=true, Failed to write to multiple index 
tables: [IDX_MARK_O] ,serverTimestamp=1537463364504,


> On 20 Sep 2018, at 21:01, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> Our setup:
> HBase-1.4.7
> Phoenix-4.14-hbase-1.4
> 
> 
>> On 20 Sep 2018, at 20:19, Batyrshin Alexander <0x62...@gmail.com 
>> <mailto:0x62...@gmail.com>> wrote:
>> 
>>  Hello,
>> Looks like we got a deadlock with a repeating "ERROR 1120 (XCL20)" exception. 
>> At this time all indexes are ACTIVE.
>> Can you help us make a deeper diagnosis?
>> 
>> java.sql.SQLEx

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-20 Thread Batyrshin Alexander
Our setup:
HBase-1.4.7
Phoenix-4.14-hbase-1.4


> On 20 Sep 2018, at 20:19, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
>  Hello,
> Looks like we got a deadlock with a repeating "ERROR 1120 (XCL20)" exception. At 
> this time all indexes are ACTIVE.
> Can you help us make a deeper diagnosis?
> 
> java.sql.SQLException: ERROR 1120 (XCL20): Writes to table blocked until 
> index can be updated. tableName=TBL_MARK
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.execute.MutationState.validateAndGetServerTimestamp(MutationState.java:815)
>   at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:789)
>   at org.apache.phoenix.execute.MutationState.send(MutationState.java:981)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1514)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1337)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
>   at 
> x.persistence.phoenix.PhoenixDao.$anonfun$doUpsert$1(PhoenixDao.scala:103)
>   at scala.util.Try$.apply(Try.scala:209)
>   at x.persistence.phoenix.PhoenixDao.doUpsert(PhoenixDao.scala:101)
>   at 
> x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$2(PhoenixDao.scala:45)
>   at 
> x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$2$adapted(PhoenixDao.scala:45)
>   at scala.collection.immutable.Stream.flatMap(Stream.scala:486)
>   at 
> scala.collection.immutable.Stream.$anonfun$flatMap$1(Stream.scala:494)
>   at scala.collection.immutable.Stream.$anonfun$append$1(Stream.scala:252)
>   at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1169)
>   at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1159)
>   at scala.collection.immutable.Stream.length(Stream.scala:309)
>   at scala.collection.SeqLike.size(SeqLike.scala:105)
>   at scala.collection.SeqLike.size$(SeqLike.scala:105)
>   at scala.collection.AbstractSeq.size(Seq.scala:41)
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:285)
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
>   at scala.collection.AbstractTraversable.toArray(Traversable.scala:104)
>   at 
> x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$1(PhoenixDao.scala:45)
>   at scala.util.Try$.apply(Try.scala:209)
>   at x.persistence.phoenix.PhoenixDao.batchInsert(PhoenixDao.scala:45)
>   at 
> x.persistence.phoenix.PhoenixDao.$anonfun$insert$2(PhoenixDao.scala:35)
>   at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
>   at scala.util.Success.$anonfun$map$1(Try.scala:251)
>   at scala.util.Success.map(Try.scala:209)
>   at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
>   at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
>   at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 
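
For reference, Phoenix raises this on the client as a java.sql.SQLException with error code 1120 (SQLSTATE XCL20), and the write usually becomes possible again once the index leaves the PENDING_DISABLE window. A minimal client-side retry sketch, assuming plain JDBC; the attempt limit and backoff values are illustrative, not taken from this thread:

    private static final int WRITES_BLOCKED = 1120; // ERROR 1120 (XCL20)

    static void commitWithRetry(java.sql.Connection conn, int maxAttempts)
            throws java.sql.SQLException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.commit();
                return; // commit went through; the index is writable again
            } catch (java.sql.SQLException e) {
                if (e.getErrorCode() != WRITES_BLOCKED || attempt >= maxAttempts) {
                    throw e; // not XCL20, or retries exhausted
                }
                Thread.sleep(1000L * attempt); // back off while the index recovers
            }
        }
    }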



Re: MutationState size is bigger than maximum allowed number of bytes

2018-09-20 Thread Batyrshin Alexander
Nope, it was client-side config.
Thank you for the response.

> On 20 Sep 2018, at 05:36, Jaanai Zhang  wrote:
> 
> Are you configuring these on the server side? Your "UPSERT SELECT" statement 
> will be executed on the server side.
> 
> 
>Jaanai Zhang
>Best regards!
> 
> 
> 
> Batyrshin Alexander <0x62...@gmail.com <mailto:0x62...@gmail.com>> 
> wrote on Thursday, Sep 20, 2018 at 7:48 AM:
> I've tried to copy one table to another via an UPSERT SELECT statement and got 
> these errors:
> 
> Phoenix-4.14-hbase-1.4
> 
> 0: jdbc:phoenix:> !autocommit on
> Autocommit status: true
> 0: jdbc:phoenix:>
> 0: jdbc:phoenix:> UPSERT INTO TABLE_V2 ("c", "id", "gt")
> . . . . . . . . > SELECT "c", "id", "gt" FROM TABLE;
> Error: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed 
> number of bytes (state=LIM02,code=730)
> java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:377)
> at org.apache.phoenix.execute.MutationState.join(MutationState.java:478)
> at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:104)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.peek(ConcatResultIterator.java:112)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:100)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at org.apache.phoenix.trace.TracingIterator.next(TracingIterator.java:56)
> at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1301)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
> 
> Config:
> 
> <property>
>   <name>phoenix.mutate.batchSize</name>
>   <value>200</value>
> </property>
> <property>
>   <name>phoenix.mutate.maxSize</name>
>   <value>25</value>
> </property>
> <property>
>   <name>phoenix.mutate.maxSizeBytes</name>
>   <value>10485760</value>
> </property>
> 
> Also mentioned this at https://issues.apache.org/jira/browse/PHOENIX-4671 
> <https://issues.apache.org/jira/browse/PHOENIX-4671>


Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-20 Thread Batyrshin Alexander
 Hello,
Looks like we got a deadlock with a repeating "ERROR 1120 (XCL20)" exception. At 
this time all indexes are ACTIVE.
Can you help us make a deeper diagnosis?

java.sql.SQLException: ERROR 1120 (XCL20): Writes to table blocked until index 
can be updated. tableName=TBL_MARK
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.execute.MutationState.validateAndGetServerTimestamp(MutationState.java:815)
at 
org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:789)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:981)
at 
org.apache.phoenix.execute.MutationState.send(MutationState.java:1514)
at 
org.apache.phoenix.execute.MutationState.commit(MutationState.java:1337)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$doUpsert$1(PhoenixDao.scala:103)
at scala.util.Try$.apply(Try.scala:209)
at x.persistence.phoenix.PhoenixDao.doUpsert(PhoenixDao.scala:101)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$2(PhoenixDao.scala:45)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$2$adapted(PhoenixDao.scala:45)
at scala.collection.immutable.Stream.flatMap(Stream.scala:486)
at 
scala.collection.immutable.Stream.$anonfun$flatMap$1(Stream.scala:494)
at scala.collection.immutable.Stream.$anonfun$append$1(Stream.scala:252)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1169)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1159)
at scala.collection.immutable.Stream.length(Stream.scala:309)
at scala.collection.SeqLike.size(SeqLike.scala:105)
at scala.collection.SeqLike.size$(SeqLike.scala:105)
at scala.collection.AbstractSeq.size(Seq.scala:41)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:285)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
at scala.collection.AbstractTraversable.toArray(Traversable.scala:104)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$batchInsert$1(PhoenixDao.scala:45)
at scala.util.Try$.apply(Try.scala:209)
at x.persistence.phoenix.PhoenixDao.batchInsert(PhoenixDao.scala:45)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$insert$2(PhoenixDao.scala:35)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)



MutationState size is bigger than maximum allowed number of bytes

2018-09-19 Thread Batyrshin Alexander
I've tried to copy one table to another via an UPSERT SELECT statement and got 
these errors:

Phoenix-4.14-hbase-1.4

0: jdbc:phoenix:> !autocommit on
Autocommit status: true
0: jdbc:phoenix:>
0: jdbc:phoenix:> UPSERT INTO TABLE_V2 ("c", "id", "gt")
. . . . . . . . > SELECT "c", "id", "gt" FROM TABLE;
Error: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed 
number of bytes (state=LIM02,code=730)
java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than 
maximum allowed number of bytes
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:377)
at org.apache.phoenix.execute.MutationState.join(MutationState.java:478)
at 
org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:104)
at 
org.apache.phoenix.iterate.ConcatResultIterator.peek(ConcatResultIterator.java:112)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:100)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at org.apache.phoenix.trace.TracingIterator.next(TracingIterator.java:56)
at 
org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1301)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

Config:


<property>
  <name>phoenix.mutate.batchSize</name>
  <value>200</value>
</property>
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>25</value>
</property>
<property>
  <name>phoenix.mutate.maxSizeBytes</name>
  <value>10485760</value>
</property>


Also mentioned this at https://issues.apache.org/jira/browse/PHOENIX-4671
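
A minimal client-side sketch of one workaround, assuming the mutation limits can be raised per-connection via java.util.Properties (values, URL, and the autocommit choice are illustrative):

    java.util.Properties props = new java.util.Properties();
    props.setProperty("phoenix.mutate.maxSize", "500000");         // max buffered rows
    props.setProperty("phoenix.mutate.maxSizeBytes", "104857600"); // 100 MB client buffer
    try (java.sql.Connection conn =
             java.sql.DriverManager.getConnection("jdbc:phoenix:127.0.0.1", props)) {
        conn.setAutoCommit(true); // let Phoenix flush in batches while the SELECT runs
        try (java.sql.Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "UPSERT INTO TABLE_V2 (\"c\", \"id\", \"gt\") "
                + "SELECT \"c\", \"id\", \"gt\" FROM TABLE");
        }
    }

On versions affected by PHOENIX-4671 the statement may still be executed client-side despite autocommit, so splitting the SELECT into primary-key ranges is the more robust fallback.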

Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Indeed. I see that this exception was thrown somewhere near the Docker restart time.
Thank you for the response.


> On 20 Sep 2018, at 02:34, Sergey Soldatov  wrote:
> 
> That might be a misleading message. Actually, that means that JVM shutdown 
> has been triggered (so runtime has executed the shutdown hook for the driver 
> and that's the only place where we set this message) and after that, another 
> thread was trying to create a new connection. 
> 
> Thanks,
> Sergey 
> 
> On Wed, Sep 19, 2018 at 11:17 AM Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
> Version:
> 
> Phoenix-4.14.0-HBase-1.4
> 
> Full trace is:
> 
> java.lang.IllegalStateException: Phoenix driver closed because server is 
> shutting down
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:270)
> at 
> x.persistence.phoenix.ConnectionManager.get(ConnectionManager.scala:12)
> at 
> x.persistence.phoenix.PhoenixDao.$anonfun$count$1(PhoenixDao.scala:58)
> at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
> at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
> at scala.util.Success.$anonfun$map$1(Try.scala:251)
> at scala.util.Success.map(Try.scala:209)
> at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
> at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
> at 
> scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 
> 
> 
> 
> > On 19 Sep 2018, at 20:13, Josh Elser  > <mailto:els...@apache.org>> wrote:
> > 
> > What version of Phoenix are you using? Is this the full stack trace you see 
> > that touches Phoenix (or HBase) classes?
> > 
> > On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
> >> Is there any reason for this exception? Which server exactly is shutting 
> >> down if we use a quorum of ZooKeepers?
> >> java.lang.IllegalStateException: Phoenix driver closed because server is 
> >> shutting down at 
> >> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> >>  at 
> >> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285) 
> >> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) 
> >> at java.sql.DriverManager.getConnection(DriverManager.java:664) at 
> >> java.sql.DriverManager.getConnection(DriverManager.java:270)
> 
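
Given that explanation, the practical mitigation is to stop the threads that create Phoenix connections before the JVM begins shutting down. A minimal sketch, assuming an ExecutorService drives the queries; note that ordering between independent shutdown hooks is not guaranteed, so this only narrows the race:

    java.util.concurrent.ExecutorService pool =
            java.util.concurrent.Executors.newFixedThreadPool(8);

    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        pool.shutdown(); // stop accepting new query tasks
        try {
            // let in-flight statements finish before the Phoenix driver's
            // own shutdown hook closes everything
            pool.awaitTermination(30, java.util.concurrent.TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }));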



Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Version:

Phoenix-4.14.0-HBase-1.4

Full trace is:

java.lang.IllegalStateException: Phoenix driver closed because server is 
shutting down
at 
org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
at 
org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at 
x.persistence.phoenix.ConnectionManager.get(ConnectionManager.scala:12)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$count$1(PhoenixDao.scala:58)
at 
scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




> On 19 Sep 2018, at 20:13, Josh Elser  wrote:
> 
> What version of Phoenix are you using? Is this the full stack trace you see 
> that touches Phoenix (or HBase) classes?
> 
> On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
>> Is there any reason for this exception? Which server exactly is shutting 
>> down if we use a quorum of ZooKeepers?
>> java.lang.IllegalStateException: Phoenix driver closed because server is 
>> shutting down at 
>> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
>>  at 
>> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285) at 
>> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) at 
>> java.sql.DriverManager.getConnection(DriverManager.java:664) at 
>> java.sql.DriverManager.getConnection(DriverManager.java:270)



IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Is there any reason for this exception? Which server exactly is shutting down 
if we use a quorum of ZooKeepers?

java.lang.IllegalStateException: Phoenix driver closed because server is 
shutting down
at 
org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
at 
org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)

Re: ABORTING region server and following HBase cluster "crash"

2018-09-15 Thread Batyrshin Alexander
I've found that we still have not configured this:

hbase.region.server.rpc.scheduler.factory.class = 
org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory

Can this misconfiguration lead to our problems?
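
For reference, the server-side setup documented for Phoenix secondary indexing is an hbase-site.xml fragment along these lines on every RegionServer (verify the exact property set against the docs for your Phoenix version):

    <property>
      <name>hbase.region.server.rpc.scheduler.factory.class</name>
      <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    </property>
    <property>
      <name>hbase.rpc.controllerfactory.class</name>
      <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    </property>

Without the Phoenix RPC scheduler, index updates compete with regular handlers for RPC slots, which can aggravate the write-path stalls discussed in this thread.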

> On 15 Sep 2018, at 02:04, Sergey Soldatov  wrote:
> 
> That was a real problem quite a long time ago (a couple of years?). I can't say 
> for sure in which version it was fixed, but now indexes have priority over 
> regular tables and their regions open first. So by the time we replay 
> WALs for tables, all index regions are supposed to be online. If you see the 
> problem on recent versions, that usually means the cluster is not healthy and 
> some of the index regions are stuck in RIT state.
> 
> Thanks,
> Sergey
> 
> On Thu, Sep 13, 2018 at 8:12 PM Jonathan Leech  <mailto:jonat...@gmail.com>> wrote:
> This seems similar to a failure scenario I’ve seen a couple times. I believe 
> after multiple restarts you got lucky and tables were brought up by Hbase in 
> the correct order. 
> 
> What happens is some kind of semi-catastrophic failure where 1 or more region 
> servers go down with edits that weren’t flushed, and are only in the WAL. 
> These edits belong to regions whose tables have secondary indexes. HBase 
> wants to replay the WAL before bringing up the region server. Phoenix wants 
> to talk to the index region during this, but can’t. It fails enough times 
> then stops. 
> 
> The more region servers / tables / indexes affected, the more likely that a 
> full restart will get stuck in a classic deadlock. A good old-fashioned data 
> center outage is a great way to get started with this kind of problem. You 
> might make some progress and get stuck again, or restart number N might get 
> those index regions initialized before the main table. 
> 
> The surefire way to recover a cluster in this condition is to strategically 
> disable all the tables that are failing to come up. You can do this from the 
> HBase shell as long as the master is running. If I remember right, it's a 
> pain since the disable command will hang. You might need to disable a table, 
> kill the shell, disable the next table, etc. Then restart. You’ll eventually 
> have a cluster with all the region servers finally started, and a bunch of 
> disabled regions. If you disabled index tables, enable one, wait for it to 
> become available; eg its WAL edits will be replayed, then enable the 
> associated main table and wait for it to come online. If Hbase did it’s job 
> without error, and your failure didn’t include losing 4 disks at once, order 
> will be restored. Lather, rinse, repeat until everything is enabled and 
> online. 
> 
>  A big enough failure sprinkled with a little bit of bad luck and what 
> seems to be a Phoenix flaw == deadlock trying to get HBase to start up. Fix 
> by forcing the order in which HBase brings regions online. Finally, never go full 
> restart. 
> 
> > On Sep 10, 2018, at 7:30 PM, Batyrshin Alexander <0x62...@gmail.com 
> > <mailto:0x62...@gmail.com>> wrote:
> > 
> > After the update, the web interface at the Master shows that every region 
> > server is now 1.4.7 and there are no RITs.
> > 
> > The cluster recovered only after we restarted all region servers 4 times...
> > 
> >> On 11 Sep 2018, at 04:08, Josh Elser  >> <mailto:els...@apache.org>> wrote:
> >> 
> >> Did you update the HBase jars on all RegionServers?
> >> 
> >> Make sure that you have all of the Regions assigned (no RITs). There could 
> >> be a pretty simple explanation as to why the index can't be written to.
> >> 
> >>> On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
> >>> Correct me if I'm wrong.
> >>> But it looks like if you have region servers A and B that host index and 
> >>> primary table regions, then a situation like this is possible:
> >>> A and B are under writes on a table with indexes
> >>> A crashes
> >>> B fails on an index update because A is not operating, then B starts 
> >>> aborting
> >>> A, after restart, tries to rebuild the index from the WAL, but B is 
> >>> aborting at this time, so A starts aborting too
> >>> From this moment nothing happens (0 requests to region servers) and A and 
> >>> B are not responsive in the Master-status web interface
> >>>> On 9 Sep 2018, at 04:38, Batyrshin Alexander <0x62...@gmail.com 
> >>>> <mailto:0x62...@gmail.com> <mailto:0x62...@gmail.com 
> >>>> <mailto:0x62...@gmail.com>>> wrote:
> >>>> 
> >>>> After the update we still can't recover the HBase cluster. Our region 
> >>>> servers are ABORTING over and over:
>
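
A sketch of the disable/enable sequence described above, using table names from this thread; the exact set of stuck tables varies, and as noted the disable command may hang, in which case kill the shell and continue with the next table:

    hbase(main):001:0> disable 'IDX_MARK_O'
    hbase(main):002:0> disable 'TBL_MARK'
    ... restart the RegionServers ...
    hbase(main):001:0> enable 'IDX_MARK_O'    # index table first, so its WAL edits replay
    hbase(main):002:0> enable 'TBL_MARK'      # then the data table that writes to it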

Re: ABORTING region server and following HBase cluster "crash"

2018-09-10 Thread Batyrshin Alexander
After the update, the web interface at the Master shows that every region server is 
now 1.4.7 and there are no RITs.

The cluster recovered only after we restarted all region servers 4 times...

> On 11 Sep 2018, at 04:08, Josh Elser  wrote:
> 
> Did you update the HBase jars on all RegionServers?
> 
> Make sure that you have all of the Regions assigned (no RITs). There could be 
> a pretty simple explanation as to why the index can't be written to.
> 
> On 9/9/18 3:46 PM, Batyrshin Alexander wrote:
>> Correct me if I'm wrong.
>> But it looks like if you have region servers A and B that host index and primary 
>> table regions, then a situation like this is possible:
>> A and B are under writes on a table with indexes
>> A crashes
>> B fails on an index update because A is not operating, then B starts aborting
>> A, after restart, tries to rebuild the index from the WAL, but B is aborting 
>> at this time, so A starts aborting too
>> From this moment nothing happens (0 requests to region servers) and A and B 
>> are not responsive in the Master-status web interface
>>> On 9 Sep 2018, at 04:38, Batyrshin Alexander <0x62...@gmail.com 
>>> <mailto:0x62...@gmail.com>> wrote:
>>> 
>>> After the update we still can't recover the HBase cluster. Our region servers 
>>> are ABORTING over and over:
>>> 
>>> prod003:
>>> Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=92,queue=2,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536446665703: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=77,queue=7,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536446665703: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:52:19 prod003 hbase[1440]: 2018-09-09 02:52:19,224 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=82,queue=2,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536446665703: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:52:28 prod003 hbase[1440]: 2018-09-09 02:52:28,922 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=94,queue=4,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536446665703: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:55:02 prod003 hbase[957]: 2018-09-09 02:55:02,096 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=95,queue=5,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536450772841: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:55:18 prod003 hbase[957]: 2018-09-09 02:55:18,793 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=97,queue=7,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod003,60020,1536450772841: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> 
>>> prod004:
>>> Sep 09 02:52:13 prod004 hbase[4890]: 2018-09-09 02:52:13,541 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=83,queue=3,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod004,60020,1536446387325: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:52:50 prod004 hbase[4890]: 2018-09-09 02:52:50,264 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=75,queue=5,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod004,60020,1536446387325: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:53:40 prod004 hbase[4890]: 2018-09-09 02:53:40,709 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=66,queue=6,port=60020] 
>>> regionserver.HRegionServer: ABORTING region server 
>>> prod004,60020,1536446387325: Could not update the index table, killing 
>>> server region because couldn't write to an index table
>>> Sep 09 02:54:00 prod004 hbase[4890]: 2018-09-09 02:54:00,060 FATAL 
>>> [RpcServer.default.FPBQ.Fifo.handler=89,queue=9,port=60020] 
>>

Re: ABORTING region server and following HBase cluster "crash"

2018-09-09 Thread Batyrshin Alexander
Correct me if I'm wrong.

But it looks like if you have region servers A and B that host index and primary 
table regions, then a situation like this is possible:

A and B are under writes on a table with indexes
A crashes
B fails on an index update because A is not operating, then B starts aborting
A, after restart, tries to rebuild the index from the WAL, but B is aborting at 
this time, so A starts aborting too
From this moment nothing happens (0 requests to region servers) and A and B are 
not responsive in the Master-status web interface


> On 9 Sep 2018, at 04:38, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> After the update we still can't recover the HBase cluster. Our region servers 
> are ABORTING over and over:
> 
> prod003:
> Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=92,queue=2,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536446665703: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:51:27 prod003 hbase[1440]: 2018-09-09 02:51:27,395 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=77,queue=7,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536446665703: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:52:19 prod003 hbase[1440]: 2018-09-09 02:52:19,224 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=82,queue=2,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536446665703: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:52:28 prod003 hbase[1440]: 2018-09-09 02:52:28,922 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=94,queue=4,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536446665703: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:55:02 prod003 hbase[957]: 2018-09-09 02:55:02,096 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=95,queue=5,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536450772841: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:55:18 prod003 hbase[957]: 2018-09-09 02:55:18,793 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=97,queue=7,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536450772841: Could not update the index table, killing server 
> region because couldn't write to an index table
> 
> prod004:
> Sep 09 02:52:13 prod004 hbase[4890]: 2018-09-09 02:52:13,541 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=83,queue=3,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod004,60020,1536446387325: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:52:50 prod004 hbase[4890]: 2018-09-09 02:52:50,264 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=75,queue=5,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod004,60020,1536446387325: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:53:40 prod004 hbase[4890]: 2018-09-09 02:53:40,709 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=66,queue=6,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod004,60020,1536446387325: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:54:00 prod004 hbase[4890]: 2018-09-09 02:54:00,060 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=89,queue=9,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod004,60020,1536446387325: Could not update the index table, killing server 
> region because couldn't write to an index table
> 
> prod005:
> Sep 09 02:52:50 prod005 hbase[3772]: 2018-09-09 02:52:50,661 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=65,queue=5,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod005,60020,153644649: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:53:27 prod005 hbase[3772]: 2018-09-09 02:53:27,542 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=90,queue=0,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod005,60020,153644649: Could not update the index table, killing server 
> region because couldn't write to an index table
> Sep 09 02:54:00 prod005 hbase[3772]: 2018-09-09 02:53:59,915 FATAL 
> [RpcServer.default.FPBQ.Fifo.handler=7,queue=7,port=60020] 
> regionserver.HRegionServer: ABORTING region server 
> prod005,60020,153644649: Co

Re: SKIP_SCAN on variable length keys

2018-09-09 Thread Batyrshin Alexander
Thank you for the reply.

> On 4 Sep 2018, at 21:04, Sergey Soldatov  wrote:
> 
> SKIP SCAN doesn't use FuzzyRowFilter. It has its own SkipScanFilter. If you 
> see problems, please provide more details or file a JIRA for that. 
> 
> Thanks,
> Sergey
> 
> On Wed, Aug 29, 2018 at 2:17 PM Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> I'm wondering whether there is any issue with SKIP SCAN when variable-length 
> columns are used in a composite key.
> My suspicion comes from FuzzyRowFilter, which takes a fuzzy row-key template 
> with fixed positions.
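
A quick way to check which filter is in play is the query plan. A sketch with a hypothetical two-column VARCHAR key (plan wording is abbreviated and version-dependent):

    EXPLAIN SELECT * FROM t WHERE pk1 IN ('a', 'ccc') AND pk2 = 'x';
    -- expect something like: CLIENT ... SKIP SCAN ON 2 KEYS OVER t ...

The zero-byte separator Phoenix writes after each variable-length key part is what lets SkipScanFilter position itself without fixed offsets.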



Re: ABORTING region server and following HBase cluster "crash"

2018-09-08 Thread Batyrshin Alexander
]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:620)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:595)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:578)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1048)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1711)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1745)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1044)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3646)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3108)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3050)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:271)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatchWithRetries(UngroupedAggregateRegionObserver.java:241)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:1068)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:386)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:239)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:287)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2843)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3080)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2354)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
Sep 09 02:54:30 prod005 hbase[3772]: at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)

> On 9 Sep 2018, at 01:44, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
> Thank you.
> We're updating our cluster right now...
> 
> 
>> On 9 Sep 2018, at 01:39, Ted Yu > <mailto:yuzhih...@gmail.com>> wrote:
>> 
>> It seems you should deploy hbase with the following fix:
>> 
>> HBASE-21069 NPE in StoreScanner.updateReaders causes RS to crash
>> 
>> 1.4.7 was recently released.
>> 
>> FYI
>> 
>> On Sat, Sep 8, 2018 at 3:32 PM Batyrshin Alexander <0x62...@gmail.com 
>> <mailto:0x62...@gmail.com>> wrote:
>>  Hello,
>> 
>> We got this exception from prod006 server
>> 
>> Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL 
>> [MemStoreFlusher.1] regionserver.HRegionServer: ABORTING region server 
>> prod006,60020,1536235102833: Replay of WAL required. Forcing server shutdown
>> Sep 09 00:38:02 prod006 hbase[18907]: 
>> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
>> KM,c\xEF\xBF\xBD\x16I7\xEF\xBF\xBD\x0A"A\xEF\xBF\xBDd\xEF\xBF\xBD\xEF\xBF\xBD\

Re: ABORTING region server and following HBase cluster "crash"

2018-09-08 Thread Batyrshin Alexander
Thank you.
We're updating our cluster right now...


> On 9 Sep 2018, at 01:39, Ted Yu  wrote:
> 
> It seems you should deploy hbase with the following fix:
> 
> HBASE-21069 NPE in StoreScanner.updateReaders causes RS to crash
> 
> 1.4.7 was recently released.
> 
> FYI
> 
> On Sat, Sep 8, 2018 at 3:32 PM Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> 
> We got this exception from prod006 server
> 
> Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL 
> [MemStoreFlusher.1] regionserver.HRegionServer: ABORTING region server 
> prod006,60020,1536235102833: Replay of WAL required. Forcing server shutdown
> Sep 09 00:38:02 prod006 hbase[18907]: 
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> KM,c\xEF\xBF\xBD\x16I7\xEF\xBF\xBD\x0A"A\xEF\xBF\xBDd\xEF\xBF\xBD\xEF\xBF\xBD\x19\x07t,1536178245576.60c121ba50e67f2429b9ca2ba2a11bad.
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2645)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2322)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2284)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2170)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2095)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> java.lang.Thread.run(Thread.java:748)
> Sep 09 00:38:02 prod006 hbase[18907]: Caused by: 
> java.lang.NullPointerException
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> java.util.ArrayList.(ArrayList.java:178)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:863)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1172)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1145)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:122)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2505)
> Sep 09 00:38:02 prod006 hbase[18907]: at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2600)
> Sep 09 00:38:02 prod006 hbase[18907]: ... 9 more
> Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL 
> [MemStoreFlusher.1] regionserver.HRegionServer: RegionServer abort: loaded 
> coprocessors are: 
> [org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator, 
> org.apache.phoenix.coprocessor.SequenceRegionObserver, org.apache.phoenix.c
> 
> After that we got ABORTING on almost every Region Servers in cluster with 
> different reasons:
> 
> prod003
> Sep 09 01:12:11 prod003 hbase[11552]: 2018-09-09 01:12:11,799 FATAL 
> [PostOpenDeployTasks:88bfac1dfd807c4cd1e9c1f31b4f053f] 
> regionserver.HRegionServer: ABORTING region server 
> prod003,60020,1536444066291: Exception running postOpenDeployTasks; 
> region=88bfac1dfd807c4cd1e9c1f31b4f053f
> Sep 09 01:12:11 prod003 hbase[11552]: java.io.InterruptedIOException: #139, 
> interrupted. currentNumberOfTask=8
> Sep 09 01:12:11 prod003 hbase[11552]: at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1853)
> Sep 09 01:12:11 prod003 hbase[11552]: at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1823)
> Sep 09 01:12:11 prod003 hbase[11552]: at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1899)
> Sep 09 01:12:11 prod003 hbase[11552]: at 
> org.apache.ha

ABORTING region server and following HBase cluster "crash"

2018-09-08 Thread Batyrshin Alexander
 Hello,

We got this exception from prod006 server

Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL 
[MemStoreFlusher.1] regionserver.HRegionServer: ABORTING region server 
prod006,60020,1536235102833: Replay of WAL required. Forcing server shutdown
Sep 09 00:38:02 prod006 hbase[18907]: 
org.apache.hadoop.hbase.DroppedSnapshotException: region: 
KM,c\xEF\xBF\xBD\x16I7\xEF\xBF\xBD\x0A"A\xEF\xBF\xBDd\xEF\xBF\xBD\xEF\xBF\xBD\x19\x07t,1536178245576.60c121ba50e67f2429b9ca2ba2a11bad.
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2645)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2322)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2284)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2170)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2095)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
Sep 09 00:38:02 prod006 hbase[18907]: at 
java.lang.Thread.run(Thread.java:748)
Sep 09 00:38:02 prod006 hbase[18907]: Caused by: java.lang.NullPointerException
Sep 09 00:38:02 prod006 hbase[18907]: at 
java.util.ArrayList.(ArrayList.java:178)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:863)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1172)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1145)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:122)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2505)
Sep 09 00:38:02 prod006 hbase[18907]: at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2600)
Sep 09 00:38:02 prod006 hbase[18907]: ... 9 more
Sep 09 00:38:02 prod006 hbase[18907]: 2018-09-09 00:38:02,532 FATAL 
[MemStoreFlusher.1] regionserver.HRegionServer: RegionServer abort: loaded 
coprocessors are: 
[org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator, 
org.apache.phoenix.coprocessor.SequenceRegionObserver, org.apache.phoenix.c

After that we got ABORTING on almost every Region Servers in cluster with 
different reasons:

prod003
Sep 09 01:12:11 prod003 hbase[11552]: 2018-09-09 01:12:11,799 FATAL 
[PostOpenDeployTasks:88bfac1dfd807c4cd1e9c1f31b4f053f] 
regionserver.HRegionServer: ABORTING region server prod003,60020,1536444066291: 
Exception running postOpenDeployTasks; region=88bfac1dfd807c4cd1e9c1f31b4f053f
Sep 09 01:12:11 prod003 hbase[11552]: java.io.InterruptedIOException: #139, 
interrupted. currentNumberOfTask=8
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1853)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.AsyncProcess.waitForMaximumCurrentTasks(AsyncProcess.java:1823)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1899)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:250)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:213)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1484)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.client.HTable.put(HTable.java:1031)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1033)
Sep 09 01:12:11 prod003 hbase[11552]: at 
org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1023)
Sep 09 01:12:11 prod003 hbase[11552]: at 

Re: Unable to find cached index metadata

2018-09-02 Thread Batyrshin Alexander
Yes, it's longer. 
Thank you. We will try to decrease the batch size.

> On 3 Sep 2018, at 04:14, Thomas D'Silva  wrote:
> 
> Is your cluster under heavy write load when you see these exceptions? How 
> long does it take to write a batch of mutations?
> If it's longer than the config value of maxServerCacheTimeToLiveMs, you will 
> see the exception because the index metadata expired from the cache.
> 
> 
> On Sun, Sep 2, 2018 at 4:02 PM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>   Hello all,
> We use a mutable table with many indexes on it. On upserts we are getting this 
> error:
> 
> o.a.phoenix.execute.MutationState - Swallowing exception and retrying after 
> clearing meta cache on connection. java.sql.SQLException: ERROR 2008 (INT10): 
> Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 2008 (INT10): 
> Unable to find cached index metadata. key=8283602185356160420 
> region=HISTORY,D\xEF\xBF\xBD\xEF\xBF\xBDNt\x1B\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD5\x1E\x01W\x02\xEF\xBF\xBD$,1531781097243.95d19923178a7d80fa55428b97816e3f.host=cloud016,60020,1535926087741
>  Index update failed
> 
> 
> Current config:
> phoenix-4.14.0-HBase-1.4
> phoenix.coprocessor.maxServerCacheTimeToLiveMs = 6
> ALTER TABLE HISTORY SET UPDATE_CACHE_FREQUENCY=6
> 
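
A minimal sketch of the batch-size reduction, assuming plain JDBC; column names and sizes are illustrative. The goal is just that each commit completes well within phoenix.coprocessor.maxServerCacheTimeToLiveMs, so the index metadata shipped with the batch is still cached when the region server applies it:

    String[][] rows = { {"k1", "v1"}, {"k2", "v2"} }; // stand-in application data
    int batchSize = 1000; // size so one commit takes well under the cache TTL
    conn.setAutoCommit(false);
    try (java.sql.PreparedStatement ps = conn.prepareStatement(
            "UPSERT INTO HISTORY (\"id\", \"val\") VALUES (?, ?)")) { // hypothetical columns
        for (int i = 0; i < rows.length; i++) {
            ps.setString(1, rows[i][0]);
            ps.setString(2, rows[i][1]);
            ps.executeUpdate();
            if ((i + 1) % batchSize == 0) {
                conn.commit();
            }
        }
        conn.commit(); // flush the tail
    }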



Unable to find cached index metadata

2018-09-02 Thread Batyrshin Alexander
  Hello all,
We use a mutable table with many indexes on it. On upserts we are getting this error:

o.a.phoenix.execute.MutationState - Swallowing exception and retrying after 
clearing meta cache on connection. java.sql.SQLException: ERROR 2008 (INT10): 
Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 2008 (INT10): 
Unable to find cached index metadata. key=8283602185356160420 
region=HISTORY,D\xEF\xBF\xBD\xEF\xBF\xBDNt\x1B\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD5\x1E\x01W\x02\xEF\xBF\xBD$,1531781097243.95d19923178a7d80fa55428b97816e3f.host=cloud016,60020,1535926087741
 Index update failed


Current config:
phoenix-4.14.0-HBase-1.4
phoenix.coprocessor.maxServerCacheTimeToLiveMs = 6
ALTER TABLE HISTORY SET UPDATE_CACHE_FREQUENCY=6

SKIP_SCAN on variable length keys

2018-08-29 Thread Batyrshin Alexander
 Hello,
I'm wondering whether there is any issue with SKIP SCAN when variable-length 
columns are used in a composite key.
My suspicion comes from FuzzyRowFilter, which takes a fuzzy row-key template with 
fixed positions.

Re: Statements caching

2018-08-15 Thread Batyrshin Alexander
https://phoenix.apache.org/faq.html#Should_I_pool_Phoenix_JDBC_Connections 
<https://phoenix.apache.org/faq.html#Should_I_pool_Phoenix_JDBC_Connections>
If we should recreate the connection every time, then statement caching looks 
useless.
Could you please explain in detail what "it is possible that 
the underlying HBase connection is not always left in a healthy state by the 
previous user" means?

> On 28 Jul 2018, at 06:09, James Taylor  wrote:
> 
> There's no statement caching available in Phoenix. That would be a good 
> contribution, though.
> Thanks,
> James
> 
> On Thu, Jul 26, 2018 at 10:45 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hi all,
> I'm wondering how to enable statement caching in the Phoenix JDBC driver.
> Is there anything like "cachePrepStmts"?
> 
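
The FAQ's recommendation in sketch form: a Phoenix connection is a thin object over a shared, cached HBase connection, so open a fresh one per unit of work instead of pooling (URL, query, and bind value are illustrative):

    try (java.sql.Connection conn =
             java.sql.DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
         java.sql.PreparedStatement ps =
             conn.prepareStatement("SELECT \"c\" FROM TABLE WHERE \"p\" = ?")) {
        ps.setString(1, "123123123");
        try (java.sql.ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    } // closing is cheap; the underlying HBase connection stays cached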



Re: All the mapping column values are null when creating a table (including schema) in Phoenix 4.12 for an existing HBase table

2018-08-12 Thread Batyrshin Alexander
If you already have an HBase table, then set COLUMN_ENCODED_BYTES = NONE.
See http://phoenix.apache.org/namspace_mapping.html for details.

> On 12 Aug 2018, at 10:34, && <331913...@qq.com> wrote:
> 
> Hi there,
>   I have a question about a Phoenix use case. When I create a table including 
> a schema (not a view) to map an existing HBase table, I get a list of null 
> values when I use sqlline to "select * from t1". At first I guessed it was 
> probably a bug, but I immediately proved my thought wrong, because it works 
> correctly when I create a table without a schema to map the HBase table; it 
> returns all the correct results.
>   I'm unsure whether this behavior in Phoenix is a bug, why it happens, and 
> how to resolve it. Thanks.
> 
>   Hbase 1.2
>   Phoenix 4.12
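
A minimal sketch of the mapping DDL suggested above. Names and the column-family 
layout are illustrative; the key point is disabling column encoding (0 is the 
numeric equivalent of the NONE mentioned in the reply) so Phoenix reads the 
qualifiers the existing rows were written with:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MapExistingTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Existing HBase rows carry plain column qualifiers, so turn off
            // the column encoding introduced in Phoenix 4.10 when mapping.
            stmt.execute("CREATE TABLE \"t1\" ("
                    + "pk VARCHAR PRIMARY KEY, "
                    + "\"cf\".\"col1\" VARCHAR) "
                    + "COLUMN_ENCODED_BYTES = 0");
        }
    }
}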



Statements caching

2018-07-26 Thread Batyrshin Alexander
 Hi all,
I'm wondering how to enable statement caching in the Phoenix JDBC Driver.
Is there anything like "cachePrepStmts"?


Re: Split and distribute regions of SYSTEM.STATS table

2018-04-22 Thread Batyrshin Alexander
If all stats for a given table must stay on the same region, there is no benefit 
to splitting.

Another question: is it ok to set 'IN_MEMORY' => 'true' for CF of SYSTEM.* 
tables?

> On 20 Apr 2018, at 23:39, James Taylor <jamestay...@apache.org> wrote:
> 
> Thanks for bringing this to our attention. There's a bug here in that the 
> SYSTEM.STATS table has a custom split policy that prevents splitting from 
> occurring (PHOENIX-4700). We'll get a fix out in 4.14, but in the meantime 
> it's safe to split the table, as long as all stats for a given table are on 
> the same region.
> 
> James
> 
> On Fri, Apr 20, 2018 at 1:37 PM, James Taylor <jamestay...@apache.org 
> <mailto:jamestay...@apache.org>> wrote:
> Thanks for bringing this to our attention. There's a bug here in that the 
> SYSTEM.STATS
> 
> On Wed, Apr 18, 2018 at 9:59 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> I've discovered that SYSTEM.STATS has only 1 region, with a size of 3.25 GB. Is 
> it OK to split it and distribute it over different region servers?
> 
> 



Split and distribute regions of SYSTEM.STATS table

2018-04-18 Thread Batyrshin Alexander
 Hello,
I've discovered that SYSTEM.STATS has only 1 region, with a size of 3.25 GB. Is it 
OK to split it and distribute it over different region servers?

Re: Delay between put from HBase shell and result in SELECT from Phoenix

2017-08-25 Thread Batyrshin Alexander
Yep, our test HBase cluster was misconfigured (ntpd was disabled). After time 
synchronisation I don't observe any delay between shell put and Phoenix select.

> On 25 Aug 2017, at 20:54, James Taylor <jamestay...@apache.org> wrote:
> 
> Phoenix retrieves the server timestamp from the region server that hosts the 
> system catalog table and uses that as the timestamp of the puts when you do 
> an UPSERT VALUE (FYI, this behavior will change in 4.12 and we'll use latest 
> timestamp everywhere). I suspect the puts you're doing are going to a 
> different region server and the clocks on the servers in your cluster are not 
> synchronized.
> 
> If that's the case, the best option is to make sure your clocks are 
> synchronized as that'll prevent other weird, unexpected behavior. If that's 
> not an option one workaround would be to set the CURRENT_SCN property on your 
> connection to HConstants.LATEST_TIMESTAMP like this:
> 
> props.put(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
> Long.toString(HConstants.LATEST_TIMESTAMP));
> conn = DriverManager.getConnection(getUrl(), props);
> 
> 
> 
> 
> On Fri, Aug 25, 2017 at 10:14 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
> It's coming from the scan time range. If I run sqlline with 'currentSCN' from 
> the future, then the select retrieves fresh data immediately. 
> 
> We already have software that writes with the HBase API. Now we are building a 
> client that works with data from HBase via Phoenix.
> 
> 
> 
>> On 25 Aug 2017, at 19:35, Josh Elser <els...@apache.org 
>> <mailto:els...@apache.org>> wrote:
>> 
>> Calls to put in the HBase shell, to the best of my knowledge, are 
>> synchronous. You should not have control returned to you until the update 
>> was committed by the RegionServers. HBase's data guarantees are that once a 
>> call to write data returns to you, all other readers *must* be able to see 
>> that update.
>> 
>> I'm not sure where this 3-5 second delay you describe is coming from.
>> 
>> Regardless, why are you writing data to HBase directly and circumventing the 
>> APIs to write data via Phoenix? If you want to access your data via Phoenix, 
>> you're going to run into less pain if you work completely at the Phoenix API 
>> level, (tl;dr use UPSERT to write data)
>> 
>> On 8/24/17 2:58 PM, Batyrshin Alexander wrote:
>>> Here is example:
>>> CREATE TABLE IF NOT EXISTS test (
>>>   k VARCHAR NOT NULL,
>>>   v VARCHAR,
>>>   CONSTRAINT my_pk PRIMARY KEY (k)
>>> );
>>> 0: jdbc:phoenix:> upsert into test(k,v) values ('1', 'a');
>>> 1 row affected (0.042 seconds)
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | a  |
>>> +++
>>> Then:
>>> hbase(main):014:0> put 'TEST', '1', '0:V', 'b'
>>> 0 row(s) in 0.0100 seconds
>>> Result in phoenix will be available after ~ 3-5 seconds:
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | a  |
>>> +++
>>> 1 row selected (0.015 seconds)
>>> ... 5 seconds later
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | b  |
>>> +++
>>> 1 row selected (0.026 seconds)
>>>> On 24 Aug 2017, at 21:38, Batyrshin Alexander <0x62...@gmail.com 
>>>> <mailto:0x62...@gmail.com> <mailto:0x62...@gmail.com 
>>>> <mailto:0x62...@gmail.com>>> wrote:
>>>> 
>>>>  Hello,
>>>> 
>>>> How to decrease or even eliminate delay between direct HBase put (for 
>>>> example from HBase shell) and SELECT from Phoenix?
>>>> 
>>>> My table has only 1 VERSION and does not use any block cache ( {NAME => 
>>>> 'invoice', COMPRESSION => 'LZO', BLOCKCACHE => 'false'} ), so I do not 
>>>> understand where the previous value for SELECT comes from.
> 
> 



Re: Delay between put from HBase shell and result in SELECT from Phoenix

2017-08-25 Thread Batyrshin Alexander
I've already tested currentSCN and I can confirm that delays are gone.

Going to check clocks on cluster nodes...


> On 25 Aug 2017, at 20:54, James Taylor <jamestay...@apache.org> wrote:
> 
> Phoenix retrieves the server timestamp from the region server that hosts the 
> system catalog table and uses that as the timestamp of the puts when you do 
> an UPSERT VALUE (FYI, this behavior will change in 4.12 and we'll use latest 
> timestamp everywhere). I suspect the puts you're doing are going to a 
> different region server and the clocks on the servers in your cluster are not 
> synchronized.
> 
> If that's the case, the best option is to make sure your clocks are 
> synchronized as that'll prevent other weird, unexpected behavior. If that's 
> not an option one workaround would be to set the CURRENT_SCN property on your 
> connection to HConstants.LATEST_TIMESTAMP like this:
> 
> props.put(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
> Long.toString(HConstants.LATEST_TIMESTAMP));
> conn = DriverManager.getConnection(getUrl(), props);
> 
> 
> 
> 
> On Fri, Aug 25, 2017 at 10:14 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
> It's coming from the scan time range. If I run sqlline with 'currentSCN' from 
> the future, then the select retrieves fresh data immediately. 
> 
> We already have software that writes with the HBase API. Now we are building a 
> client that works with data from HBase via Phoenix.
> 
> 
> 
>> On 25 Aug 2017, at 19:35, Josh Elser <els...@apache.org 
>> <mailto:els...@apache.org>> wrote:
>> 
>> Calls to put in the HBase shell, to the best of my knowledge, are 
>> synchronous. You should not have control returned to you until the update 
>> was committed by the RegionServers. HBase's data guarantees are that once a 
>> call to write data returns to you, all other readers *must* be able to see 
>> that update.
>> 
>> I'm not sure where this 3-5 second delay you describe is coming from.
>> 
>> Regardless, why are you writing data to HBase directly and circumventing the 
>> APIs to write data via Phoenix? If you want to access your data via Phoenix, 
>> you're going to run into less pain if you work completely at the Phoenix API 
>> level, (tl;dr use UPSERT to write data)
>> 
>> On 8/24/17 2:58 PM, Batyrshin Alexander wrote:
>>> Here is example:
>>> CREATE TABLE IF NOT EXISTS test (
>>>   k VARCHAR NOT NULL,
>>>   v VARCHAR,
>>>   CONSTRAINT my_pk PRIMARY KEY (k)
>>> );
>>> 0: jdbc:phoenix:> upsert into test(k,v) values ('1', 'a');
>>> 1 row affected (0.042 seconds)
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | a  |
>>> +++
>>> Then:
>>> hbase(main):014:0> put 'TEST', '1', '0:V', 'b'
>>> 0 row(s) in 0.0100 seconds
>>> Result in phoenix will be available after ~ 3-5 seconds:
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | a  |
>>> +++
>>> 1 row selected (0.015 seconds)
>>> ... 5 seconds later
>>> 0: jdbc:phoenix:> select * from test;
>>> +++
>>> | K  | V  |
>>> +++
>>> | 1  | b  |
>>> +++
>>> 1 row selected (0.026 seconds)
>>>> On 24 Aug 2017, at 21:38, Batyrshin Alexander <0x62...@gmail.com 
>>>> <mailto:0x62...@gmail.com> <mailto:0x62...@gmail.com 
>>>> <mailto:0x62...@gmail.com>>> wrote:
>>>> 
>>>>  Hello,
>>>> 
>>>> How to decrease or even eliminate delay between direct HBase put (for 
>>>> example from HBase shell) and SELECT from Phoenix?
>>>> 
>>>> My table has only 1 VERSION and does not use any block cache ( {NAME => 
>>>> 'invoice', COMPRESSION => 'LZO', BLOCKCACHE => 'false'} ), so I do not 
>>>> understand where the previous value for SELECT comes from.
> 
> 



Re: Delay between put from HBase shell and result in SELECT from Phoenix

2017-08-25 Thread Batyrshin Alexander
It's coming from the scan time range. If I run sqlline with 'currentSCN' from the 
future, then the select retrieves fresh data immediately. 

We already have software that writes with the HBase API. Now we are building a 
client that works with data from HBase via Phoenix.


> On 25 Aug 2017, at 19:35, Josh Elser <els...@apache.org> wrote:
> 
> Calls to put in the HBase shell, to the best of my knowledge, are 
> synchronous. You should not have control returned to you until the update was 
> committed by the RegionServers. HBase's data guarantees are that once a call 
> to write data returns to you, all other readers *must* be able to see that 
> update.
> 
> I'm not sure where this 3-5 second delay you describe is coming from.
> 
> Regardless, why are you writing data to HBase directly and circumventing the 
> APIs to write data via Phoenix? If you want to access your data via Phoenix, 
> you're going to run into less pain if you work completely at the Phoenix API 
> level, (tl;dr use UPSERT to write data)
> 
> On 8/24/17 2:58 PM, Batyrshin Alexander wrote:
>> Here is example:
>> CREATE TABLE IF NOT EXISTS test (
>>   k VARCHAR NOT NULL,
>>   v VARCHAR,
>>   CONSTRAINT my_pk PRIMARY KEY (k)
>> );
>> 0: jdbc:phoenix:> upsert into test(k,v) values ('1', 'a');
>> 1 row affected (0.042 seconds)
>> 0: jdbc:phoenix:> select * from test;
>> +++
>> | K  | V  |
>> +++
>> | 1  | a  |
>> +++
>> Then:
>> hbase(main):014:0> put 'TEST', '1', '0:V', 'b'
>> 0 row(s) in 0.0100 seconds
>> Result in phoenix will be available after ~ 3-5 seconds:
>> 0: jdbc:phoenix:> select * from test;
>> +++
>> | K  | V  |
>> +++
>> | 1  | a  |
>> +++
>> 1 row selected (0.015 seconds)
>> ... 5 seconds later
>> 0: jdbc:phoenix:> select * from test;
>> +++
>> | K  | V  |
>> +++
>> | 1  | b  |
>> +++
>> 1 row selected (0.026 seconds)
>>> On 24 Aug 2017, at 21:38, Batyrshin Alexander <0x62...@gmail.com 
>>> <mailto:0x62...@gmail.com> <mailto:0x62...@gmail.com 
>>> <mailto:0x62...@gmail.com>>> wrote:
>>> 
>>>  Hello,
>>> 
>>> How to decrease or even eliminate delay between direct HBase put (for 
>>> example from HBase shell) and SELECT from Phoenix?
>>> 
>>> My table has only 1 VERSION and does not use any block cache ( {NAME => 
>>> 'invoice', COMPRESSION => 'LZO', BLOCKCACHE => 'false'} ), so I do not 
>>> understand where the previous value for SELECT comes from.



Re: Delay between put from HBase shell and result in SELECT from Phoenix

2017-08-24 Thread Batyrshin Alexander
Here is an example:

CREATE TABLE IF NOT EXISTS test (
  k VARCHAR NOT NULL,
  v VARCHAR,
  CONSTRAINT my_pk PRIMARY KEY (k)
);

0: jdbc:phoenix:> upsert into test(k,v) values ('1', 'a');
1 row affected (0.042 seconds)
0: jdbc:phoenix:> select * from test;
+++
| K  | V  |
+++
| 1  | a  |
+++


Then:

hbase(main):014:0> put 'TEST', '1', '0:V', 'b'
0 row(s) in 0.0100 seconds

Result in phoenix will be available after ~ 3-5 seconds:

0: jdbc:phoenix:> select * from test;
+++
| K  | V  |
+++
| 1  | a  |
+++
1 row selected (0.015 seconds)

... 5 seconds later

0: jdbc:phoenix:> select * from test;
+++
| K  | V  |
+++
| 1  | b  |
+++
1 row selected (0.026 seconds)


> On 24 Aug 2017, at 21:38, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
>  Hello,
> 
> How to decrease or even eliminate delay between direct HBase put (for example 
> from HBase shell) and SELECT from Phoenix?
> 
> My table has only 1 VERSION and does not use any block cache ( {NAME => 
> 'invoice', COMPRESSION => 'LZO', BLOCKCACHE => 'false'} ), so I do not 
> understand where the previous value for SELECT comes from.



Delay between put from HBase shell and result in SELECT from Phoenix

2017-08-24 Thread Batyrshin Alexander
 Hello,

How to decrease or even eliminate delay between direct HBase put (for example 
from HBase shell) and SELECT from Phoenix?

My table has only 1 VERSION and does not use any block cache ( {NAME => 
'invoice', COMPRESSION => 'LZO', BLOCKCACHE => 'false'} ), so I do not 
understand where the previous value for SELECT comes from.
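
For reference, here is a self-contained version of the CurrentSCN workaround James 
suggests in the replies above. A sketch only, since synchronizing the cluster 
clocks turned out to be the real fix in this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.hadoop.hbase.HConstants;
import org.apache.phoenix.util.PhoenixRuntime;

public class LatestTimestampConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Scan at HBase's latest timestamp so rows written by raw puts with
        // a newer cell timestamp become visible to SELECTs immediately.
        props.put(PhoenixRuntime.CURRENT_SCN_ATTRIB,
                Long.toString(HConstants.LATEST_TIMESTAMP));
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:127.0.0.1", props)) {
            // run SELECTs here
        }
    }
}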

Re: Metrics and Phoenix

2017-07-26 Thread Batyrshin Alexander

> On 26 Jul 2017, at 12:49, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
>  Hello,
> I'm collecting metrics from region servers - 
> readRequestCount/writeRequestCount.
> The problem is that a Phoenix SELECT does not affect 
> readRequestCount/writeRequestCount.
> Is this normal? If so, which HBase metrics are affected by SELECT 
> statements?


I should be more specific: I'm issuing a 'SELECT count(*)' statement.
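
If the region-server counters don't move, the client-side request metrics described 
on the Phoenix metrics page are an alternative. A sketch, under the assumption that 
PhoenixRuntime.getRequestReadMetrics and the phoenix.query.request.metrics.enabled 
property behave as documented for 4.x (the table name is illustrative and exact 
signatures have varied across releases):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;
import org.apache.phoenix.util.PhoenixRuntime;

public class RequestMetrics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Collect per-request metrics on the client instead of relying on
        // region-server counters.
        props.setProperty("phoenix.query.request.metrics.enabled", "true");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1", props);
             ResultSet rs = conn.createStatement()
                     .executeQuery("SELECT count(*) FROM my_table")) {
            while (rs.next()) {
                System.out.println("count = " + rs.getLong(1));
            }
            // Per-table read metrics for this request (scan bytes, task times, ...)
            System.out.println(PhoenixRuntime.getRequestReadMetrics(rs));
        }
    }
}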

Metrics and Phoenix

2017-07-26 Thread Batyrshin Alexander
  Hello,
I'm collecting metrics from region servers - readRequestCount/writeRequestCount.
The problem is that a Phoenix SELECT does not affect 
readRequestCount/writeRequestCount.
Is this normal? If so, which HBase metrics are affected by SELECT statements?

Re: How to recover SYSTEM.STATS?

2017-07-22 Thread Batyrshin Alexander
Thank you

> On 23 Jul 2017, at 06:09, venk sham <shamv...@gmail.com> wrote:
> 
> Running a major compaction will rebuild the stats to some extent, and as you 
> keep using the tables they will get repopulated
> 
> On Jul 22, 2017 7:40 PM, "Batyrshin Alexander" <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> We accidentally lost SYSTEM.STATS. How to recover/recreate it?
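
Besides waiting for compactions, guideposts for a table can be recollected on 
demand with the UPDATE STATISTICS command; a minimal sketch (table name 
illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RebuildStats {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Recollects guideposts for the table, its indexes and views,
            // repopulating SYSTEM.STATS without waiting for a major compaction.
            stmt.execute("UPDATE STATISTICS my_table ALL");
        }
    }
}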



How to recover SYSTEM.STATS?

2017-07-22 Thread Batyrshin Alexander
 Hello,
We accidentally lost SYSTEM.STATS. How to recover/recreate it?


Re: Can't run map-reduce index builder because my view/idx is lower case

2017-06-22 Thread Batyrshin Alexander
 Great, I will try.
> On 23 Jun 2017, at 00:24, Sergey Soldatov <sergeysolda...@gmail.com> wrote:
> 
> You may try to build Phoenix with patch from  PHOENIX-3710 
> <https://issues.apache.org/jira/browse/PHOENIX-3710> applied. That should fix 
> the problem, I believe. 
> Thanks,
> Sergey 
> 
> On Mon, Jun 19, 2017 at 11:28 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
> Hello again,
> 
> Could you please help me run the map-reduce index build for a view with a 
> lower-case name?
> 
> Here is my test try on Phoenix-4.8.2:
> 
> CREATE TABLE "table" (
> c1 varchar,
> c2 varchar,
> c3 varchar,
> CONSTRAINT pk PRIMARY KEY (c1,c2,c3)
> )
> 
> CREATE VIEW "table_view"
> AS SELECT * FROM "table" WHERE c3 = 'X';
> 
> CREATE INDEX "table_view_idx" ON "table_view" (c2, c1) ASYNC;
> 
> sudo -u hadoop ./bin/hbase org.apache.phoenix.mapreduce.index.IndexTool 
> --data-table '"table_view"' --index-table '"table_view_idx"' --output-path 
> ASYNC_IDX_HFILES
> 
> 2017-06-19 21:27:17,716 ERROR [main] index.IndexTool: An exception occurred 
> while performing the indexing job: IllegalArgumentException:  TABLE_VIEW_IDX 
> is not an index table for TABLE_VIEW  at:
> java.lang.IllegalArgumentException:  TABLE_VIEW_IDX is not an index table for 
> TABLE_VIEW
> at 
> org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:190)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 
> org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:394)
> 
> 
>> On 17 Jun 2017, at 03:55, Batyrshin Alexander <0x62...@gmail.com 
>> <mailto:0x62...@gmail.com>> wrote:
>> 
>>  Hello,
>> I'm trying to build an ASYNC index following the example from 
>> https://phoenix.apache.org/secondary_indexing.html 
>> <https://phoenix.apache.org/secondary_indexing.html>
>> My issue is that my view name and index name are lower case, so map-reduce 
>> raises an error:
>> 
>> 2017-06-17 03:45:56,506 ERROR [main] index.IndexTool: An exception occurred 
>> while performing the indexing job: IllegalArgumentException:  
>> INVOICES_V4_INDEXED_FUZZY_IDX is not an index table for 
>> INVOICES_V4_INDEXED_FUZZY
>> 
> 
> 



Re: Can't run map-reduce index builder because my view/idx is lower case

2017-06-19 Thread Batyrshin Alexander
Hello again,

Could you please help me run the map-reduce index build for a view with a 
lower-case name?

Here is my test try on Phoenix-4.8.2:

CREATE TABLE "table" (
c1 varchar,
c2 varchar,
c3 varchar,
CONSTRAINT pk PRIMARY KEY (c1,c2,c3)
)

CREATE VIEW "table_view"
AS SELECT * FROM "table" WHERE c3 = 'X';

CREATE INDEX "table_view_idx" ON "table_view" (c2, c1) ASYNC;

sudo -u hadoop ./bin/hbase org.apache.phoenix.mapreduce.index.IndexTool 
--data-table '"table_view"' --index-table '"table_view_idx"' --output-path 
ASYNC_IDX_HFILES

2017-06-19 21:27:17,716 ERROR [main] index.IndexTool: An exception occurred 
while performing the indexing job: IllegalArgumentException:  TABLE_VIEW_IDX is 
not an index table for TABLE_VIEW  at:
java.lang.IllegalArgumentException:  TABLE_VIEW_IDX is not an index table for 
TABLE_VIEW
at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:190)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:394)


> On 17 Jun 2017, at 03:55, Batyrshin Alexander <0x62...@gmail.com> wrote:
> 
>  Hello,
> I'm trying to build an ASYNC index following the example from 
> https://phoenix.apache.org/secondary_indexing.html 
> <https://phoenix.apache.org/secondary_indexing.html>
> My issue is that my view name and index name are lower case, so map-reduce 
> raises an error:
> 
> 2017-06-17 03:45:56,506 ERROR [main] index.IndexTool: An exception occurred 
> while performing the indexing job: IllegalArgumentException:  
> INVOICES_V4_INDEXED_FUZZY_IDX is not an index table for 
> INVOICES_V4_INDEXED_FUZZY
> 



Can't run map-reduce index builder because my view/idx is lower case

2017-06-16 Thread Batyrshin Alexander
 Hello,
I'm trying to build an ASYNC index following the example from 
https://phoenix.apache.org/secondary_indexing.html 

My issue is that my view name and index name are lower case, so map-reduce raises 
an error:

2017-06-17 03:45:56,506 ERROR [main] index.IndexTool: An exception occurred 
while performing the indexing job: IllegalArgumentException:  
INVOICES_V4_INDEXED_FUZZY_IDX is not an index table for 
INVOICES_V4_INDEXED_FUZZY



Re: Phoenix index update on direct HBase row update

2017-06-15 Thread Batyrshin Alexander
What metadata is needed? Could you point me to an example or to the relevant 
Phoenix API source code?

> On 16 Jun 2017, at 03:41, James Taylor <jamestay...@apache.org> wrote:
> 
> No, not unless you write the code that keeps the index in sync yourself, 
> attach the metadata needed by index maintenance coprocessor in your update 
> code, or use the Phoenix update APIs which do this for you.
> 
> On Thu, Jun 15, 2017 at 5:29 PM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  We update our HBase table directly, without Phoenix. Is it possible to keep 
> the indexes in sync with these updates?
> 
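
James's last option — routing writes through the Phoenix update APIs so the 
index-maintenance coprocessor receives its metadata — looks roughly like this 
(a sketch; table and column names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpsertThroughPhoenix {
    // Writing through the Phoenix JDBC layer instead of raw HBase puts
    // attaches the index-maintenance metadata to the mutations, so any
    // secondary indexes on the table are updated automatically.
    static void put(String id, String val) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO my_table (id, val) VALUES (?, ?)")) {
            ps.setString(1, id);
            ps.setString(2, val);
            ps.executeUpdate();
            conn.commit();
        }
    }
}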



Phoenix index update on direct HBase row update

2017-06-15 Thread Batyrshin Alexander
 We update our HBase table directly, without Phoenix. Is it possible to keep the 
indexes in sync with these updates?

Re: Partial indexes

2017-06-08 Thread Batyrshin Alexander
Oh, cool.
Thank you.

> On 8 Jun 2017, at 17:52, James Taylor <jamestay...@apache.org> wrote:
> 
> Hi Alexander,
> We support indexes on views which is essentially what you're asking for.
> Thanks,
> James
> 
> On Thu, Jun 8, 2017 at 1:28 AM Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> Are there any plans to implement partial indexes (CREATE INDEX ... WHERE 
> predicate)?
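
Spelled out, the view-based equivalent James describes looks like this (a sketch; 
names and the predicate are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PartialIndexViaView {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // The view's WHERE clause plays the role of the partial-index predicate
            stmt.execute("CREATE VIEW active_orders AS "
                    + "SELECT * FROM orders WHERE status = 'ACTIVE'");
            // The index covers only rows matching the view predicate
            stmt.execute("CREATE INDEX active_orders_idx ON active_orders (customer_id)");
        }
    }
}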



Partial indexes

2017-06-08 Thread Batyrshin Alexander
 Hello,
Are there any plans to implement partial indexes (CREATE INDEX ... WHERE 
predicate)?

Re: Row timestamp

2017-03-11 Thread Batyrshin Alexander
So the main idea behind the "Row Timestamp" feature is to give the ability to set 
the HBase cell timestamp via UPSERT?
Is it possible to get the cell timestamp for an already created HBase table whose 
row keys do not contain a timestamp?

For example, I tried to execute the query from that page:

0: jdbc:phoenix:> CREATE TABLE DESTINATION_METRICS_TABLE (CREATED_DATE NOT NULL 
DATE, METRIC_ID NOT NULL CHAR(15), METRIC_VALUE LONG CONSTRAINT PK PRIMARY 
KEY(CREATED_DATE ROW_TIMESTAMP, METRIC_ID)) SALT_BUCKETS = 8;
Error: ERROR 601 (42P00): Syntax error. Encountered "NOT" at line 1, column 54. 
(state=42P00,code=601)

The fixed query is: CREATE TABLE DESTINATION_METRICS_TABLE (CREATED_DATE DATE NOT 
NULL , METRIC_ID CHAR(15) NOT NULL , METRIC_VALUE UNSIGNED_LONG CONSTRAINT PK 
PRIMARY KEY(CREATED_DATE ROW_TIMESTAMP, METRIC_ID)) SALT_BUCKETS = 8;
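
Samarth's point below — that an explicitly supplied CREATED_DATE, rather than the 
server clock, becomes the HBase cell timestamp — can be observed with an upsert 
like this (a sketch; the values are illustrative):

import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ExplicitRowTimestamp {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO DESTINATION_METRICS_TABLE "
                     + "(CREATED_DATE, METRIC_ID, METRIC_VALUE) VALUES (?, ?, ?)")) {
            // Because CREATED_DATE is declared ROW_TIMESTAMP, this value (not
            // the server clock) becomes the HBase cell timestamp of the row.
            ps.setDate(1, Date.valueOf("2017-03-01"));
            ps.setString(2, "metric-00000001");
            ps.setLong(3, 42L);
            ps.executeUpdate();
            conn.commit();
        }
    }
}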

> On 10 Mar 2017, at 19:39, Samarth Jain <samarth.j...@gmail.com> wrote:
> 
> This is because you are using now() for created. If you used a different date 
> then with TEST_ROW_TIMESTAMP1, the cell timestamp would be that date, whereas 
> with TEST_ROW_TIMESTAMP2 it would be the server-side time.
> 
> Also, which examples are broken on the page?
> 
> On Thu, Mar 9, 2017 at 11:28 AM, Batyrshin Alexander <0x62...@gmail.com 
> <mailto:0x62...@gmail.com>> wrote:
>  Hello,
> I'm trying to understand what exactly a Phoenix row timestamp is.
> I created 2 tables for a test:
> 
> CREATE TABLE test_row_timestamp1(
> id varchar NOT NULL,
> created TIMESTAMP NOT NULL,
> foo varchar,
> CONSTRAINT PK PRIMARY KEY( id, created ROW_TIMESTAMP )
> )
> 
> CREATE TABLE test_row_timestamp2(
> id varchar NOT NULL,
> created TIMESTAMP NOT NULL,
> foo varchar,
> CONSTRAINT PK PRIMARY KEY( id, created )
> )
> 
> upsert into test_row_timestamp1 (id, created, foo) values ('1', now(), 'bar');
> upsert into test_row_timestamp2 (id, created, foo) values ('1', now(), 'bar');
> 
> And result is:
> 
> hbase(main):004:0> scan 'TEST_ROW_TIMESTAMP1', { LIMIT=>10}
> ROW  COLUMN+CELL
>  1\x00\x80\x00\x01Z\xB4\x80:6\x00\x00\x00\x00
> column=0:FOO, timestamp=1489086986806, value=bar
>  1\x00\x80\x00\x01Z\xB4\x80:6\x00\x00\x00\x00column=0:_0, 
> timestamp=1489086986806, value=x
> 
> hbase(main):005:0> scan 'TEST_ROW_TIMESTAMP2', { LIMIT=>10}
> ROW  COLUMN+CELL
>  1\x00\x80\x00\x01Z\xB4\x80M\xE6\x00\x00\x00\x00 
> column=0:FOO, timestamp=1489086991848, value=bar
>  1\x00\x80\x00\x01Z\xB4\x80M\xE6\x00\x00\x00\x00 column=0:_0, 
> timestamp=1489086991848, value=x
> 
> Both tables have the same row key pattern: id + 0x00 + timestamp.
> I expected test_row_timestamp1 to use the native HBase timestamp that is part of 
> the "real" HBase key.
> 
> 
> PS. Examples at https://phoenix.apache.org/rowtimestamp.html 
> <https://phoenix.apache.org/rowtimestamp.html> are broken
> 



Row timestamp

2017-03-09 Thread Batyrshin Alexander
 Hello,
I'm trying to understand what exactly a Phoenix row timestamp is.
I created 2 tables for a test:

CREATE TABLE test_row_timestamp1(
id varchar NOT NULL,
created TIMESTAMP NOT NULL,
foo varchar,
CONSTRAINT PK PRIMARY KEY( id, created ROW_TIMESTAMP )
)

CREATE TABLE test_row_timestamp2(
id varchar NOT NULL,
created TIMESTAMP NOT NULL,
foo varchar,
CONSTRAINT PK PRIMARY KEY( id, created )
)

upsert into test_row_timestamp1 (id, created, foo) values ('1', now(), 'bar');
upsert into test_row_timestamp2 (id, created, foo) values ('1', now(), 'bar');

And result is:

hbase(main):004:0> scan 'TEST_ROW_TIMESTAMP1', { LIMIT=>10}
ROW  COLUMN+CELL
 1\x00\x80\x00\x01Z\xB4\x80:6\x00\x00\x00\x00column=0:FOO, 
timestamp=1489086986806, value=bar
 1\x00\x80\x00\x01Z\xB4\x80:6\x00\x00\x00\x00column=0:_0, 
timestamp=1489086986806, value=x

hbase(main):005:0> scan 'TEST_ROW_TIMESTAMP2', { LIMIT=>10}
ROW  COLUMN+CELL
 1\x00\x80\x00\x01Z\xB4\x80M\xE6\x00\x00\x00\x00 column=0:FOO, 
timestamp=1489086991848, value=bar
 1\x00\x80\x00\x01Z\xB4\x80M\xE6\x00\x00\x00\x00 column=0:_0, 
timestamp=1489086991848, value=x

Both tables have the same row key pattern: id + 0x00 + timestamp.
I expected test_row_timestamp1 to use the native HBase timestamp that is part of 
the "real" HBase key.


PS. The examples at https://phoenix.apache.org/rowtimestamp.html are broken

Re: How to recreate table?

2017-01-16 Thread Batyrshin Alexander
I've discovered that I can simply delete the schema, like this:
delete from SYSTEM.CATALOG where "TABLE_NAME" = 'my_table_name';
Does this action have any consequences?

> On 16 Jan 2017, at 19:51, Josh Elser <josh.el...@gmail.com> wrote:
> 
> You could create a new table with the same schema and then flip the 
> underlying table out.
> 
> * Rename the existing table to "foo"
> * Create your table via Phoenix with correct schema and desired name
> * Delete underlying HBase table that Phoenix created
> * Rename "foo" to the desired name
> 
> I _think_ that would work.
> 
> Batyrshin Alexander wrote:
>>  Hello,
>> I've recreated an HBase table with data, but Phoenix doesn't work on it, 
>> although I still see this table in Phoenix.
>> How can I recreate the Phoenix table now?
>> As far as I know, "drop table ... ; create table ..." in Phoenix will destroy 
>> my HBase table along with its data.
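
HBase has no single rename call, so Josh's rename steps are usually done with the 
snapshot/clone pattern; a hedged Java sketch using the HBase Admin API (names are 
illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RenameTable {
    // HBase has no rename: snapshot the old table, clone the snapshot
    // under the new name, then drop the original.
    static void rename(String from, String to) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName src = TableName.valueOf(from);
            String snap = from + "_snap";
            admin.disableTable(src);
            admin.snapshot(snap, src);
            admin.cloneSnapshot(snap, TableName.valueOf(to));
            admin.deleteSnapshot(snap);
            admin.deleteTable(src);
        }
    }
}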



How to recreate table?

2017-01-16 Thread Batyrshin Alexander
 Hello,
I've recreated an HBase table with data, but Phoenix doesn't work on it, although 
I still see this table in Phoenix.
How can I recreate the Phoenix table now?
As far as I know, "drop table ... ; create table ..." in Phoenix will destroy my 
HBase table along with its data.