[jira] [Updated] (PHOENIX-5140) Index Tool with schema table undefined

2019-02-13 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

张延召 updated PHOENIX-5140:
-
Description: 
First I create the table and insert the data:

create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because Kerberos is enabled, I need to kinit with the HBase principal first, then execute the following command:

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
 Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
 at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
 at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
 at 
org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
 at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
 ... 9 more

I can query this table and have access to it; these statements all work well:

select * from DMP.DMP_INDEX_TEST2;
select * from DMP.TMP_INDEX_DMP_TEST2;
drop table DMP.DMP_INDEX_TEST2;

 

But why does my MR task fail with this error? Any suggestions?
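One thing worth checking (an assumption on my part, not confirmed in this thread): if namespace mapping is enabled on the cluster, the MR mappers also need phoenix.schema.isNamespaceMappingEnabled in the client configuration picked up via HADOOP_CLASSPATH; if the job's hbase-site.xml is missing it, DMP.DMP_INDEX_TEST2 may fail to resolve inside the mappers even though sqlline works. The hbase-site.xml entry to verify looks like:

```xml
<!-- Hypothetical fix to verify: the value must match the server-side setting. -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```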

[jira] [Created] (PHOENIX-5140) Index Tool with schema table undefined

2019-02-13 Thread JIRA
张延召 created PHOENIX-5140:


 Summary: Index Tool with schema table undefined
 Key: PHOENIX-5140
 URL: https://issues.apache.org/jira/browse/PHOENIX-5140
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
 Environment: My HDP version is 3.0.0.0, HBase version is 2.0.0, Phoenix 
version is 5.0.0, and Hadoop version is 3.1.0
Reporter: 张延召


First I create the table and insert the data:


create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');


The asynchronous index is then created:


create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;


Because Kerberos is enabled, I need to kinit with the HBase principal first, then execute the following command:


HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:


Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
 at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
 at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
 at 
org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
 at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
 ... 9 more


I can query this table and have access to it; these statements all work well:


select count(*) from DMP.DMP_INDEX_TEST2;
select count(*) from DMP.TMP_INDEX_DMP_TEST2;
drop table DMP.DMP_INDEX_TEST2;

 

But why does my MR task fail with this error? Any suggestions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, in order to differentiate the index rebuilder retries (UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to check for region closing at least once before committing the batch:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i++ < 30);
{code}
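To make the difference concrete, here is a minimal standalone model (not Phoenix code; forLoopChecks/doWhileChecks are made-up names): with blockingMemstoreSize == -1, the original for-loop guard is false on entry so checkForRegionClosing() is never reached, while the do-while shape checks at least once.

```java
public class LoopCheckDemo {
    // Counts how often the for-loop shape would call checkForRegionClosing().
    static int forLoopChecks(long blockingMemstoreSize, long memstoreSize) {
        int checks = 0;
        for (int i = 0; blockingMemstoreSize > 0 && memstoreSize > blockingMemstoreSize && i < 30; i++) {
            checks++;          // stands in for checkForRegionClosing()
            memstoreSize /= 2; // pretend the memstore shrinks between checks
        }
        return checks;
    }

    // Counts how often the do-while shape would call checkForRegionClosing().
    static int doWhileChecks(long blockingMemstoreSize, long memstoreSize) {
        int checks = 0;
        int i = 0;
        do {
            checks++;
            memstoreSize /= 2;
        } while (blockingMemstoreSize > 0 && memstoreSize > blockingMemstoreSize && i++ < 30);
        return checks;
    }

    public static void main(String[] args) {
        System.out.println(forLoopChecks(-1, 1000)); // rebuild path: 0 checks
        System.out.println(doWhileChecks(-1, 1000)); // rebuild path: 1 check
    }
}
```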


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

In order to differentiate between the index rebuilder retries  
(UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen in 
the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part of  
PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
blocks the region split as the check for region closing does not happen  
blockingMemstoreSize > 0
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing irrespective of the 
blockingMemstoreSize
{code:java}
int i = 0;
do {
   try {
 if (i > 0) {
 Thread.sleep(100); 
 }
 checkForRegionClosing();   
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new IOException(e);
}
}while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> In order to differentiate between the index rebuilder retries  
> (UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen 
> in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part 
> of  PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> blocks the region split as the check for region closing does not happen  
> blockingMemstoreSize > 0
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> int i = 0;
> do {
>try {
>  if (i > 0) {
>  Thread.sleep(100); 
>  }
>  checkForRegionClosing();   
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new IOException(e);
> }
> }while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}





[jira] [Assigned] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5139:
-

Assignee: Vincent Poon

> PhoenixDriver lockInterruptibly usage could unlock without locking
> --
>
> Key: PHOENIX-5139
> URL: https://issues.apache.org/jira/browse/PHOENIX-5139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5139.4.x-HBase-1.4.v1.patch
>
>
> We have calls to lockInterruptibly surrounded by a finally call to unlock, 
> but there's a chance InterruptedException was thrown and we didn't obtain the 
> lock.





[jira] [Updated] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5139:
--
Attachment: PHOENIX-5139.4.x-HBase-1.4.v1.patch

> PhoenixDriver lockInterruptibly usage could unlock without locking
> --
>
> Key: PHOENIX-5139
> URL: https://issues.apache.org/jira/browse/PHOENIX-5139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5139.4.x-HBase-1.4.v1.patch
>
>
> We have calls to lockInterruptibly surrounded by a finally call to unlock, 
> but there's a chance InterruptedException was thrown and we didn't obtain the 
> lock.





[jira] [Created] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5139:
-

 Summary: PhoenixDriver lockInterruptibly usage could unlock 
without locking
 Key: PHOENIX-5139
 URL: https://issues.apache.org/jira/browse/PHOENIX-5139
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.1.0
Reporter: Vincent Poon


We have calls to lockInterruptibly surrounded by a try/finally that calls unlock, but if lockInterruptibly throws InterruptedException we never acquired the lock, and the finally block then unlocks a lock we do not hold.
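The hazard can be sketched with a plain ReentrantLock (a generic illustration of the pattern, not the PhoenixDriver code itself):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockPatternDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Buggy shape: if lockInterruptibly() is interrupted before acquiring,
    // the finally block still runs and unlock() throws
    // IllegalMonitorStateException, because we never held the lock.
    static void buggy() throws InterruptedException {
        try {
            lock.lockInterruptibly();
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }

    // Safe shape: acquire the lock *before* entering the try/finally, so
    // unlock() only runs once the lock is actually held.
    static void safe() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}
```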





[jira] [Updated] (PHOENIX-5069) Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-02-13 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-5069:
-
Attachment: PHOENIX-5069.4.x-HBase-1.3.001.patch

> Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache
> ---
>
> Key: PHOENIX-5069
> URL: https://issues.apache.org/jira/browse/PHOENIX-5069
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Attachments: PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.001.patch, 
> PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.002.patch, 
> PHOENIX-5069.4.x-HBase-1.3.001.patch, PHOENIX-5069.4.x-HBase-1.4.001.patch, 
> PHOENIX-5069.master.001.patch, PHOENIX-5069.master.002.patch, 
> PHOENIX-5069.master.003.patch, PHOENIX-5069.master.004.patch, 
> PHOENIX-5069.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> The current Phoenix Stats Cache uses a TTL-based eviction policy. A cached 
> entry expires a fixed time (900s by default) after it was created, which 
> causes a cache miss the next time the Compiler/Optimizer fetches stats. As 
> the graph shows, fetching stats from the cache is a blocking operation: on 
> a cache miss, a round trip over the wire is needed to scan the SYSTEM.STATS 
> table, fetch the latest stats, and rebuild the cache before the stats are 
> returned to the Compiler/Optimizer. Each cache miss therefore incurs a 
> significant performance penalty and shows up as periodic latency spikes.
> *This Jira suggests to use asynchronous refresh mechanism to provide a 
> non-blocking cache. For details, please see the linked design document below.*
> [~karanmehta93] [~twdsi...@gmail.com] [~dbwong] [~elserj] [~an...@apache.org] 
> [~sergey soldatov] 
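The suggested idea can be sketched with a tiny non-blocking cache model (my own illustration with made-up names, not the patch's actual code): only a cold miss loads synchronously; later reads return the possibly-stale entry immediately and refresh it in the background.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Toy model of a non-blocking stats cache (illustrative names only).
class AsyncRefreshCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();
    private final Function<K, V> loader;

    AsyncRefreshCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Never blocks once warm: a present (possibly stale) entry is returned
    // immediately and a background reload is scheduled; only a cold miss
    // pays the synchronous load.
    V get(K key) {
        V cached = map.get(key);
        if (cached != null) {
            if (!refresher.isShutdown()) {
                refresher.submit(() -> map.put(key, loader.apply(key)));
            }
            return cached;
        }
        V loaded = loader.apply(key);
        map.put(key, loaded);
        return loaded;
    }

    // Drains pending background refreshes (for demonstration purposes).
    void awaitRefreshes() throws InterruptedException {
        refresher.shutdown();
        refresher.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The design choice mirrored here is the one the description argues for: staleness is tolerated for one read in exchange for never blocking the Compiler/Optimizer on a warm entry.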





[jira] [Assigned] (PHOENIX-5089) IndexScrutinyTool should be able to analyze tenant-owned indexes

2019-02-13 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-5089:


Assignee: Gokcen Iskender  (was: Geoffrey Jacoby)

> IndexScrutinyTool should be able to analyze tenant-owned indexes
> 
>
> Key: PHOENIX-5089
> URL: https://issues.apache.org/jira/browse/PHOENIX-5089
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Gokcen Iskender
>Priority: Major
>
> IndexScrutiny uses global connections to look up the indexes it is asked 
> to analyze, which means it won't be able to see indexes owned by tenant 
> views. We should add an optional tenantId parameter that uses a tenant 
> connection (and potentially our MapReduce framework's tenant connection 
> support) to allow analyzing those indexes as well.
> This is similar to PHOENIX-4940 for the index rebuild tool.





[jira] [Assigned] (PHOENIX-3710) Cannot use lowername data table name with indextool

2019-02-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-3710:
---

Assignee: Josh Elser  (was: Sergey Soldatov)

> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Assignee: Josh Elser
>Priority: Minor
> Attachments: PHOENIX-3710.patch, test.sh, test.sql
>
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table name being explicitly lowercased.
> The tool appears to be referring to the lowercase table, not the uppercase 
> version.
> A workaround exists by changing the table name, but this is not always 
> feasible.
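The case handling can be modeled roughly like this (my approximation of SQL identifier folding, not the actual Phoenix SchemaUtil code): unquoted identifiers fold to upper case, double-quoted identifiers keep their case, so the shell-escaped quotes must actually survive argument parsing for the lowercase name to be preserved.

```java
public class IdentifierDemo {
    // Rough model: unquoted SQL identifiers fold to upper case;
    // double-quoted identifiers keep their exact case.
    static String normalize(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"")
                && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("my_lowcase_table"));     // MY_LOWCASE_TABLE
        System.out.println(normalize("\"my_lowcase_table\"")); // my_lowcase_table
    }
}
```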





[jira] [Updated] (PHOENIX-5138) ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it

2019-02-13 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5138:
-
Description: 
PHOENIX-5132 changed the ViewIndexId generation logic to use one sequence per 
physical view index table, whereas before it had been tenant + physical table. 
This removed the possibility of a tenant view index and a global view index 
having colliding ViewIndexIds.

However, existing Phoenix environments may have already created tenant-owned 
view index ids using the old sequence, and under PHOENIX-5132, if they create 
another, its ViewIndexId will go back to MIN_VALUE, which could cause a 
collision with an existing view index id.
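The collision scenario can be sketched as follows (a simplified model with made-up names; the real mechanism is Phoenix sequences, not AtomicLongs):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ViewIndexIdCollisionDemo {
    public static void main(String[] args) {
        // Pre-PHOENIX-5132: one sequence per (tenant, physical table),
        // from which a tenant view index id was already allocated.
        AtomicLong oldTenantSequence = new AtomicLong(Long.MIN_VALUE);
        long existingTenantViewIndexId = oldTenantSequence.getAndIncrement();

        // Post-PHOENIX-5132: a fresh sequence per physical table also
        // starts at MIN_VALUE, so the next id it hands out collides with
        // the id already allocated from the old sequence.
        AtomicLong newTableSequence = new AtomicLong(Long.MIN_VALUE);
        long newViewIndexId = newTableSequence.getAndIncrement();

        System.out.println(existingTenantViewIndexId == newViewIndexId); // true
    }
}
```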



> ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones 
> created before it
> --
>
> Key: PHOENIX-5138
> URL: https://issues.apache.org/jira/browse/PHOENIX-5138
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> PHOENIX-5132 changed the ViewIndexId generation logic to use one sequence per 
> physical view index table, whereas before it had been tenant + physical 
> table. This removed the possibility of a tenant view index and a global view 
> index having colliding ViewIndexIds.
> However, existing Phoenix environments may have already created tenant-owned 
> view index ids using the old sequence, and under PHOENIX-5132 if they create 
> another, its ViewIndexId will got back to MIN_VALUE, which could cause a 
> collision with an existing view index id. 





[jira] [Created] (PHOENIX-5138) ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it

2019-02-13 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-5138:


 Summary: ViewIndexId sequences created after PHOENIX-5132 
shouldn't collide with ones created before it
 Key: PHOENIX-5138
 URL: https://issues.apache.org/jira/browse/PHOENIX-5138
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby








[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Summary: Index Rebuilder scan increases data table region split time  (was: 
Index Rebuilder blocks data table region split)

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva]
> In order to differentiate between the index rebuilder retries  
> (UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen 
> in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part 
> of  PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> blocks the region split as the check for region closing does not happen  
> blockingMemstoreSize > 0
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing irrespective of the 
> blockingMemstoreSize
> {code:java}
> int i = 0;
> do {
>try {
>  if (i > 0) {
>  Thread.sleep(100); 
>  }
>  checkForRegionClosing();   
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new IOException(e);
> }
> }while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}





[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, in order to differentiate the index rebuilder retries (UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to check for region closing irrespective of blockingMemstoreSize:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i++ < 30);
{code}


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva]

In order to differentiate between the index rebuilder retries  
(UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen in 
the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part of  
PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
blocks the region split as the check for region closing does not happen  
blockingMemstoreSize > 0
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing irrespective of the 
blockingMemstoreSize
{code:java}
int i = 0;
do {
   try {
 if (i > 0) {
 Thread.sleep(100); 
 }
 checkForRegionClosing();   
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new IOException(e);
}
}while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> In order to differentiate between the index rebuilder retries  
> (UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen 
> in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part 
> of  PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> blocks the region split as the check for region closing does not happen  
> blockingMemstoreSize > 0
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing irrespective of the 
> blockingMemstoreSize
> {code:java}
> int i = 0;
> do {
>try {
>  if (i > 0) {
>  Thread.sleep(100); 
>  }
>  checkForRegionClosing();   
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new IOException(e);
> }
> }while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}





[jira] [Created] (PHOENIX-5137) Index Rebuild blocks data table region split

2019-02-13 Thread Kiran Kumar Maturi (JIRA)
Kiran Kumar Maturi created PHOENIX-5137:
---

 Summary: Index Rebuild blocks data table region split
 Key: PHOENIX-5137
 URL: https://issues.apache.org/jira/browse/PHOENIX-5137
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Kiran Kumar Maturi
Assignee: Kiran Kumar Maturi


[~lhofhansl] [~vincentpoon] [~tdsilva]

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for rebuildIndices, in order to differentiate the index rebuilder retries (UngroupedAggregateRegionObserver.rebuildIndices()) from the commits that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i < 30; i++) {
    try {
        checkForRegionClosing();
        ...
{code}
The plan is to check for region closing irrespective of blockingMemstoreSize:
{code:java}
int i = 0;
do {
    try {
        if (i > 0) {
            Thread.sleep(100);
        }
        checkForRegionClosing();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IOException(e);
    }
} while (blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i++ < 30);
{code}






[jira] [Updated] (PHOENIX-5137) Index Rebuilder blocks data table region split

2019-02-13 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Summary: Index Rebuilder blocks data table region split  (was: Index 
Rebuild blocks data table region split)

> Index Rebuilder blocks data table region split
> --
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> [~lhofhansl] [~vincentpoon] [~tdsilva]
> In order to differentiate between the index rebuilder retries  
> (UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen 
> in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part 
> of  PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> blocks the region split as the check for region closing does not happen  
> blockingMemstoreSize > 0
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing irrespective of the 
> blockingMemstoreSize
> {code:java}
> int i = 0;
> do {
>try {
>  if (i > 0) {
>  Thread.sleep(100); 
>  }
>  checkForRegionClosing();   
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new IOException(e);
> }
> }while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i++ < 30);
> {code}


