[jira] [Updated] (PHOENIX-5145) GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

2019-02-18 Thread MariaCarrie (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MariaCarrie updated PHOENIX-5145:
-
Description: 
I can successfully read the data in local mode. Here is my code:

{code:java}
val sqlContext: SQLContext = missionSession.app.ss.sqlContext
System.setProperty("sun.security.krb5.debug", "true")
System.setProperty("sun.security.spnego.debug", "true")
UserGroupInformation.loginUserFromKeytab("d...@devdip.org", "devdmp.keytab")
// Load as a DataFrame directly using a Configuration object
val df: DataFrame = sqlContext.phoenixTableAsDataFrame(missionSession.config.tableName, Seq("ID"), zkUrl = Some(missionSession.config.zkUrl))
df.show(5)
{code}

But when I submit this to YARN for execution, an exception is thrown:

{noformat}
Tue Feb 19 13:07:53 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:53 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:53 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:54 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:54 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:55 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:07:57 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress.
Tue Feb 19 13:08:01 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Tue Feb 19 13:08:11 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Tue Feb 19 13:08:21 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Tue Feb 19 13:08:31 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Tue Feb 19 13:08:41 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Tue Feb 19 13:09:01 CST 2019, RpcRetryingCaller{globalStartTime=1550552873361, pause=100, maxAttempts=36}, java.io.IOException: Call to test-dmp5.fengdai.org/10.200.162.26:16020 failed on local exception:
{noformat}
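A common approach for this class of failure on YARN (a general sketch, not a confirmed fix for this report: the keytab path, class, and jar names below are placeholders) is to let Spark perform the Kerberos login and distribute delegation tokens to the executors via spark-submit, rather than calling loginUserFromKeytab() in driver code, which only authenticates the JVM it runs in:

{noformat}
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal d...@devdip.org \
  --keytab /path/to/devdmp.keytab \
  --class com.example.PhoenixReadJob \
  phoenix-read-job.jar
{noformat}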

[jira] [Created] (PHOENIX-5145) GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

2019-02-18 Thread MariaCarrie (JIRA)
MariaCarrie created PHOENIX-5145:


 Summary: GSSException: No valid credentials provided (Mechanism 
level: Failed to find any Kerberos tgt) 
 Key: PHOENIX-5145
 URL: https://issues.apache.org/jira/browse/PHOENIX-5145
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
 Environment: HDP 3.0.0, Phoenix 5.0.0, HBase 2.0.0, Spark 2.3.1, Hadoop 3.0.1
Reporter: MariaCarrie



[jira] [Created] (PHOENIX-5144) C++ JDBC Driver

2019-02-18 Thread yinghua_zh (JIRA)
yinghua_zh created PHOENIX-5144:
---

 Summary: C++ JDBC Driver
 Key: PHOENIX-5144
 URL: https://issues.apache.org/jira/browse/PHOENIX-5144
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.1
Reporter: yinghua_zh


Can you provide a C++ client driver equivalent to the JDBC driver?
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5143) Support Range Scan for all columns with secondary global index without creating covered index

2019-02-18 Thread Kshitij Kulshrestha (JIRA)
Kshitij Kulshrestha created PHOENIX-5143:


 Summary: Support Range Scan for all columns with secondary global 
index without creating covered index
 Key: PHOENIX-5143
 URL: https://issues.apache.org/jira/browse/PHOENIX-5143
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0
Reporter: Kshitij Kulshrestha


 
{code:java}
CREATE TABLE IF NOT EXISTS INDEX_OPT (
    MAIN_KEY VARCHAR(32) NOT NULL,
    ALERT_ID VARCHAR(32),
    ALERT_TYPE VARCHAR(32),
    CONSTRAINT PK PRIMARY KEY (MAIN_KEY)
)
{code}
 

 
{noformat}
--> WITHOUT SECONDARY GLOBAL INDEX ON ALERT_ID
EXPLAIN SELECT * FROM INDEX_OPT WHERE ALERT_ID = '1'
CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TEST_DB:INDEX_OPT
 SERVER FILTER BY ALERT_ID = '1'
 SERVER 200 ROW LIMIT
CLIENT 200 ROW LIMIT
{noformat}
 

 
{noformat}
--> WITH SECONDARY GLOBAL INDEX ON ALERT_ID
CREATE INDEX MY_INDEX ON INDEX_OPT (ALERT_ID)
EXPLAIN SELECT * FROM INDEX_OPT WHERE ALERT_ID = '1'
 
CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TEST_DB:INDEX_OPT
 SERVER FILTER BY ALERT_ID = '1'
 SERVER 200 ROW LIMIT
CLIENT 200 ROW LIMIT
{noformat}
 

 

As we can see, even though we have created an index on ALERT_ID, the plan still shows a FULL SCAN when we select all columns. If we select only the PRIMARY KEY, it does a RANGE SCAN:

 
{noformat}
QUERY 1
EXPLAIN SELECT MAIN_KEY FROM INDEX_OPT WHERE ALERT_ID = '1'
CLIENT 1-CHUNK 200 ROWS 9000 BYTES SERIAL 1-WAY ROUND ROBIN RANGE SCAN OVER 
TEST_DB:MY_INDEX33 ['1']
 SERVER FILTER BY FIRST KEY ONLY
 SERVER 200 ROW LIMIT
CLIENT 200 ROW LIMIT
{noformat}
 

 

 
{noformat}
QUERY 2
EXPLAIN SELECT * FROM INDEX_OPT WHERE MAIN_KEY = '1'
CLIENT 1-CHUNK 1 ROWS 215 BYTES SERIAL 1-WAY ROUND ROBIN POINT LOOKUP ON 1 KEY 
OVER TEST_DB:INDEX_OPT
 SERVER 200 ROW LIMIT
CLIENT 200 ROW LIMIT
{noformat}
 

 

If we look at query 1 and query 2, neither is doing a FULL SCAN. But if I write the query

 
{noformat}
EXPLAIN
SELECT * FROM INDEX_OPT WHERE MAIN_KEY = (
SELECT MAIN_KEY FROM INDEX_OPT WHERE ALERT_ID ='1'
)
{noformat}
 
{noformat}
CLIENT 1-CHUNK 200 ROWS 43000 BYTES SERIAL 1-WAY ROUND ROBIN FULL SCAN OVER TEST_DB:INDEX_OPT
 SERVER 200 ROW LIMIT
CLIENT 200 ROW LIMIT
 EXECUTE SINGLE-ROW SUBQUERY
CLIENT 1-CHUNK 2 ROWS 90 BYTES SERIAL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST_DB:MY_INDEX33 ['1']
 SERVER FILTER BY FIRST KEY ONLY
 SERVER 2 ROW LIMIT
CLIENT 2 ROW LIMIT
{noformat}
It is doing a FULL SCAN for the MAIN_KEY lookup, when it could have done a RANGE SCAN instead.
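For context, the existing workaround that this issue asks to make unnecessary is a covered index, which INCLUDEs the projected columns so that SELECT * can be served entirely from the index (a sketch against the table above; the index name is hypothetical):

{code:java}
CREATE INDEX MY_COVERED_INDEX ON INDEX_OPT (ALERT_ID) INCLUDE (ALERT_TYPE);
{code}

With every selected column covered, EXPLAIN SELECT * FROM INDEX_OPT WHERE ALERT_ID = '1' should show a RANGE SCAN over the index rather than a FULL SCAN of the data table.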





[jira] [Updated] (PHOENIX-5097) Index Scrutiny Tool changes schema name to UPPERCASE

2019-02-18 Thread Amarnath Ramamoorthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amarnath Ramamoorthi updated PHOENIX-5097:
--
Description: 
Create an index table and run the *Index Scrutiny Tool*:
{code:java}
CREATE INDEX IF NOT EXISTS "IDX_CAP_DEMO_TABLE" ON "CAP".DEMO_TABLE ("id_x") 
INCLUDE ("id_y");

hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -s CAP -dt 
DEMO_TABLE -it IDX_CAP_DEMO_TABLE -o
{code}
This works without error, since the actual schema name is uppercase (CAP).

However, with a lowercase schema name (nocap), the tool converts it to UPPERCASE.
{code:java}
CREATE INDEX IF NOT EXISTS "IDX_NOCAP_DEMO_TABLE" ON "nocap".DEMO_TABLE 
("id_x") INCLUDE ("id_x");

[amar@locahost ~]$ hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool 
-s nocap -dt DEMO_TABLE -it IDX_NOCAP_DEMO_TABLE -o
...
...
...
19/01/11 13:45:09 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
19/01/11 13:45:11 ERROR index.IndexScrutinyTool: An exception occurred while 
performing the indexing job: IllegalArgumentException:  IDX_NOCAP_DEMO_TABLE is 
not an index table for NOCAP.DEMO_TABLE  at:
java.lang.IllegalArgumentException:  IDX_NOCAP_DEMO_TABLE is not an index table 
for NOCAP.DEMO_TABLE 
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyTool.run(IndexScrutinyTool.java:394)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.phoenix.mapreduce.index.IndexScrutinyTool.main(IndexScrutinyTool.java:518)
{code}
The tool changes the schema name to uppercase, so we pass the schema name in quotes, hoping to preserve the case:
{code:java}
hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool -s "nocap" -dt 
DEMO_TABLE -it IDX_NOCAP_DEMO_TABLE -o
{code}
The same error follows when using the command above with plain quotes. Using escaped double quotes instead:
{code:java}
[amar@locahost ~]$ hbase org.apache.phoenix.mapreduce.index.IndexScrutinyTool 
-s \"\"nocap\"\" -dt DEMO_TABLE -it IDX_NOCAP_DEMO_TABLE -o
...
...
...
19/01/11 10:34:18 INFO Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
19/01/11 10:34:20 INFO index.IndexScrutinyTool: Running scrutiny 
[schemaName="nocap", dataTable=DEMO_TABLE, indexTable=IDX_NOCAP_DEMO_TABLE, 
useSnapshot=false, timestamp=1547202800130, batchSize=1000, 
outputBasePath=null, outputFormat=TABLE, outputMaxRows=100]
19/01/11 10:34:20 INFO index.IndexScrutinyTool: Query used on source table to 
feed the mapper: SELECT /*+ NO_INDEX */ "id_x","id_y","id_z" FROM 
NOCAP.DEMO_TABLE
19/01/11 10:34:20 INFO index.IndexScrutinyTool: Upsert statement used for 
output table: UPSERT  INTO PHOENIX_INDEX_SCRUTINY ("SOURCE_TABLE", 
"TARGET_TABLE", "SCRUTINY_EXECUTE_TIME", "SOURCE_ROW_PK_HASH", "SOURCE_TS", 
"TARGET_TS", "HAS_TARGET_ROW", "id_x","id_y","id_z"  ) VALUES (?, ?, ?, ?, 
?, ?)
19/01/11 10:34:20 INFO index.IndexScrutinyTool: Query used on source table to 
feed the mapper: SELECT /*+ NO_INDEX */ "id_x","id_y","id_z" FROM 
NOCAP.IDX_NOCAP_DEMO_TABLE
19/01/11 10:34:20 INFO index.IndexScrutinyTool: Upsert statement used for 
output table: UPSERT  INTO PHOENIX_INDEX_SCRUTINY ("SOURCE_TABLE", 
"TARGET_TABLE", "SCRUTINY_EXECUTE_TIME", "SOURCE_ROW_PK_HASH", "SOURCE_TS", 
"TARGET_TS", "HAS_TARGET_ROW", "id_x","id_y","id_z"  ) VALUES (?, ?, ?, ?, 
?, ?)
19/01/11 10:34:21 INFO index.IndexScrutinyTool: Running Index Scrutiny in 
Background - Submit async and exit
19/01/11 10:34:23 ERROR mapreduce.PhoenixInputFormat: Failed to get the query 
plan with error [ERROR 1012 (42M03): Table undefined. 
tableName=NOCAP.DEMO_TABLE]
19/01/11 10:34:23 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
/user/amar/.staging/job_1540390314309_0119
19/01/11 10:34:23 ERROR index.IndexScrutinyTool: An exception occurred while 
performing the indexing job: RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=NOCAP.DEMO_TABLE at:
java.lang.RuntimeException: org.apache.phoenix.schema.TableNotFoundException: 
ERROR 1012 (42M03): Table undefined. tableName=NOCAP.DEMO_TABLE
at 
org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:139)
at 
org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:81)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:305)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:322)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at 

[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-18 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Description: 
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

As part of PHOENIX-4600, blockingMemstoreSize was set to -1 for the index rebuilder commits (UngroupedAggregateRegionObserver.rebuildIndices()) in order to differentiate them from the commits that happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen():
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
This blocks the region split, because the check for region closing only happens when blockingMemstoreSize > 0:
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
The plan is to check for region closing at least once before committing the batch:
{code:java}
checkForRegionClosing();
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
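The shape of the fix can be illustrated with a self-contained sketch (simplified names, not the actual Phoenix code; halving the memstore size below merely stands in for waiting on a flush):

```java
public class AtLeastOnceCheck {
    static int checks = 0;

    // Stand-in for UngroupedAggregateRegionObserver.checkForRegionClosing()
    static void checkForRegionClosing() { checks++; }

    // Check for region closing at least once, even when blockingMemstoreSize
    // is the -1 sentinel used by rebuildIndices() and the loop never runs.
    static void commitWithBackpressure(long blockingMemstoreSize, long memstoreSize) {
        checkForRegionClosing();
        for (int i = 0; blockingMemstoreSize > 0 && memstoreSize > blockingMemstoreSize && i < 30; i++) {
            checkForRegionClosing();
            memstoreSize /= 2; // stand-in for the memstore draining while we wait
        }
    }

    public static void main(String[] args) {
        commitWithBackpressure(-1, 1000); // rebuildIndices case: loop body is skipped
        System.out.println(checks);       // the closing check still ran once
    }
}
```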


  was:
[~lhofhansl] [~vincentpoon] [~tdsilva] please review

In order to differentiate between the index rebuilder retries  
(UngroupedAggregateRegionObserver.rebuildIndices()) and commits that happen in 
the loop of UngroupedAggregateRegionObserver.doPostScannerOpen() as part of  
PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices;
{code:java}
commitBatchWithRetries(region, mutations, -1);{code}
blocks the region split as the check for region closing does not happen  
blockingMemstoreSize > 0
{code:java}
for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i < 30; i++) {
  try{
   checkForRegionClosing();
   
{code}
Plan is to have the check for region closing at least once before committing 
the batch
{code:java}
int i = 0;
do {
   try {
 if (i > 0) {
 Thread.sleep(100); 
 }
 checkForRegionClosing();   
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new IOException(e);
}
}while (blockingMemstoreSize > 0 && region.getMemstoreSize() > 
blockingMemstoreSize && i++ < 30);
{code}



> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch
>


