[jira] [Updated] (PHOENIX-4290) Full table scan performed for DELETE with table having immutable indexes

2017-10-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4290:
--
Attachment: PHOENIX-4290_wip3.patch

> Full table scan performed for DELETE with table having immutable indexes
> 
>
> Key: PHOENIX-4290
> URL: https://issues.apache.org/jira/browse/PHOENIX-4290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4290_wip1.patch, PHOENIX-4290_wip2.patch, 
> PHOENIX-4290_wip3.patch
>
>
> If a DELETE command is issued with a partial match for the leading part of 
> the primary key and the table has immutable indexes, a full scan will occur 
> against the index instead of using the data table.
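
A minimal JDBC sketch of the scenario, assuming a locally running Phoenix; the 
table and index names are illustrative, not taken from the issue:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeleteWithImmutableIndex {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Immutable table with a composite primary key and a global index.
            stmt.execute("CREATE TABLE metrics (host VARCHAR NOT NULL, ts DATE NOT NULL, "
                    + "val DOUBLE CONSTRAINT pk PRIMARY KEY (host, ts)) IMMUTABLE_ROWS=true");
            stmt.execute("CREATE INDEX metrics_idx ON metrics (val)");
            // The WHERE clause matches only the leading PK column. Per the issue,
            // this can be planned as a full scan over METRICS_IDX instead of a
            // range scan over the data table.
            stmt.execute("DELETE FROM metrics WHERE host = 'h1'");
            conn.commit();
        }
    }
}
{code}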



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219974#comment-16219974
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on the issue:

https://github.com/apache/phoenix/pull/277
  
These commits also resolve PHOENIX-4227.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.
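
A minimal sketch of the symptom using the plain HBase 1.x client API, assuming 
{{phoenix.schema.isNamespaceMappingEnabled=true}} is set on both client and 
server; the JDBC URL is illustrative:

{code}
import java.sql.DriverManager;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MutexNamespaceCheck {
    public static void main(String[] args) throws Exception {
        // Opening a Phoenix connection triggers creation of the system tables.
        DriverManager.getConnection("jdbc:phoenix:localhost").close();

        Configuration conf = HBaseConfiguration.create();
        try (Connection hbase = ConnectionFactory.createConnection(conf);
             Admin admin = hbase.getAdmin()) {
            // Before the fix: the first check returns true and the second false,
            // i.e. the mutex table lands in the default namespace.
            System.out.println("SYSTEM.MUTEX exists: "
                    + admin.tableExists(TableName.valueOf("SYSTEM.MUTEX")));
            System.out.println("SYSTEM:MUTEX exists: "
                    + admin.tableExists(TableName.valueOf("SYSTEM:MUTEX")));
        }
    }
}
{code}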



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix issue #277: PHOENIX-3757 System mutex table not being created in SYS...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on the issue:

https://github.com/apache/phoenix/pull/277
  
These commits also resolve PHOENIX-4227.


---


[jira] [Commented] (PHOENIX-3945) Introduce new Scalar Function to calculate a collation key from a string

2017-10-25 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219740#comment-16219740
 ] 

Shehzaad Nakhoda commented on PHOENIX-3945:
---

Duplicate of PHOENIX-4237

> Introduce new Scalar Function to calculate a collation key from a string
> 
>
> Key: PHOENIX-3945
> URL: https://issues.apache.org/jira/browse/PHOENIX-3945
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Shehzaad Nakhoda
>
> It is necessary to sort varchars in a language-sensitive way.
> A scalar function is needed which, given a string and a locale, returns a 
> byte array that can then be sorted in binary order.
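
For reference, the JDK already exposes the mechanism such a function could 
wrap; a minimal sketch using java.text.Collator (the locale and input are 
illustrative):

{code}
import java.text.Collator;
import java.util.Locale;

public class CollationKeyDemo {
    public static void main(String[] args) {
        // getCollationKey() returns bytes whose unsigned binary order matches
        // the locale's linguistic sort order -- exactly the property the
        // proposed scalar function needs to provide.
        Collator collator = Collator.getInstance(Locale.FRENCH);
        byte[] key = collator.getCollationKey("déjà").toByteArray();
        System.out.println("collation key length: " + key.length);
    }
}
{code}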



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-3945) Introduce new Scalar Function to calculate a collation key from a string

2017-10-25 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda resolved PHOENIX-3945.
---
Resolution: Duplicate

> Introduce new Scalar Function to calculate a collation key from a string
> 
>
> Key: PHOENIX-3945
> URL: https://issues.apache.org/jira/browse/PHOENIX-3945
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Shehzaad Nakhoda
>
> It is necessary to sort varchars in a language-sensitive way.
> A scalar function is needed which, given a string and a locale, returns a 
> byte array that can then be sorted in binary order.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4323) LocalIndexes could fail if your data row is not in the same region as your index region

2017-10-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219720#comment-16219720
 ] 

Sergey Soldatov commented on PHOENIX-4323:
--

Well, by design local index data should be stored in the same region at all 
times. But it's quite an interesting case when the region has room for only a 
single rowkey. Ping [~rajeshbabu]

> LocalIndexes could fail if your data row is not in the same region as your 
> index region
> ---
>
> Key: PHOENIX-4323
> URL: https://issues.apache.org/jira/browse/PHOENIX-4323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Assignee: Vincent Poon
> Attachments: LocalIndexIT.java
>
>
> This is not likely to happen, but if it does, your data table and index 
> writes will never succeed.
> In HRegion.doMiniBatchMutation(), index rows are created in preBatchMutate(); 
> when checkRow() is then called on an index row, the exception will bubble up 
> if the index row is not in the same region as the data row.
> Like I said, this is unlikely, but you would have to do a region merge to fix 
> this issue if it is encountered.
> [~vincentpoon] has a test, which he will attach to this JIRA, showing an 
> example of how this can happen. If this ever happens, the write will never 
> succeed unless you merge regions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4323) LocalIndexes could fail if your data row is not in the same region as your index region

2017-10-25 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219684#comment-16219684
 ] 

Vincent Poon edited comment on PHOENIX-4323 at 10/25/17 10:41 PM:
--

Test to reproduce the issue: testIndexRowDifferentRegion()


was (Author: vincentpoon):
Test to reproduce the issue

> LocalIndexes could fail if your data row is not in the same region as your 
> index region
> ---
>
> Key: PHOENIX-4323
> URL: https://issues.apache.org/jira/browse/PHOENIX-4323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Assignee: Vincent Poon
> Attachments: LocalIndexIT.java
>
>
> This is not likely to happen, but if it does, your data table and index 
> writes will never succeed.
> In HRegion.doMiniBatchMutation(), index rows are created in preBatchMutate(); 
> when checkRow() is then called on an index row, the exception will bubble up 
> if the index row is not in the same region as the data row.
> Like I said, this is unlikely, but you would have to do a region merge to fix 
> this issue if it is encountered.
> [~vincentpoon] has a test, which he will attach to this JIRA, showing an 
> example of how this can happen. If this ever happens, the write will never 
> succeed unless you merge regions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4323) LocalIndexes could fail if your data row is not in the same region as your index region

2017-10-25 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4323:
--
Attachment: LocalIndexIT.java

Test to reproduce the issue

> LocalIndexes could fail if your data row is not in the same region as your 
> index region
> ---
>
> Key: PHOENIX-4323
> URL: https://issues.apache.org/jira/browse/PHOENIX-4323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
> Attachments: LocalIndexIT.java
>
>
> This is not likely to happen, but if it does, your data table and index 
> writes will never succeed.
> In HRegion.doMiniBatchMutation(), index rows are created in preBatchMutate(); 
> when checkRow() is then called on an index row, the exception will bubble up 
> if the index row is not in the same region as the data row.
> Like I said, this is unlikely, but you would have to do a region merge to fix 
> this issue if it is encountered.
> [~vincentpoon] has a test, which he will attach to this JIRA, showing an 
> example of how this can happen. If this ever happens, the write will never 
> succeed unless you merge regions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4323) LocalIndexes could fail if your data row is not in the same region as your index region

2017-10-25 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales reassigned PHOENIX-4323:
---

Assignee: Vincent Poon

> LocalIndexes could fail if your data row is not in the same region as your 
> index region
> ---
>
> Key: PHOENIX-4323
> URL: https://issues.apache.org/jira/browse/PHOENIX-4323
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Assignee: Vincent Poon
> Attachments: LocalIndexIT.java
>
>
> This is not likely to happen, but if it does, your data table and index 
> writes will never succeed.
> In HRegion.doMiniBatchMutation(), index rows are created in preBatchMutate(); 
> when checkRow() is then called on an index row, the exception will bubble up 
> if the index row is not in the same region as the data row.
> Like I said, this is unlikely, but you would have to do a region merge to fix 
> this issue if it is encountered.
> [~vincentpoon] has a test, which he will attach to this JIRA, showing an 
> example of how this can happen. If this ever happens, the write will never 
> succeed unless you merge regions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4323) LocalIndexes could fail if your data row is not in the same region as your index region

2017-10-25 Thread churro morales (JIRA)
churro morales created PHOENIX-4323:
---

 Summary: LocalIndexes could fail if your data row is not in the 
same region as your index region
 Key: PHOENIX-4323
 URL: https://issues.apache.org/jira/browse/PHOENIX-4323
 Project: Phoenix
  Issue Type: Bug
Reporter: churro morales


This is not likely to happen, but if it does, your data table and index writes 
will never succeed.

In HRegion.doMiniBatchMutation(), index rows are created in preBatchMutate(); 
when checkRow() is then called on an index row, the exception will bubble up if 
the index row is not in the same region as the data row.

Like I said, this is unlikely, but you would have to do a region merge to fix 
this issue if it is encountered.

[~vincentpoon] has a test, which he will attach to this JIRA, showing an 
example of how this can happen. If this ever happens, the write will never 
succeed unless you merge regions.
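
An illustrative sketch of the boundary check that trips here, simplified from 
HRegion's behavior (region bounds and row keys are made up; the real code 
throws a WrongRegionException out of doMiniBatchMutation()):

{code}
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionBoundsCheck {
    // Simplified stand-in for HRegion.checkRow(): reject rows outside [start, end).
    static void checkRow(HRegionInfo region, byte[] row) {
        if (!region.containsRow(row)) {
            throw new IllegalStateException("Requested row out of range: "
                    + Bytes.toStringBinary(row));
        }
    }

    public static void main(String[] args) {
        HRegionInfo region = new HRegionInfo(TableName.valueOf("T"),
                Bytes.toBytes("b"), Bytes.toBytes("m")); // region [b, m)
        checkRow(region, Bytes.toBytes("c")); // data row: in range
        checkRow(region, Bytes.toBytes("x")); // index row outside the region:
                                              // throws, and the whole batch
                                              // mutation fails
    }
}
{code}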




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4322) DESC primary key column with variable length does not work in SkipScanFilter

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219638#comment-16219638
 ] 

ASF GitHub Bot commented on PHOENIX-4322:
-

GitHub user maryannxue opened a pull request:

https://github.com/apache/phoenix/pull/278

PHOENIX-4322 DESC primary key column with variable length does not work in 
SkipScanFilter

Changes:
Avoid adding an extra trailing separator to the key

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maryannxue/phoenix phoenix-4322

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/278.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #278






> DESC primary key column with variable length does not work in SkipScanFilter
> 
>
> Key: PHOENIX-4322
> URL: https://issues.apache.org/jira/browse/PHOENIX-4322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
>
> Example:
> {code}
> @Test
> public void inDescCompositePK3() throws Exception {
>     String table = generateUniqueName();
>     String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, code VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
>     Object[][] insertedRows = new Object[][]{{"o1", "1"}, {"o2", "2"}, {"o3", "3"}};
>     runQueryTest(ddl, upsert("oid", "code"), insertedRows,
>             new Object[][]{{"o2", "2"}, {"o1", "1"}},
>             new WhereCondition("(oid, code)", "IN", "(('o2', '2'), ('o1', '1'))"),
>             table);
> }
> {code}
> Here the last column in the primary key is in DESC order and has variable 
> length, and the WHERE clause involves an "IN" operator with a 
> RowValueConstructor specifying all PK columns. We get no results.
> This ends up being the root cause of not being able to use the child/parent 
> join optimization on DESC pk columns, as described in PHOENIX-3050.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #278: PHOENIX-4322 DESC primary key column with variabl...

2017-10-25 Thread maryannxue
GitHub user maryannxue opened a pull request:

https://github.com/apache/phoenix/pull/278

PHOENIX-4322 DESC primary key column with variable length does not work in 
SkipScanFilter

Changes:
Avoid adding an extra trailing separator to the key

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maryannxue/phoenix phoenix-4322

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/278.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #278






---


[jira] [Created] (PHOENIX-4322) DESC primary key column with variable length does not work in SkipScanFilter

2017-10-25 Thread Maryann Xue (JIRA)
Maryann Xue created PHOENIX-4322:


 Summary: DESC primary key column with variable length does not 
work in SkipScanFilter
 Key: PHOENIX-4322
 URL: https://issues.apache.org/jira/browse/PHOENIX-4322
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0
Reporter: Maryann Xue
Assignee: Maryann Xue
Priority: Minor


Example:
{code}
@Test
public void inDescCompositePK3() throws Exception {
    String table = generateUniqueName();
    String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, code VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
    Object[][] insertedRows = new Object[][]{{"o1", "1"}, {"o2", "2"}, {"o3", "3"}};
    runQueryTest(ddl, upsert("oid", "code"), insertedRows,
            new Object[][]{{"o2", "2"}, {"o1", "1"}},
            new WhereCondition("(oid, code)", "IN", "(('o2', '2'), ('o1', '1'))"),
            table);
}
{code}
Here the last column in the primary key is in DESC order and has variable 
length, and the WHERE clause involves an "IN" operator with a 
RowValueConstructor specifying all PK columns. We get no results.

This ends up being the root cause of not being able to use the child/parent 
join optimization on DESC pk columns, as described in PHOENIX-3050.
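
A self-contained sketch of why a stray trailing separator breaks DESC 
variable-length keys. The byte inversion and the 0xFF DESC separator mirror 
Phoenix's encoding; everything else is illustrative:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DescKeySketch {
    // DESC encoding inverts each byte so that ascending byte order corresponds
    // to descending order of the original values.
    static byte[] descEncode(String s) {
        byte[] b = s.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[b.length];
        for (int i = 0; i < b.length; i++) {
            out[i] = (byte) ~b[i];
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] stored = descEncode("2");          // key as stored in HBase
        byte[] probe = descEncode("2");           // key the skip scan should build
        byte[] probeWithSep = Arrays.copyOf(probe, probe.length + 1);
        probeWithSep[probe.length] = (byte) 0xFF; // extra trailing DESC separator

        System.out.println(Arrays.equals(stored, probe));        // true: row found
        System.out.println(Arrays.equals(stored, probeWithSep)); // false: row missed
    }
}
{code}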




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146998114
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

No need to catch the IOException, just let it bubble up so that the user 
knows there is an issue.
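
For contrast, the shape the review converges on might look like this (a sketch, 
not the committed code):

{code}
// createSysMutexTable() already swallows TableExistsException internally for
// the benign concurrent-creation case, so simply stop catching IOException
// here and let any real failure propagate to the caller.
createSysMutexTable(hBaseAdmin, ConnectionQueryServicesImpl.this.getProps());
{code}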


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219611#comment-16219611
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146998114
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

No need to catch the IOException, just let it bubble up so that the user 
knows there is an issue.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219540#comment-16219540
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user aertoria commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146990595
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

I think in those case you can just ignore the exception that you want to 
tolerate. So at least SQLException should be bubbled up.




> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread aertoria
Github user aertoria commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146990595
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

I think in those case you can just ignore the exception that you want to 
tolerate. So at least SQLException should be bubbled up.




---


[jira] [Comment Edited] (PHOENIX-4287) Incorrect aggregate query results when stats are disabled for parallelization

2017-10-25 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219532#comment-16219532
 ] 

Mujtaba Chohan edited comment on PHOENIX-4287 at 10/25/17 9:15 PM:
---

[~samarthjain] Just checked, this affects tables *without* any index as well as 
tables with global indexes.


was (Author: mujtabachohan):
[~samarthjain] Just checked, this affects tables with *without* any index as 
well as tables with global indexes.

> Incorrect aggregate query results when stats are disabled for parallelization
> 
>
> Key: PHOENIX-4287
> URL: https://issues.apache.org/jira/browse/PHOENIX-4287
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: HBase 1.3.1
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
>  Labels: localIndex
> Fix For: 4.12.1
>
>
> With {{phoenix.use.stats.parallelization}} set to {{false}}, aggregate query 
> returns incorrect results when stats are available.
> With local index and stats disabled for parallelization:
> {noformat}
> explain select count(*) from TABLE_T;
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> | PLAN                                                                                    | EST_BYTES_READ | EST_ROWS_READ | EST_INFO  |
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> | CLIENT 0-CHUNK 332170 ROWS 625043899 BYTES PARALLEL 0-WAY RANGE SCAN OVER TABLE_T [1]   | 625043899      | 332170        | 150792825 |
> | SERVER FILTER BY FIRST KEY ONLY                                                         | 625043899      | 332170        | 150792825 |
> | SERVER AGGREGATE INTO SINGLE ROW                                                        | 625043899      | 332170        | 150792825 |
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> select count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 0         |
> +-----------+
> {noformat}
> Using the data table:
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> | PLAN                                                                                  | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS    |
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> | CLIENT 2-CHUNK 332151 ROWS 438492470 BYTES PARALLEL 1-WAY FULL SCAN OVER TABLE_T      | 438492470      | 332151        | 1507928257617  |
> | SERVER FILTER BY FIRST KEY ONLY                                                       | 438492470      | 332151        | 1507928257617  |
> | SERVER AGGREGATE INTO SINGLE ROW                                                      | 438492470      | 332151        | 1507928257617  |
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 14        |
> +-----------+
> {noformat}
> Without stats available, results are correct:
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-------------------------------------------------------+----------------+---------------+--------------+
> | PLAN                                                  | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS  |
> +-------------------------------------------------------+----------------+---------------+--------------+
> | CLIENT 2-CHUNK PARALLEL 1-WAY FULL SCAN OVER TABLE_T  | null           | null          | null         |
> | SERVER FILTER BY FIRST KEY ONLY                       | null           | null          | null         |
> | SERVER AGGREGATE INTO SINGLE ROW                      | null           | null          | null         |
> +-------------------------------------------------------+----------------+---------------+--------------+
> select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 27        |
> +-----------+
> {noformat}
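
For anyone reproducing this, the property under discussion can be set on the 
client; a sketch (JDBC URL and table are illustrative):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class StatsParallelizationRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The setting from the report: stop using stats for parallelization.
        props.setProperty("phoenix.use.stats.parallelization", "false");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // With stats collected for TABLE_T, select count(*) on this
            // connection returns the wrong result per the report above.
        }
    }
}
{code}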



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (PHOENIX-4287) Incorrect aggregate query results when stats are disabled for parallelization

2017-10-25 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219532#comment-16219532
 ] 

Mujtaba Chohan commented on PHOENIX-4287:
-

[~samarthjain] Just checked, this affects tables with *without* any index as 
well as tables with global indexes.

> Incorrect aggregate query results when stats are disabled for parallelization
> 
>
> Key: PHOENIX-4287
> URL: https://issues.apache.org/jira/browse/PHOENIX-4287
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: HBase 1.3.1
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
>  Labels: localIndex
> Fix For: 4.12.1
>
>
> With {{phoenix.use.stats.parallelization}} set to {{false}}, aggregate query 
> returns incorrect results when stats are available.
> With local index and stats disabled for parallelization:
> {noformat}
> explain select count(*) from TABLE_T;
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> | PLAN                                                                                    | EST_BYTES_READ | EST_ROWS_READ | EST_INFO  |
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> | CLIENT 0-CHUNK 332170 ROWS 625043899 BYTES PARALLEL 0-WAY RANGE SCAN OVER TABLE_T [1]   | 625043899      | 332170        | 150792825 |
> | SERVER FILTER BY FIRST KEY ONLY                                                         | 625043899      | 332170        | 150792825 |
> | SERVER AGGREGATE INTO SINGLE ROW                                                        | 625043899      | 332170        | 150792825 |
> +----------------------------------------------------------------------------------------+----------------+---------------+-----------+
> select count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 0         |
> +-----------+
> {noformat}
> Using the data table:
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> | PLAN                                                                                  | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS    |
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> | CLIENT 2-CHUNK 332151 ROWS 438492470 BYTES PARALLEL 1-WAY FULL SCAN OVER TABLE_T      | 438492470      | 332151        | 1507928257617  |
> | SERVER FILTER BY FIRST KEY ONLY                                                       | 438492470      | 332151        | 1507928257617  |
> | SERVER AGGREGATE INTO SINGLE ROW                                                      | 438492470      | 332151        | 1507928257617  |
> +--------------------------------------------------------------------------------------+----------------+---------------+----------------+
> select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 14        |
> +-----------+
> {noformat}
> Without stats available, results are correct:
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-------------------------------------------------------+----------------+---------------+--------------+
> | PLAN                                                  | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS  |
> +-------------------------------------------------------+----------------+---------------+--------------+
> | CLIENT 2-CHUNK PARALLEL 1-WAY FULL SCAN OVER TABLE_T  | null           | null          | null         |
> | SERVER FILTER BY FIRST KEY ONLY                       | null           | null          | null         |
> | SERVER AGGREGATE INTO SINGLE ROW                      | null           | null          | null         |
> +-------------------------------------------------------+----------------+---------------+--------------+
> select /*+NO_INDEX*/ count(*) from TABLE_T;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 27        |
> +-----------+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219520#comment-16219520
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146986800
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

@twdsilva @aertoria Any thoughts?


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146986800
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

@twdsilva @aertoria Any thoughts?


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219516#comment-16219516
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on the issue:

https://github.com/apache/phoenix/pull/277
  
> Looks alright to me, but I don't have a lot of context to the intricacy 
of the change. 

It would be great if you could apply this patch locally and try out some 
sqlline-based end-to-end testing.

> Thanks for your hard work here, Karan!

Thank you, and thanks for your input!



> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix issue #277: PHOENIX-3757 System mutex table not being created in SYS...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on the issue:

https://github.com/apache/phoenix/pull/277
  
> Looks alright to me, but I don't have a lot of context to the intricacy 
of the change. 

It would be great if you could apply this patch locally and try out some 
sqlline-based end-to-end testing.

> Thanks for your hard work here, Karan!

Thank you, and thanks for your input!



---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219507#comment-16219507
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146985453
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

Earlier, we didn't have any locking during table migration, so it's a race 
condition which can lead to unexpected results. 
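
Condensed, the acquire/release pattern under discussion looks roughly like this 
(the method names follow the diff; everything else is a sketch):

{code}
boolean acquiredMutexLock = false;
try {
    // Throws UpgradeInProgressException if another client holds the mutex.
    acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
    // ... migrate the SYSTEM.* tables into the SYSTEM namespace ...
} finally {
    if (acquiredMutexLock) {
        releaseUpgradeMutex(mutexRowKey);
    }
}
{code}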


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: 

[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146985453
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

Earlier, we didn't have any locking during table migration, so it's a race 
condition which can lead to unexpected results. 


---


[jira] [Commented] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization config

2017-10-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219492#comment-16219492
 ] 

Samarth Jain commented on PHOENIX-4320:
---

Something was wrong with my setup. [~mujtabachohan] just pushed a commit and 
fixed it. Thanks, Mujtaba.

> Update website pages with information on phoenix.use.stats.parallelization 
> config
> 
>
> Key: PHOENIX-4320
> URL: https://issues.apache.org/jira/browse/PHOENIX-4320
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4320.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219455#comment-16219455
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146974246
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

Yes, this is a serious case that we should discuss. The correct thing to do 
is probably to FAIL the connection itself, or possibly to have retry logic for 
creating the table. 

This is because in the `acquireUpgradeMutex()` method we check whether either 
the SYSMUTEX table or the SYS:MUTEX table exists. The only possible case where 
both of those tables can be missing is when a client is trying to migrate the 
table, which disables the old table and creates the new one. There is a brief 
period of time when neither of these tables exists. Hence we throw an 
`UpgradeInProgressException` in such a case.

We have no way to determine whether the table doesn't exist at all or whether 
the table is in migration.
Is there any other scenario that this can affect?


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146974246
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

Yes, this is a serious case that we should discuss. The correct thing to do 
is probably to FAIL the connection itself, or possibly to have retry logic for 
creating the table. 

This is because in the `acquireUpgradeMutex()` method we check whether either 
the SYSMUTEX table or the SYS:MUTEX table exists. The only possible case where 
both of those tables can be missing is when a client is trying to migrate the 
table, which disables the old table and creates the new one. There is a brief 
period of time when neither of these tables exists. Hence we throw an 
`UpgradeInProgressException` in such a case.

We have no way to determine whether the table doesn't exist at all or whether 
the table is in migration.
Is there any other scenario that this can affect?


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146973563
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

Yup, fine by me. It seems to me to be the same as it was previously :) The 
mutex table update would provide exclusion for both same and different JVM 
cases.


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219444#comment-16219444
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146972984
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Added it in a new commit.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219445#comment-16219445
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146973010
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3086,12 +3103,18 @@ private void restoreFromSnapshot(String tableName, 
String snapshotName,
 }
 }
 
-void ensureSystemTablesUpgraded(ReadOnlyProps props)
+void ensureSystemTablesMigratedToSystemNamespace(ReadOnlyProps props)
 throws SQLException, IOException, IllegalArgumentException, 
InterruptedException {
 if (!SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM, 
props)) { return; }
+
+boolean acquiredMutexLock = false;
+byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
+
 HTableInterface metatable = null;
 try (HBaseAdmin admin = getAdmin()) {
-// Namespace-mapping is enabled at this point.
+ // SYSTEM namespace needs to be created via HBase API's 
because "CREATE SCHEMA" statement tries to write its metadata
+ // in SYSTEM:CATALOG table. Without SYSTEM namespace, 
SYSTEM:CATALOG table cannot be created.
--- End diff --

Thanks!


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219447#comment-16219447
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146973086
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2514,7 +2529,7 @@ private void createSysMutexTable(HBaseAdmin admin) 
throws IOException, SQLExcept
 return 
Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + 
"\\..*"));
 }
 
-private void createOtherSystemTables(PhoenixConnection metaConnection) 
throws SQLException {
+private void createOtherSystemTables(PhoenixConnection metaConnection, 
HBaseAdmin hBaseAdmin) throws SQLException {
--- End diff --

Added it in a new commit.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146973010
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3086,12 +3103,18 @@ private void restoreFromSnapshot(String tableName, 
String snapshotName,
 }
 }
 
-void ensureSystemTablesUpgraded(ReadOnlyProps props)
+void ensureSystemTablesMigratedToSystemNamespace(ReadOnlyProps props)
 throws SQLException, IOException, IllegalArgumentException, 
InterruptedException {
 if (!SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM, 
props)) { return; }
+
+boolean acquiredMutexLock = false;
+byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
+
 HTableInterface metatable = null;
 try (HBaseAdmin admin = getAdmin()) {
-// Namespace-mapping is enabled at this point.
+ // SYSTEM namespace needs to be created via HBase APIs 
because the "CREATE SCHEMA" statement tries to write its metadata
+ // in the SYSTEM:CATALOG table. Without the SYSTEM namespace, 
the SYSTEM:CATALOG table cannot be created.
--- End diff --

Thanks!
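
For reference, the namespace creation described in that comment boils down to a 
single HBase admin call. A minimal sketch, assuming the HBase 1.x client API and 
the surrounding class's `getAdmin()` helper (error handling omitted; a real 
implementation would also tolerate `NamespaceExistException`):

```java
import java.io.IOException;
import java.sql.SQLException;

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Create the SYSTEM namespace directly through HBase, since "CREATE SCHEMA"
// itself needs SYSTEM:CATALOG (and therefore the namespace) to exist first.
void ensureSystemNamespace() throws IOException, SQLException {
    try (HBaseAdmin admin = getAdmin()) {
        admin.createNamespace(NamespaceDescriptor.create("SYSTEM").build());
    }
}
```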


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146973086
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2514,7 +2529,7 @@ private void createSysMutexTable(HBaseAdmin admin) 
throws IOException, SQLExcept
 return 
Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + 
"\\..*"));
 }
 
-private void createOtherSystemTables(PhoenixConnection metaConnection) 
throws SQLException {
+private void createOtherSystemTables(PhoenixConnection metaConnection, 
HBaseAdmin hBaseAdmin) throws SQLException {
--- End diff --

Added it in a new commit.


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219443#comment-16219443
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146972829
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3195,6 +3255,18 @@ public boolean releaseUpgradeMutex(byte[] 
mutexRowKey) {
 return released;
 }
 
+private byte[] getSysMutexPhysicalTableNameBytes() throws IOException, 
SQLException {
+byte[] sysMutexPhysicalTableNameBytes = null;
+try(HBaseAdmin admin = getAdmin()) {
+
if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
--- End diff --

Added it in a new commit.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146972984
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Added it in a new commit.


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146972829
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3195,6 +3255,18 @@ public boolean releaseUpgradeMutex(byte[] 
mutexRowKey) {
 return released;
 }
 
+private byte[] getSysMutexPhysicalTableNameBytes() throws IOException, 
SQLException {
+byte[] sysMutexPhysicalTableNameBytes = null;
+try(HBaseAdmin admin = getAdmin()) {
+
if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
--- End diff --

Added it in a new commit.


---


[jira] [Commented] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization confi

2017-10-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219433#comment-16219433
 ] 

Samarth Jain commented on PHOENIX-4320:
---

Oops. Will fix it right away.

> Update website pages with information on phoenix.use.stats.parallelization 
> confi
> 
>
> Key: PHOENIX-4320
> URL: https://issues.apache.org/jira/browse/PHOENIX-4320
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4320.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219411#comment-16219411
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146968511
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

This SYSMUTEX table is in place to make sure that clients in separate JVMs 
don't carry out the same task at the same time. I think @twdsilva and 
@samarthjain discovered the bug where multiple clients tried to upgrade 
SYSCAT at the same time, and that is when the SYSMUTEX table was introduced. I 
feel that a similar race condition can be triggered over here, so a mutual 
exclusion is required, which is done via the SYSMUTEX table. 
Does this answer your question?

[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146968511
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

This SYSMUTEX table is in place to make sure that clients in separate JVMs 
don't carry out the same task at the same time. I think @twdsilva and 
@samarthjain discovered the bug where multiple clients tried to upgrade 
SYSCAT at the same time, and that is when the SYSMUTEX table was introduced. I 
feel that a similar race condition can be triggered over here, so a mutual 
exclusion is required, which is done via the SYSMUTEX table. 
Does this answer your question? 
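
For context, the claim itself is just an atomic `checkAndPut` on the mutex row. 
A rough sketch of the primitive (illustrative only; assuming the HBase 1.x 
`HTableInterface` API, with made-up family and qualifier names):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// With a null expected value, checkAndPut succeeds only if the cell does not
// exist yet, so exactly one client (even across JVMs) wins the race.
boolean tryClaimMutexRow(HTableInterface sysMutex, byte[] mutexRowKey) throws IOException {
    byte[] family = Bytes.toBytes("f");        // illustrative family name
    byte[] qualifier = Bytes.toBytes("lock");  // illustrative qualifier
    Put put = new Put(mutexRowKey);
    put.addColumn(family, qualifier, Bytes.toBytes(true));
    return sysMutex.checkAndPut(mutexRowKey, family, qualifier, null, put);
}
```

Releasing the lock would then be a `Delete` of the same cell.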


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219406#comment-16219406
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146967826
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
--- End diff --

The code will never reach the else part. If the mutex lock fails to get 
acquired, it will throw `UpgradeInProgressException` and the code will return, 
leaving the client to retry.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146967826
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
--- End diff --

The code will never reach the else part. If the mutex lock fails to get 
acquired, it will throw `UpgradeInProgressException` and the code will return, 
leaving the client to retry.
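
In other words, the call follows an acquire-or-throw shape rather than 
acquire-and-branch. A sketch of that shape (`tryClaimMutexRow` and the 
`sysMutexTable` field are hypothetical stand-ins for the checkAndPut-based 
claim, and the exception's constructor arguments are illustrative, not 
Phoenix's actual signature):

```java
private boolean acquireUpgradeMutex(long timestamp, byte[] mutexRowKey)
        throws IOException, SQLException {
    if (tryClaimMutexRow(sysMutexTable, mutexRowKey)) {  // hypothetical helper/field
        return true;  // the only value a caller can ever observe
    }
    // Losing the race surfaces as an exception, not as a false return,
    // which is why the else branch at the call site is unreachable.
    throw new UpgradeInProgressException("Another client is upgrading or migrating SYSTEM tables");
}
```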


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219401#comment-16219401
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146967462
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Will do.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146967462
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Will do.


---


[jira] [Commented] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization confi

2017-10-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219392#comment-16219392
 ] 

Sergey Soldatov commented on PHOENIX-4320:
--

[~samarthjain] It looks like you accidentally deleted the page about data 
types. Could you get it back? :)

> Update website pages with information on phoenix.use.stats.parallelization 
> confi
> 
>
> Key: PHOENIX-4320
> URL: https://issues.apache.org/jira/browse/PHOENIX-4320
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4320.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2017-10-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219370#comment-16219370
 ] 

Josh Elser commented on PHOENIX-4321:
-

I don't know of a reason not to use the HBase TableName class where we 
can, but I also wouldn't want to force a move to it immediately if we don't 
have to :)

> Replace deprecated HBaseAdmin with Admin
> 
>
> Key: PHOENIX-4321
> URL: https://issues.apache.org/jira/browse/PHOENIX-4321
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Fix For: 4.13.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2017-10-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219358#comment-16219358
 ] 

Sergey Soldatov commented on PHOENIX-4321:
--

One more note. Admin uses the TableName class instead of byte[] for table 
names. Is it worth refactoring all our interfaces, such as 
ConnectionQueryServices, to use TableName (we actually have our own TableName 
class as well, so we may also want to rename it), or can we localize the 
changes by using TableName.fromBytes() in the admin calls? Personally I would 
prefer to see TableName used everywhere, even if it requires more work to 
get it done. WDYT [~jamestaylor] [~elserj] 
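
To illustrate the localized option: the conversion would sit only at the Admin 
boundary, something like the sketch below (as far as I know the HBase factory 
is `TableName.valueOf(...)`; this is `org.apache.hadoop.hbase.TableName`, not 
our own TableName class):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Phoenix interfaces keep byte[] and convert only at the Admin call sites.
static boolean tableExists(Admin admin, byte[] physicalName) throws IOException {
    return admin.tableExists(TableName.valueOf(physicalName));
}
```

The trade-off is that every call site then re-runs the name parsing and 
validation, whereas threading TableName through the interfaces pays that cost 
once.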

> Replace deprecated HBaseAdmin with Admin
> 
>
> Key: PHOENIX-4321
> URL: https://issues.apache.org/jira/browse/PHOENIX-4321
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Fix For: 4.13.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2017-10-25 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4321:
-
Labels: HBase-2.0  (was: )

> Replace deprecated HBaseAdmin with Admin
> 
>
> Key: PHOENIX-4321
> URL: https://issues.apache.org/jira/browse/PHOENIX-4321
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Fix For: 4.13.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2017-10-25 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4321:


 Summary: Replace deprecated HBaseAdmin with Admin
 Key: PHOENIX-4321
 URL: https://issues.apache.org/jira/browse/PHOENIX-4321
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization confi

2017-10-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-4320.
---
Resolution: Fixed

> Update website pages with information on phoenix.use.stats.parallelization 
> confi
> 
>
> Key: PHOENIX-4320
> URL: https://issues.apache.org/jira/browse/PHOENIX-4320
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4320.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization confi

2017-10-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4320:
--
Attachment: PHOENIX-4320.patch

> Update website pages with information on phoenix.use.stats.parallelization 
> confi
> 
>
> Key: PHOENIX-4320
> URL: https://issues.apache.org/jira/browse/PHOENIX-4320
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4320.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4320) Update website pages with information on phoenix.use.stats.parallelization confi

2017-10-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4320:
-

 Summary: Update website pages with information on 
phoenix.use.stats.parallelization confi
 Key: PHOENIX-4320
 URL: https://issues.apache.org/jira/browse/PHOENIX-4320
 Project: Phoenix
  Issue Type: Task
Reporter: Samarth Jain
Assignee: Samarth Jain






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146928850
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

If there isn't any mutual exclusion happening at a higher level, we should 
definitely make sure this isn't happening concurrently.

What about clients in separate JVMs though? What's the "prior art" for the 
rest of the SYSTEM tables?


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219136#comment-16219136
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146928850
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

If there isn't any mutual exclusion happening at a higher level, we should 
definitely make sure this isn't happening concurrently.

What about clients in separate JVMs though? What's the "prior art" for the 
rest of the SYSTEM tables?


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> 

[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219132#comment-16219132
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146928295
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

According to my understanding and discussions with @twdsilva, I believe 
that multiple clients should not be allowed to execute this portion of code 
simultaneously. For example, a client is trying to upgrade the SYSCAT version 
and another client connects with the property 
`phoenix.schema.mapSystemTablesToNamespace` set to true. The latter client might 
disable SYSCAT and re-enable it as SYS:CAT; in such a case the operations from 
the former client might leave SYSCAT inconsistent.
@joshelser Let me know your thoughts about it.

[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146928345
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2514,7 +2529,7 @@ private void createSysMutexTable(HBaseAdmin admin) 
throws IOException, SQLExcept
 return 
Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + 
"\\..*"));
 }
 
-private void createOtherSystemTables(PhoenixConnection metaConnection) 
throws SQLException {
+private void createOtherSystemTables(PhoenixConnection metaConnection, 
HBaseAdmin hBaseAdmin) throws SQLException {
--- End diff --

Will do.


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146928295
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

According to my understanding and discussions with @twdsilva, I believe 
that multiple clients should not be allowed to execute this portion of code 
simultaneously. For example, a client is trying to upgrade the SYSCAT version 
and another client connects with the property 
`phoenix.schema.mapSystemTablesToNamespace` set to true. The latter client might 
disable SYSCAT and re-enable it as SYS:CAT; in such a case the operations from 
the former client might leave SYSCAT inconsistent.
@joshelser Let me know your thoughts about it.
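
The reason the race is dangerous is visible in what the migration roughly does 
under the hood: approximately a disable/snapshot/clone/drop sequence, during 
which the source table is offline. A simplified sketch (not the exact 
UpgradeUtil code; the snapshot name is made up):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// The table is unavailable between disableTable() and the clone going live,
// so a concurrent SYSCAT upgrade in another client could see a missing or
// disabled table; hence the SYSMUTEX lock around the whole sequence.
static void mapToNamespace(HBaseAdmin admin, String src, String dest) throws IOException {
    TableName srcName = TableName.valueOf(src);
    String snapshot = "MIGRATE_" + src.replace('.', '_');  // illustrative name
    admin.disableTable(srcName);
    admin.snapshot(snapshot, srcName);
    admin.cloneSnapshot(snapshot, TableName.valueOf(dest));
    admin.deleteSnapshot(snapshot);
    admin.deleteTable(srcName);
}
```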



---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219097#comment-16219097
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146924391
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
--- End diff --

> My assumption was that the intent was to drop any cached PTables if we 
> moved any system tables around. Given the above, it sounds like this is (now?) 
> unnecessary.

That's correct. We do need to remove it from the cache because once the 
migration is complete, we add a row in `SYSCAT` setting 
`IS_NAMESPACE_MAPPED` to true. Although it seems unnecessary, I don't really 
want to rely on the coprocessor hook as it may change at any time. An explicit 
`clearCache()` should not do any harm, I believe.

> Is that being done in a CP too? If so, would it make sense to do this 
> cache-clearing there (instead of client side)?

What is CP? I didn't quite get you.

[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146924391
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
--- End diff --

> My assumption was that the intent was to drop any cached PTables if we 
> moved any system tables around. Given the above, it sounds like this is (now?) 
> unnecessary.

That's correct. We do need to remove it from the cache because once the 
migration is complete, we add a row in `SYSCAT` setting 
`IS_NAMESPACE_MAPPED` to true. Although it seems unnecessary, I don't really 
want to rely on the coprocessor hook as it may change at any time. An explicit 
`clearCache()` should not do any harm, I believe.
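
So there are two separate invalidations in play; annotated, these are the same 
two calls as in the diff above:

```java
// Client side: evict the cached PTable so the next reference re-resolves
// SYSTEM.CATALOG against the now namespace-mapped metadata.
ConnectionQueryServicesImpl.this.removeTable(null,
        PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, null,
        MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);

// Server side: clear the region-server metadata cache so the PTable gets
// rebuilt with IS_NAMESPACE_MAPPED = true on the next lookup.
clearCache();
```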

> Is that being done in a CP too? If so, would it make sense to do this 
> cache-clearing there (instead of client side)?

What is CP? I didn't quite get you.


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219079#comment-16219079
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146921986
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
--- End diff --

Added this in the new commit.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146921986
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
--- End diff --

Added this in the new commit.
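
For readers skimming the thread, here is a condensed sketch of the locking discipline the diff above implements. The helper bodies are stubs and the class is illustrative, not the actual ConnectionQueryServicesImpl code; only the ordering (mutex first, SYSMUTEX before SYSCAT, cache clear last, release in finally) mirrors the patch.

{code:java}
import java.util.Arrays;
import java.util.List;

/** Illustrative outline of the migration flow discussed above; stubs only. */
public class MigrationFlowSketch {

    public void migrateSystemTablesToSystemNamespace() {
        boolean acquiredMutexLock = false;
        try {
            // 1. Take the SYSMUTEX lock first; a concurrent (possibly older)
            //    client migrating or upgrading SYSCAT must not be interrupted.
            //    On contention the real code throws UpgradeInProgressException.
            acquiredMutexLock = acquireUpgradeMutex();
            // 2. SYSTEM.MUTEX has no entry in SYSCAT, so it is moved
            //    separately, before the generic loop below.
            mapTableToNamespace("SYSTEM.MUTEX", "SYSTEM:MUTEX");
            // 3. SYSCAT next, then every remaining SYSTEM.* table.
            mapTableToNamespace("SYSTEM.CATALOG", "SYSTEM:CATALOG");
            for (String table : remainingSystemTables()) {
                mapTableToNamespace(table, table.replace('.', ':'));
            }
            // 4. Clear the server-side metadata cache so PTables are re-read
            //    with namespace mapping applied.
            clearCache();
        } finally {
            if (acquiredMutexLock) {
                releaseUpgradeMutex();
            }
        }
    }

    // --- stubs standing in for the helpers discussed in this review ---
    private boolean acquireUpgradeMutex() { return true; }
    private void releaseUpgradeMutex() {}
    private void mapTableToNamespace(String src, String dest) {}
    private List<String> remainingSystemTables() {
        return Arrays.asList("SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION");
    }
    private void clearCache() {}
}
{code}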


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread karanmehta93
Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146921855
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 ---
@@ -0,0 +1,399 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.phoenix.coprocessor.MetaDataProtocol;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+import static org.junit.Assert.*;
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
+
+private static final Set<String> PHOENIX_SYSTEM_TABLES = new 
HashSet<>(Arrays.asList(
+"SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", 
"SYSTEM.FUNCTION",
+"SYSTEM.MUTEX"));
+private static final Set<String> 
PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
+Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", 
"SYSTEM:STATS", "SYSTEM:FUNCTION",
+"SYSTEM:MUTEX"));
+private static final String SCHEMA_NAME = "MIGRATETEST";
+private static final String TABLE_NAME =
+SCHEMA_NAME + "." + 
MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();
+private static final int NUM_RECORDS = 5;
+
+private HBaseTestingUtility testUtil = null;
+private Set<String> hbaseTables;
+
+// Create Multiple users since Phoenix caches the connection per user
+// Migration or upgrade code will run every time for each user.
+final UserGroupInformation user1 =
+UserGroupInformation.createUserForTesting("user1", new 
String[0]);
+final UserGroupInformation user2 =
+UserGroupInformation.createUserForTesting("user2", new 
String[0]);
+final UserGroupInformation user3 =
+UserGroupInformation.createUserForTesting("user3", new 
String[0]);
+final UserGroupInformation user4 =
+UserGroupInformation.createUserForTesting("user4", new 
String[0]);
+
+
+@Before
+public final void doSetup() throws Exception {
+testUtil = new HBaseTestingUtility();
+Configuration conf = testUtil.getConfiguration();
+enableNamespacesOnServer(conf);
+testUtil.startMiniCluster(1);
+}
+
+@After
+public void tearDownMiniCluster() {
+try {
+if (testUtil != null) {
+testUtil.shutdownMiniCluster();
+testUtil = null;
+}
+} catch (Exception e) {
+// ignore
+}
+}
+
+// Tests that client can create and read tables on a fresh HBase 

[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16219078#comment-16219078
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user karanmehta93 commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146921855
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 ---
@@ -0,0 +1,399 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.phoenix.coprocessor.MetaDataProtocol;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+import static org.junit.Assert.*;
+
+@Category(NeedsOwnMiniClusterTest.class)
+public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
+
+private static final Set<String> PHOENIX_SYSTEM_TABLES = new 
HashSet<>(Arrays.asList(
+"SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", 
"SYSTEM.FUNCTION",
+"SYSTEM.MUTEX"));
+private static final Set<String> 
PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
+Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", 
"SYSTEM:STATS", "SYSTEM:FUNCTION",
+"SYSTEM:MUTEX"));
+private static final String SCHEMA_NAME = "MIGRATETEST";
+private static final String TABLE_NAME =
+SCHEMA_NAME + "." + 
MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();
+private static final int NUM_RECORDS = 5;
+
+private HBaseTestingUtility testUtil = null;
+private Set<String> hbaseTables;
+
+// Create Multiple users since Phoenix caches the connection per user
+// Migration or upgrade code will run every time for each user.
+final UserGroupInformation user1 =
+UserGroupInformation.createUserForTesting("user1", new 
String[0]);
+final UserGroupInformation user2 =
+UserGroupInformation.createUserForTesting("user2", new 
String[0]);
+final UserGroupInformation user3 =
+UserGroupInformation.createUserForTesting("user3", new 
String[0]);
+final UserGroupInformation user4 =
+UserGroupInformation.createUserForTesting("user4", new 
String[0]);
+
+
+@Before
+public final void doSetup() throws Exception {
+testUtil = new HBaseTestingUtility();
+Configuration conf = testUtil.getConfiguration();
+enableNamespacesOnServer(conf);
+testUtil.startMiniCluster(1);
+}
+
+@After
+public void tearDownMiniCluster() {
+try {
+if (testUtil != null) {
+

[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218971#comment-16218971
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146908767
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Change 0 to MIN_SYSTEM_TABLE_MIGRATION_TIMESTAMP


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread twdsilva
Github user twdsilva commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146908767
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
--- End diff --

Change 0 to MIN_SYSTEM_TABLE_MIGRATION_TIMESTAMP
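
A minimal illustration of the suggestion (the constant name is the one proposed here; its value and placement are assumptions):

{code:java}
/** Sketch: naming the magic 0 used when acquiring the migration mutex. */
class UpgradeMutexConstants {
    // 0 predates every real system-table timestamp, which is the point;
    // a named constant makes that intent visible at the call site:
    //   acquireUpgradeMutex(MIN_SYSTEM_TABLE_MIGRATION_TIMESTAMP, mutexRowKey);
    static final long MIN_SYSTEM_TABLE_MIGRATION_TIMESTAMP = 0L;
}
{code}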


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902970
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
--- End diff --

Log a debug message for the `else` case too? Easier than finding the lack 
of the above message when debugging..
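
Sketched, the suggestion is roughly the following (slf4j logger assumed, matching the logger.debug call in the diff):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MutexLockLoggingSketch {
    private static final Logger logger =
            LoggerFactory.getLogger(MutexLockLoggingSketch.class);

    void logMutexOutcome(boolean acquiredMutexLock) {
        if (acquiredMutexLock) {
            logger.debug("Acquired lock in SYSMUTEX table for migrating SYSTEM tables to SYSTEM namespace");
        } else {
            // An explicit negative is easier to grep for than the absence
            // of the positive message above.
            logger.debug("Did not acquire lock in SYSMUTEX table for migration");
        }
    }
}
{code}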


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146903705
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
--- End diff --

>  Also, we clear server side cache anytime a SYSTEM table is disabled, 
using a coprocessor hook.

My assumption was that the intent was to drop any cached PTables if we 
moved any system tables around. Given the above, it sounds like this is (now?) 
unnecessary.

> So I decided to simplify the logic and clear the cache every time. It's 
good to do that because the server side cache will go out of sync with the 
SYSCAT table once the migration happens.

Is that being done in a CP too? If so, would it make sense to do this 
cache-clearing there (instead of client side)?
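
For context, a coprocessor-side cache clear would look roughly like a master hook of this shape; purely illustrative (clearServerSideMetadataCache() is a stand-in, not a Phoenix API), using the HBase 1.x MasterObserver interface:

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

/** Hypothetical sketch: clear a metadata cache whenever a SYSTEM table is disabled. */
public class CacheClearingObserverSketch extends BaseMasterObserver {
    @Override
    public void postDisableTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
                                 TableName tableName) throws IOException {
        if ("SYSTEM".equals(tableName.getNamespaceAsString())
                || tableName.getNameAsString().startsWith("SYSTEM.")) {
            clearServerSideMetadataCache(); // stand-in for the real cache invalidation
        }
    }

    private void clearServerSideMetadataCache() {
        // ...invalidate cached PTables here...
    }
}
{code}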


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218949#comment-16218949
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146903705
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
--- End diff --

>  Also, we clear server side cache anytime a SYSTEM table is disabled, 
using a coprocessor hook.

My assumption was that the intent was to drop any cached PTables if we 
moved any system tables around. Given the above, it sounds like this is (now?) 
unnecessary.

> So I decided to simplify the logic and clear the cache every time. It's 
good to do that because the server side cache will go out of sync with the 
SYSCAT table once the migration happens.

Is that being done in a CP too? If so, would it make sense to do this cache-clearing there (instead of client side)?

[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218953#comment-16218953
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902673
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3086,12 +3103,18 @@ private void restoreFromSnapshot(String tableName, 
String snapshotName,
 }
 }
 
-void ensureSystemTablesUpgraded(ReadOnlyProps props)
+void ensureSystemTablesMigratedToSystemNamespace(ReadOnlyProps props)
 throws SQLException, IOException, IllegalArgumentException, 
InterruptedException {
 if (!SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM, 
props)) { return; }
+
+boolean acquiredMutexLock = false;
+byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
+
 HTableInterface metatable = null;
 try (HBaseAdmin admin = getAdmin()) {
-// Namespace-mapping is enabled at this point.
+ // SYSTEM namespace needs to be created via HBase API's 
because "CREATE SCHEMA" statement tries to write its metadata
+ // in SYSTEM:CATALOG table. Without SYSTEM namespace, 
SYSTEM:CATALOG table cannot be created.
--- End diff --

nice comments here and elsewhere in this method.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902673
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3086,12 +3103,18 @@ private void restoreFromSnapshot(String tableName, 
String snapshotName,
 }
 }
 
-void ensureSystemTablesUpgraded(ReadOnlyProps props)
+void ensureSystemTablesMigratedToSystemNamespace(ReadOnlyProps props)
 throws SQLException, IOException, IllegalArgumentException, 
InterruptedException {
 if (!SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM, 
props)) { return; }
+
+boolean acquiredMutexLock = false;
+byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
+
 HTableInterface metatable = null;
 try (HBaseAdmin admin = getAdmin()) {
-// Namespace-mapping is enabled at this point.
+ // SYSTEM namespace needs to be created via HBase API's 
because "CREATE SCHEMA" statement tries to write its metadata
+ // in SYSTEM:CATALOG table. Without SYSTEM namespace, 
SYSTEM:CATALOG table cannot be created.
--- End diff --

nice comments here and elsewhere in this method.


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218947#comment-16218947
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902373
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

What about the case where the table creation failed for a legitimate system 
reason (something an admin needs to correct)? We should at least have some 
warning to the user that we failed to create the table, right?


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902373
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2526,8 +2541,14 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection) throws SQ
 try {
 
metaConnection.createStatement().execute(QueryConstants.CREATE_FUNCTION_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
+// We catch TableExistsException in createSysMutexTable() and 
ignore it. Hence we will also ignore IOException here.
+// SYSTEM.MUTEX table should not be exposed to user. Hence it is 
directly created and used via HBase API.
+// Using 'CREATE TABLE' statement will add entries to 
SYSTEM.CATALOG table, which should not happen.
+try {
+createSysMutexTable(hBaseAdmin, 
ConnectionQueryServicesImpl.this.getProps());
+} catch (IOException ignore) {}
--- End diff --

What about the case where the table creation failed for a legitimate system 
reason (something an admin needs to correct)? We should at least have some 
warning to the user that we failed to create the table, right?
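
Sketched, the point is roughly this (the creator interface is a stand-in for the real createSysMutexTable call; slf4j logger assumed):

{code:java}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SysMutexCreationSketch {
    private static final Logger logger =
            LoggerFactory.getLogger(SysMutexCreationSketch.class);

    interface MutexTableCreator {
        void createSysMutexTable() throws IOException; // stand-in for the real call
    }

    void createBestEffort(MutexTableCreator creator) {
        try {
            creator.createSysMutexTable();
        } catch (IOException e) {
            // Still non-fatal, but now an admin can see that something
            // (permissions, missing namespace, quota) may need correcting.
            logger.warn("Unable to create SYSTEM.MUTEX table; continuing", e);
        }
    }
}
{code}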


---


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218952#comment-16218952
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146901856
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2514,7 +2529,7 @@ private void createSysMutexTable(HBaseAdmin admin) 
throws IOException, SQLExcept
 return 
Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + 
"\\..*"));
 }
 
-private void createOtherSystemTables(PhoenixConnection metaConnection) 
throws SQLException {
+private void createOtherSystemTables(PhoenixConnection metaConnection, 
HBaseAdmin hBaseAdmin) throws SQLException {
--- End diff --

nit: s/hBaseAdmin/hbaseAdmin/ camel-case typically isn't observed on 
"hbase" ;)


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218948#comment-16218948
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146902970
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,72 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
--- End diff --

Log a debug message for the `else` case too? Easier than finding the lack 
of the above message when debugging..


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218950#comment-16218950
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146904748
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3195,6 +3255,18 @@ public boolean releaseUpgradeMutex(byte[] 
mutexRowKey) {
 return released;
 }
 
+private byte[] getSysMutexPhysicalTableNameBytes() throws IOException, 
SQLException {
+byte[] sysMutexPhysicalTableNameBytes = null;
+try(HBaseAdmin admin = getAdmin()) {
+
if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
--- End diff --

I assume `PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES` is `byte[]`? 
Would be better to use the `TableName` variable to avoid deprecated API warning.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Karan Mehta
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.13.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, 
> PHOENIX-3757.003.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-10-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218951#comment-16218951
 ] 

ASF GitHub Bot commented on PHOENIX-3757:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146904346
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

I have no good explanation for why it was the way it was when I last looked 
at it. I can only assume I left it as it was (really, just extracted part of 
this one method into another). If you think it should also be marked 
synchronized, go for it.


> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> 

[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146904748
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3195,6 +3255,18 @@ public boolean releaseUpgradeMutex(byte[] 
mutexRowKey) {
 return released;
 }
 
+private byte[] getSysMutexPhysicalTableNameBytes() throws IOException, 
SQLException {
+byte[] sysMutexPhysicalTableNameBytes = null;
+try(HBaseAdmin admin = getAdmin()) {
+
if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
--- End diff --

I assume `PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES` is `byte[]`? 
Would be better to use the `TableName` variable to avoid deprecated API warning.
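
i.e., something like the following sketch; SYSTEM_MUTEX_HBASE_TABLE_NAME is the TableName-typed constant already used elsewhere in this diff, passed in here as a parameter:

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

class SysMutexExistsSketch {
    // Uses the TableName overload of tableExists, avoiding the deprecated
    // byte[] variant flagged above.
    static boolean sysMutexExists(HBaseAdmin admin, TableName sysMutexTableName)
            throws IOException {
        return admin.tableExists(sysMutexTableName);
    }
}
{code}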


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146904346
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -3101,49 +3124,68 @@ void ensureSystemTablesUpgraded(ReadOnlyProps props)
 // Regardless of the case 1 or 2, if the NS does not 
exist, we will error expectedly
 // below. If the NS does exist and is mapped, the below 
check will exit gracefully.
 }
-
+
 List<TableName> tableNames = getSystemTableNames(admin);
 // No tables exist matching "SYSTEM\..*", they are all already 
in "SYSTEM:.*"
 if (tableNames.size() == 0) { return; }
 // Try to move any remaining tables matching "SYSTEM\..*" into 
"SYSTEM:"
 if (tableNames.size() > 5) {
 logger.warn("Expected 5 system tables but found " + 
tableNames.size() + ":" + tableNames);
 }
+
+// Try acquiring a lock in SYSMUTEX table before migrating the 
tables since it involves disabling the table
+// If we cannot acquire lock, it means some old client is 
either migrating SYSCAT or trying to upgrade the
+// schema of SYSCAT table and hence it should not be 
interrupted
+acquiredMutexLock = acquireUpgradeMutex(0, mutexRowKey);
+if(acquiredMutexLock) {
+logger.debug("Acquired lock in SYSMUTEX table for 
migrating SYSTEM tables to SYSTEM namespace");
+}
+// We will not reach here if we fail to acquire the lock, 
since it throws UpgradeInProgressException
+
+// Handle the upgrade of SYSMUTEX table separately since it 
doesn't have any entries in SYSCAT
+String sysMutexSrcTableName = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME;
+String sysMutexDestTableName = 
SchemaUtil.getPhysicalName(sysMutexSrcTableName.getBytes(), 
props).getNameAsString();
+UpgradeUtil.mapTableToNamespace(admin, sysMutexSrcTableName, 
sysMutexDestTableName, PTableType.SYSTEM);
+
 byte[] mappedSystemTable = SchemaUtil
 
.getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, 
props).getName();
 metatable = getTable(mappedSystemTable);
 if 
(tableNames.contains(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME)) {
 if (!admin.tableExists(mappedSystemTable)) {
+// Actual migration of SYSCAT table
 UpgradeUtil.mapTableToNamespace(admin, metatable,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
props, null, PTableType.SYSTEM,
 null);
+// Invalidate the client-side metadataCache
 ConnectionQueryServicesImpl.this.removeTable(null,
 PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME, 
null,
 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
 
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
 }
-
tableNames.remove(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME);
 for (TableName table : tableNames) {
 UpgradeUtil.mapTableToNamespace(admin, metatable, 
table.getNameAsString(), props, null, PTableType.SYSTEM,
 null);
 ConnectionQueryServicesImpl.this.removeTable(null, 
table.getNameAsString(), null,
 MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_1_0);
 }
-if (!tableNames.isEmpty()) {
-clearCache();
-}
+
+// Clear the server-side metadataCache when all tables are 
migrated so that the new PTable can be loaded with NS mapping
+clearCache();
 } finally {
 if (metatable != null) {
 metatable.close();
 }
+if(acquiredMutexLock) {
--- End diff --

I have no good explanation for why it was the way it was when I last looked 
at it. I can only assume I left it as it was (really, just extracted part of 
this one method into another). If you think it should also be marked 
synchronized, go for it.


---


[GitHub] phoenix pull request #277: PHOENIX-3757 System mutex table not being created...

2017-10-25 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/277#discussion_r146901856
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2514,7 +2529,7 @@ private void createSysMutexTable(HBaseAdmin admin) 
throws IOException, SQLExcept
 return 
Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + 
"\\..*"));
 }
 
-private void createOtherSystemTables(PhoenixConnection metaConnection) 
throws SQLException {
+private void createOtherSystemTables(PhoenixConnection metaConnection, 
HBaseAdmin hBaseAdmin) throws SQLException {
--- End diff --

nit: s/hBaseAdmin/hbaseAdmin/ camel-case typically isn't observed on 
"hbase" ;)


---


[jira] [Commented] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218367#comment-16218367
 ] 

Jepson commented on PHOENIX-4319:
-

[https://issues.apache.org/jira/browse/PHOENIX-4247]
[https://issues.apache.org/jira/browse/PHOENIX-4041]
[https://issues.apache.org/jira/browse/PHOENIX-3563]


> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]
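
One way to confirm the buildup described above is ZooKeeper's four-letter "cons" command, which prints one line per client connection. A rough Java sketch (host and port are placeholders for your ensemble; on ZooKeeper 3.5+ the command must be whitelisted via 4lw.commands.whitelist):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/** Counts client connections reported by one ZooKeeper server. */
public class ZkConnCount {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("192.168.100.40", 2181);
             BufferedReader in = new BufferedReader(new InputStreamReader(
                     sock.getInputStream(), StandardCharsets.US_ASCII))) {
            OutputStream out = sock.getOutputStream();
            out.write("cons".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            int count = 0;
            String line;
            while ((line = in.readLine()) != null) {
                if (!line.trim().isEmpty()) {
                    count++; // "cons" emits one line per live client connection
                }
            }
            System.out.println("client connections: " + count);
        }
    }
}
{code}

Running it repeatedly while the Spark loop above executes should show the count climbing toward maxClientCnxns.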



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4290) Full table scan performed for DELETE with table having immutable indexes

2017-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16218321#comment-16218321
 ] 

Hadoop QA commented on PHOENIX-4290:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12893900/PHOENIX-4290_wip2.patch
  against master branch at commit fe13b257e5dfe29581b1c3265d79596f194954cd.
  ATTACHMENT ID: 12893900

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+PreparedStatement psDelete = con.prepareStatement("DELETE FROM 
" + tableName + " WHERE (HOST, DOMAIN, FEATURE, \"DATE\") = (?,?,?,?)");
+rs = con.createStatement().executeQuery("SELECT /*+ NO_INDEX */ 
count(*) FROM " + tableName);
+private static MutationState deleteRows(StatementContext context, 
QueryPlan dataPlan, List<TableRef> indexTableRefs, ResultIterator iterator,
+public ImmutableBytesWritable getLatestValue(ColumnReference 
ref, long ts) throws IOException {
+valuePtr.set(cell.getValueArray(), cell.getValueOffset(), 
cell.getValueLength());
+// Create IndexMaintainer based on projected table 
(i.e. SELECT expressions) so that client-side
+IndexMaintainer maintainer = 
IndexMaintainer.create(projectedTable, indexTableRefs.get(i).getTable(), 
connection);
+indexPtr.set(maintainer.buildRowKey(getter, indexPtr, 
null, null, HConstants.LATEST_TIMESTAMP));
+MutationState state = deleteRows(ctx, dataQueryPlan, 
indexTableRefs, iterator, projector, sourceTableRef);
+List<PTable> nonDisabledIndexes = 
Lists.newArrayListWithExpectedSize(table.getIndexes().size());

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexExtendedIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.GlobalImmutableNonTxIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.join.SubqueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ImmutableIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DeleteIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.tx.TxCheckpointIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.IndexMaintenanceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1570//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1570//console

This message is automatically generated.

> Full table scan performed for DELETE with table having immutable indexes
> 
>
> Key: PHOENIX-4290
> URL: https://issues.apache.org/jira/browse/PHOENIX-4290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4290_wip1.patch, PHOENIX-4290_wip2.patch
>
>
> If a DELETE command is issued with a partial match for the leading part of 
> the primary key, instead of using the data table, when the table has 
> immutable indexes, a full scan will occur against the index.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

Zookeeper connections:
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
|and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> Zookeeper connections:
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

*Zookeeper connections:*
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

Zookeeper connections:
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
[https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).

!https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest1")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"DW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:java}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !zookeeper 
> connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jepson updated PHOENIX-4319:

Description: 
*Code:*
{code:java}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]


  was:
*Code:*
{code:scala}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = 
> "192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest3")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "JYDW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', '-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to ZooKeeper is not getting closed, which causes the maximum 
> number of client connections from a single host to be reached (we have 
> maxClientCnxns set to 500 in the ZooKeeper config).
> !zookeeper 
> connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4319) Zookeeper connection should be closed immediately

2017-10-25 Thread Jepson (JIRA)
Jepson created PHOENIX-4319:
---

 Summary: Zookeeper connection should be closed immediately
 Key: PHOENIX-4319
 URL: https://issues.apache.org/jira/browse/PHOENIX-4319
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
 Environment: phoenix4.10 hbase1.2.0
Reporter: Jepson


*Code:*
{code:scala}
val zkUrl = 
"192.168.17.37,192.168.17.38,192.168.17.40,192.168.17.41,192.168.17.42:2181"
val configuration = new Configuration()
configuration.set("hbase.zookeeper.quorum",zkUrl)

val spark = SparkSession
  .builder()
  .appName("SparkPhoenixTest3")
  .master("local[2]")
  .getOrCreate()


  for( a <- 1 to 100){
  val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
"JYDW.wms_do",
Array("WAREHOUSE_NO", "DO_NO"),
predicate = Some(
  """
|MOD_TIME >= TO_DATE('begin_day', '-MM-dd')
|and MOD_TIME < TO_DATE('end_day', '-MM-dd')
  """.stripMargin.replaceAll("begin_day", 
"2017-10-01").replaceAll("end_day", "2017-10-25")),
conf = configuration
  )
  wms_doDF.show(100)
}
{code}

*Description:*
The connection to ZooKeeper is not getting closed, which causes the maximum 
number of client connections from a single host to be reached (we have 
maxClientCnxns set to 500 in the ZooKeeper config).
!zookeeper 
connections|https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png!

*Reference:*
[https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4290) Full table scan performed for DELETE with table having immutable indexes

2017-10-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4290:
--
Attachment: PHOENIX-4290_wip2.patch

> Full table scan performed for DELETE with table having immutable indexes
> 
>
> Key: PHOENIX-4290
> URL: https://issues.apache.org/jira/browse/PHOENIX-4290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4290_wip1.patch, PHOENIX-4290_wip2.patch
>
>
> If a DELETE command is issued with a partial match on the leading part of 
> the primary key and the table has immutable indexes, a full scan will occur 
> against the index instead of a range scan over the data table.
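
A minimal sketch of the scenario (table, index, and column names are invented 
for illustration, as is the connection URL; per the report, the EXPLAIN plan 
is expected to show a full scan over the index rather than a range scan over 
the data table):

{code:scala}
import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
val stmt = conn.createStatement()
stmt.execute("CREATE TABLE t (k1 VARCHAR NOT NULL, k2 VARCHAR NOT NULL, " +
  "v VARCHAR, CONSTRAINT pk PRIMARY KEY (k1, k2)) IMMUTABLE_ROWS=true")
stmt.execute("CREATE INDEX idx ON t (v) INCLUDE (k2)")
// The DELETE matches only the leading PK column, so a range scan over the
// data table is expected; with the immutable index present, the plan is a
// full scan over IDX instead.
val rs = stmt.executeQuery("EXPLAIN DELETE FROM t WHERE k1 = 'a'")
while (rs.next()) println(rs.getString(1))
conn.close()
{code}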



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)