[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-07 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466265#comment-16466265
 ] 

Ethan Wang commented on PHOENIX-4724:
-

[~xucang]

Correct. But the distribution info is used at the moment when the split 
happens, if I'm not mistaken. 

> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4724.v1.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same # of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.
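As a rough sketch of the split-based equi-depth idea the description refers to (the class name EquiDepthStreamHistogram comes from the patch; everything inside this sketch is illustrative, not Phoenix's implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified streaming equi-depth histogram: a bucket that grows past a
// threshold is split at its midpoint, keeping bucket counts roughly equal
// without buffering the whole stream.
public class EquiDepthSketch {
    static class Bucket {
        double left, right; // value range covered by this bucket
        long count;         // items observed in this range
        Bucket(double l, double r) { left = l; right = r; }
    }

    final List<Bucket> buckets = new ArrayList<>();
    final long splitThreshold;

    EquiDepthSketch(double min, double max, long splitThreshold) {
        buckets.add(new Bucket(min, max));
        this.splitThreshold = splitThreshold;
    }

    // Dynamically updates the histogram for each new value.
    void addValue(double v) {
        for (Bucket b : buckets) {
            if (v >= b.left && v <= b.right) {
                if (++b.count > splitThreshold) split(b);
                return;
            }
        }
    }

    // Split an over-full bucket at its midpoint; each half inherits half the
    // count (an approximation -- the true distribution inside is unknown).
    private void split(Bucket b) {
        double mid = (b.left + b.right) / 2;
        Bucket rightHalf = new Bucket(mid, b.right);
        rightHalf.count = b.count / 2;
        b.count -= rightHalf.count;
        b.right = mid;
        buckets.add(buckets.indexOf(b) + 1, rightHalf);
    }

    long totalCount() {
        long t = 0;
        for (Bucket b : buckets) t += b.count;
        return t;
    }
}
```

A real implementation would also merge under-full adjacent buckets, as the cited paper does, to keep the total bucket count bounded.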



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-04 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16464280#comment-16464280
 ] 

Ethan Wang commented on PHOENIX-4724:
-

[~vincentpoon] I see. What happens when the data table gets mutated? What's the 
strategy today for the index table to sync up?

With that in mind: in the EquiDepthStreamHistogram class, besides addValue(), 
does this algorithm support removeValue() as well?



[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-03 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463382#comment-16463382
 ] 

Ethan Wang commented on PHOENIX-4724:
-

[~vincentpoon]

If I understand correctly, with this feature implemented, when you build the 
index table you will at the same time record some info into this histogram, so 
that at some point in the future you can conveniently get the distribution info 
of the index table. Correct?

So do you store a histogram object for each index table, like a shadow object 
kept somewhere offline? Also, will there ever be a case where you need to 
mutate or remove an index from an existing index table?

Cool idea!



[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-04-18 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443387#comment-16443387
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~jamestaylor] thanks!

(P.S. I pushed it to the branch before I saw your +1 because I was told that 
cherry-picks don't need an additional +1.)

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837-cherrypick-4.x-HBase-1.1.patch, PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-04-18 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443345#comment-16443345
 ] 

Ethan Wang commented on PHOENIX-3837:
-

The cherry-pick of PHOENIX-3837 has been applied to 4.x-HBase-1.1.



[jira] [Created] (PHOENIX-4696) Project icon at JIRA need to change: Phoenix jira icon is not accurate

2018-04-18 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4696:
---

 Summary: Project icon at JIRA need to change: Phoenix jira icon is 
not accurate
 Key: PHOENIX-4696
 URL: https://issues.apache.org/jira/browse/PHOENIX-4696
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ethan Wang


Just noticed that on the JIRA page the project icon is not accurate. Proposing 
changing it back to the Phoenix project icon.

[https://issues.apache.org/jira/projects/PHOENIX/issues/|https://issues.apache.org/jira/projects/PHOENIX/issues/PHOENIX-684?filter=allopenissues]

 





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-04-18 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443320#comment-16443320
 ] 

Ethan Wang commented on PHOENIX-3837:
-

Cherry-picked PHOENIX-3837 for 4.x-HBase-1.1.

The change has been summarized into the patch 
PHOENIX-3837-cherrypick-4.x-HBase-1.1.patch.

[~jamestaylor] I saw you migrated some parts of PHOENIX-3837 together with 
PHOENIX-4605. This patch merges all the rest to 4.x-HBase-1.1.

 



[jira] [Updated] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-04-18 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-3837:

Attachment: PHOENIX-3837-cherrypick-4.x-HBase-1.1.patch



[jira] [Commented] (PHOENIX-4605) Support running multiple transaction providers

2018-04-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436520#comment-16436520
 ] 

Ethan Wang commented on PHOENIX-4605:
-

I tried to apply the patch on master. I got errors for the two files to be 
deleted (presumably because, once the file itself is removed, there is no need 
to remove the contents of the file?).

 

error: patch failed: 
phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionTable.java:1

error: 
phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionTable.java:
 patch does not apply

error: patch failed: 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TephraTransactionTable.java:1

error: 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TephraTransactionTable.java:
 patch does not apply

> Support running multiple transaction providers
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4605_v1.patch, PHOENIX-4605_v2.patch, 
> PHOENIX-4605_v3.patch, PHOENIX-4605_wip1.patch, PHOENIX-4605_wip2.patch, 
> PHOENIX_4605_wip3.patch
>
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.





[jira] [Commented] (PHOENIX-4605) Support running multiple transaction providers

2018-04-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436521#comment-16436521
 ] 

Ethan Wang commented on PHOENIX-4605:
-

+1



[jira] [Commented] (PHOENIX-4605) Support running multiple transaction providers

2018-04-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436504#comment-16436504
 ] 

Ethan Wang commented on PHOENIX-4605:
-

I see, that makes sense.

I see that a lot of the code changes are due to the SYSTEM.CATALOG schema 
change.

LGTM

I will continue playing around with these changes offline.

Two questions:

1. Do we have an upgrade plan for people updating from an older version to this 
version, now that the SYSTEM.CATALOG schema has changed?

2. Does this feature involve input grammar changes? If so, let's add a JIRA for 
instructions if one doesn't exist yet.

 



[jira] [Commented] (PHOENIX-4605) Support running multiple transaction providers

2018-04-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436433#comment-16436433
 ] 

Ethan Wang commented on PHOENIX-4605:
-

Just catching up on the thread.

My understanding is that before this feature we only supported Tephra; we will 
now support both Tephra and Omid.

I saw you removed TephraTransactionTable.java and OmidTransactionTable.java. 
What were these two used for, and how are we replacing them?

For FlappingTransactionIT and TransactionIT, are we also going to include Omid?



[jira] [Commented] (PHOENIX-4605) Support running multiple transaction providers

2018-04-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16435831#comment-16435831
 ] 

Ethan Wang commented on PHOENIX-4605:
-

No problem. Looking at it. 



[jira] [Commented] (PHOENIX-4668) Remove unnecessary table descriptor modification for SPLIT_POLICY column

2018-04-04 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426272#comment-16426272
 ] 

Ethan Wang commented on PHOENIX-4668:
-

As HBASE-12570 has been resolved, +1 for this patch removing the workaround.

> Remove unnecessary table descriptor modification for SPLIT_POLICY column
> 
>
> Key: PHOENIX-4668
> URL: https://issues.apache.org/jira/browse/PHOENIX-4668
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4668.patch
>
>
> Inside _ConnectionQueryServicesImpl.ensureTableCreated()_, we modify the 
> table descriptor with
> newDesc.setValue(HTableDescriptor.SPLIT_POLICY, 
> MetaDataSplitPolicy.class.getName()), however we already have this mentioned 
> in the create statement DDL for system tables, so we can remove this.





[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-21 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408362#comment-16408362
 ] 

Ethan Wang commented on PHOENIX-4370:
-

I noticed that Jenkins reports SUCCESS for the Phoenix-4.x-HBase-1.3 
integration build, but FAILURE at PreCommit-PHOENIX. Is this a release blocker?

[~mujtabachohan] please advise

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Major
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently in phoenix client side, HBASE metrics are recorded and surfaced at 
> Per Connection level. PHOENIX-4370 allow it to be aggregated at global level, 
> i.e., aggregate across all connections within in one JVM so that user can 
> evaluate it as a stable metrics periodically.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
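A minimal sketch of the proposed aggregation, folding per-connection counts into JVM-global counters (the registry shown is illustrative; only the metric names come from the list above):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of surfacing per-connection HBase scan metrics at the global level:
// each connection folds its counts into JVM-wide LongAdders, which user code
// can then sample periodically as a stable metric across all connections.
public class GlobalScanMetrics {
    private static final ConcurrentMap<String, LongAdder> GLOBAL =
            new ConcurrentHashMap<>();

    // Called when a connection closes (or periodically) to fold its
    // per-connection counts into the global totals. LongAdder keeps the
    // hot path cheap under many concurrently reporting connections.
    static void mergeConnectionMetric(String name, long perConnectionCount) {
        GLOBAL.computeIfAbsent(name, k -> new LongAdder()).add(perConnectionCount);
    }

    static long globalValue(String name) {
        LongAdder a = GLOBAL.get(name);
        return a == null ? 0 : a.sum();
    }
}
```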





[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-14 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399501#comment-16399501
 ] 

Ethan Wang commented on PHOENIX-4370:
-

Patch applied on 

4.x-HBase-1.1
4.x-HBase-1.2
4.x-HBase-1.3
4.x-cdh5.11.2
5.x-HBase-2.0
master

[~tdsilva] [~jamestaylor]

 



[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-14 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399219#comment-16399219
 ] 

Ethan Wang commented on PHOENIX-4231:
-

[~apurtell]

Thanks!

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.
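A sketch of the kind of check the issue asks for, validating that a jar URI sits under the configured allowed directory for hdfs:// URIs as well (the class, method, and directory names are illustrative, not an actual Phoenix or HBase API):

```java
import java.net.URI;

// Sketch: restrict UDF jar loading to one configured directory, covering
// hdfs:// URIs the same way hbase.dynamic.jars.dir covers local paths.
public class UdfJarPolicy {
    static boolean isAllowed(String jarUri, String allowedDirUri) {
        URI jar = URI.create(jarUri).normalize();
        // Ensure the allowed dir ends with "/" so "/udfs2" can't pass as "/udfs".
        URI dir = URI.create(
                allowedDirUri.endsWith("/") ? allowedDirUri : allowedDirUri + "/")
                .normalize();
        // Scheme, authority, and path prefix must all match, so jars from
        // other clusters or "../" escapes are rejected.
        return jar.toString().startsWith(dir.toString());
    }
}
```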





[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2018-03-12 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396145#comment-16396145
 ] 

Ethan Wang commented on PHOENIX-4370:
-

[~tdsilva]



[jira] [Created] (PHOENIX-4648) Allow Add jar "path" support hdfs

2018-03-09 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4648:
---

 Summary: Allow Add jar "path" support hdfs
 Key: PHOENIX-4648
 URL: https://issues.apache.org/jira/browse/PHOENIX-4648
 Project: Phoenix
  Issue Type: New Feature
Reporter: Ethan Wang


Allow Add jar "path" support hdfs

Currently, add jar "path" only supports the local Linux file system. Ideally, 
we want to support:

add jar "hdfs://ethanwang.me:9000/dir/a.jar"

which would copy the a.jar to the local dynamic.jar.dir.
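A minimal sketch of that copy step, with java.nio standing in for the Hadoop FileSystem API a real implementation would use to read the hdfs:// source (the class and method names here are hypothetical; dynamic.jar.dir is the config property the description mentions):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of "fetch the jar, then register it locally": copy the source jar
// into the local dynamic-jars directory so existing local-loading code can
// pick it up unchanged.
public class AddJarSketch {
    static Path fetchToDynamicJarDir(Path sourceJar, Path dynamicJarDir) {
        try {
            Files.createDirectories(dynamicJarDir);
            Path dest = dynamicJarDir.resolve(sourceJar.getFileName());
            // Replace any stale copy of the same jar already present.
            Files.copy(sourceJar, dest, StandardCopyOption.REPLACE_EXISTING);
            return dest;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo: create a fake jar in a temp dir and "add" it.
    static boolean demo() {
        try {
            Path tmp = Files.createTempDirectory("jars");
            Path src = Files.write(tmp.resolve("a.jar"), new byte[]{1, 2, 3});
            Path dest = fetchToDynamicJarDir(src, tmp.resolve("dynamic"));
            return Files.exists(dest) && Files.size(dest) == 3;
        } catch (IOException e) {
            return false;
        }
    }
}
```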





[jira] [Commented] (PHOENIX-4648) Allow Add jar "path" support hdfs

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393527#comment-16393527
 ] 

Ethan Wang commented on PHOENIX-4648:
-

[~ckulkarni]

[~chrajeshbab...@gmail.com]



[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393524#comment-16393524
 ] 

Ethan Wang commented on PHOENIX-4231:
-

[~ckulkarni] thanks. Done with PHOENIX-4231-v2.patch.



[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: PHOENIX-4231-v2.patch



[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393477#comment-16393477
 ] 

Ethan Wang commented on PHOENIX-4231:
-

Thanks [~apurtell]



[jira] [Comment Edited] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang edited comment on PHOENIX-4231 at 2/27/18 11:10 PM:
---

Please review the patch. [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni] [~jamestaylor] [~apurtell]


was (Author: aertoria):
Please review the patch. [~apurtell] [~rajeshbabu] [~jamestaylor]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Comment Edited] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang edited comment on PHOENIX-4231 at 2/27/18 11:09 PM:
---

Please review the patch. [~apurtell] [~rajeshbabu] [~jamestaylor]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]


was (Author: aertoria):
Please review the patch. [~apurtell] [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16379456#comment-16379456
 ] 

Ethan Wang commented on PHOENIX-4231:
-

Please review the patch. [~apurtell] [~rajeshbabu]
{quote}Either way, we want UDF loading to be restricted to one place only.
{quote}
This patch basically does exactly this.

FYI [~ckulkarni]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: PHOENIX-4231.patch

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: pom.xml

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-27 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: (was: pom.xml)

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-1890) Provide queries for adding/deleting jars to/from common place in hdfs which is used by dynamic class loader

2018-02-26 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377823#comment-16377823
 ] 

Ethan Wang commented on PHOENIX-1890:
-

[~rajeshbabu]

I see. And it does not support something like:

add jars hdfs://localhost:9000/dir/a.jar

correct?

> Provide queries for adding/deleting jars to/from common place in hdfs which 
> is used by dynamic class loader
> ---
>
> Key: PHOENIX-1890
> URL: https://issues.apache.org/jira/browse/PHOENIX-1890
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.5.0
>
> Attachments: PHOENIX-1890.patch
>
>






[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-26 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377520#comment-16377520
 ] 

Ethan Wang commented on PHOENIX-4231:
-

To summarize what we are going to work on in this JIRA: as of today, "add jars 
'XX'" allows loading jars from any HDFS network location into the local 
hbase.dynamic.jars.dir. So we are now going to add a feature on top of 
PHOENIX-1890 so that:

when a property, say "loading.from.network.allowed", is configured as false, 
the "add jars" command is restricted to loading from the local file system or 
local HDFS only.

Is the plan above accurate?

[~rajeshbabu]

[~apurtell]

Once this feature is done, users will do the following to create a UDF:

Add jars hdfs://localhost/udf.jar

CREATE FUNCTION func(VARCHAR) returns VARCHAR as 'com.example.test.udfFunc' 
using jar 'configured/hbase/dynamic/jars/dir/udf2.jar'
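The restriction sketched above could be enforced with a small policy check before a jar is handed to the dynamic class loader. This is an illustrative sketch only: the "loading.from.network.allowed" flag is the name proposed in this thread, and the class and method names are not Phoenix's actual API.

```java
import java.net.URI;

// Illustrative policy: when network loading is disabled, only local
// filesystem paths and the default (local) HDFS filesystem are permitted.
public class JarLoadPolicy {
    private final boolean networkLoadingAllowed;

    public JarLoadPolicy(boolean networkLoadingAllowed) {
        this.networkLoadingAllowed = networkLoadingAllowed;
    }

    /** Returns true if a jar at the given location may be loaded. */
    public boolean isAllowed(String jarLocation) {
        URI uri = URI.create(jarLocation);
        String scheme = uri.getScheme();
        // No scheme or file:// means the local filesystem: always allowed.
        if (scheme == null || "file".equals(scheme)) {
            return true;
        }
        // hdfs:/// with no authority refers to the default (local) HDFS
        // filesystem, which the proposal keeps allowed.
        if ("hdfs".equals(scheme) && uri.getAuthority() == null) {
            return true;
        }
        // Anything with an explicit network authority (hdfs://host:port/...)
        // is only allowed when the flag permits it.
        return networkLoadingAllowed;
    }
}
```

With the flag set to false, `isAllowed("hdfs://localhost:9000/dir/a.jar")` would be rejected while `file:///...` paths still pass.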

 

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-1890) Provide queries for adding/deleting jars to/from common place in hdfs which is used by dynamic class loader

2018-02-23 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375137#comment-16375137
 ] 

Ethan Wang commented on PHOENIX-1890:
-

Great work!

PS: I couldn't find documentation for this one. Do we already have the grammar 
documented somewhere?

> Provide queries for adding/deleting jars to/from common place in hdfs which 
> is used by dynamic class loader
> ---
>
> Key: PHOENIX-1890
> URL: https://issues.apache.org/jira/browse/PHOENIX-1890
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.5.0
>
> Attachments: PHOENIX-1890.patch
>
>






[jira] [Comment Edited] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2018-02-23 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374975#comment-16374975
 ] 

Ethan Wang edited comment on PHOENIX-4361 at 2/23/18 9:19 PM:
--

[~rajeshbabu]

This JIRA also needs to be back-ported to 5.x-HBase-2.0. Just done. FYI


was (Author: aertoria):
 

[~rajeshbabu]

This JIRA also needs to be back-ported to 5.x-HBase-2.0. Just done. FYI

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair<HTableDescriptor, HTableDescriptor> 
> separateAndValidateProperties(PTable table, Map<String, List<Pair<String, 
> Object>>> properties, Set<String> colFamiliesForPColumnsToBeAdded, 
> List<Pair<byte[], Map<String, Object>>> families, Map<String, Object> 
> tableProps) 
> List<Pair<byte[], Map<String, Object>>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2018-02-23 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374975#comment-16374975
 ] 

Ethan Wang commented on PHOENIX-4361:
-

 

[~rajeshbabu]

This JIRA also needs to be back-ported to 5.x-HBase-2.0. Just done. FYI

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair<HTableDescriptor, HTableDescriptor> 
> separateAndValidateProperties(PTable table, Map<String, List<Pair<String, 
> Object>>> properties, Set<String> colFamiliesForPColumnsToBeAdded, 
> List<Pair<byte[], Map<String, Object>>> families, Map<String, Object> 
> tableProps) 
> List<Pair<byte[], Map<String, Object>>> families was never used.





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-02-23 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374974#comment-16374974
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~rajeshbabu]

Done. Just cherry-picked the patch to 5.x-HBase-2.0.

Note:

1. This patch (PHOENIX-3837) requires PHOENIX-4361, so I also cherry-picked 
PHOENIX-4361.

2. This patch also requires PHOENIX-2566, part of which had not been 
committed to this branch. I did that too.

3. HBase 2.0 replaces HTableDescriptor with TableDescriptor, so I updated my 
patch to use TableDescriptor.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-02-20 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370681#comment-16370681
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~rajeshbabu]

Is the 5.x-HBase-2.0 branch broken right now? I did a fresh pull and the mvn 
build failed at:

 
/Users/ethan.wang/Desktop/FRESH/phoenix/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java:[254,93]
 cannot find symbol

[ERROR]   symbol:   variable indexMutations

[ERROR]   location: class org.apache.phoenix.compile.DeleteCompiler

 

Also, I saw that PHOENIX-2566 has been back-ported, but not completely. I will 
try to back-port the rest of that patch.

"PHOENIX-2566 Support NOT NULL constraint for any column for immutable table 
(addendum 2)"

 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-02-20 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370533#comment-16370533
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~rajeshbabu]

Is 5.x-HBase-2.0 the branch you need caught up? I'll try to get that in 
today.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2018-02-16 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16367725#comment-16367725
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~jamestaylor] [~elserj]

Let me know if we want to merge this patch into the 5.x branch.

It wasn't merged at the time because that branch had been left behind by multiple 
other patches. After discussing with [~tdsilva], we applied it to master and 4.x.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354825#comment-16354825
 ] 

Ethan Wang commented on PHOENIX-4231:
-

{quote}The setting hbase.dynamic.jars.dir can be used to restrict locations for 
jar loading but is only applied to jars loaded from the local filesystem.
{quote}
So as of today, what does a user need to set hbase.dynamic.jars.dir to in order 
to restrict loading to jars from the local filesystem (not the network)?

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2018-01-05 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314149#comment-16314149
 ] 

Ethan Wang commented on PHOENIX-4489:
-

[~karanmehta93]
Patch +1.

+1 to the whitespace comment from [~jmahonin] too; please consider removing them.

> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4489.001.patch
>
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is not closed after use. Independently running MR 
> jobs should not have any concern; however, jobs that run through Phoenix-Spark 
> can cause leak issues if the connection is left unclosed (since those jobs run 
> as part of the same JVM). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default one. 
> It can be useful if a separate client is trying to run jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301956#comment-16301956
 ] 

Ethan Wang commented on PHOENIX-4488:
-

I see. Thanks!

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}





[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301922#comment-16301922
 ] 

Ethan Wang commented on PHOENIX-4488:
-

[~jamestaylor]
Pretty straight forward! lgtm.

Q1: Is MetaDataEndPointIT the IT that also covers exceededIndexQuota?
Q2: nit: patch line 95 (code line 501) has trailing whitespace.

Thanks

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}
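The fix pattern under review here is to read the flag once at coprocessor initialization and reuse the cached field, instead of hitting the Configuration on every call. A minimal sketch, using java.util.Properties as a stand-in for the Hadoop Configuration; the property key and class name are illustrative, not the patch's actual code.

```java
import java.util.Properties;

// Illustrative sketch: read a config flag once at initialization and cache
// it, rather than re-reading it on every (hot) request path.
public class MetaDataConfigCache {
    private final boolean blockWriteRebuildIndex;

    // Corresponds to the one-time start()/initialization step.
    public MetaDataConfigCache(Properties conf) {
        this.blockWriteRebuildIndex = Boolean.parseBoolean(
            conf.getProperty("phoenix.index.failure.block.write", "false"));
    }

    // Hot path: no Configuration lookup, just the cached field.
    public boolean isBlockWriteRebuildIndex() {
        return blockWriteRebuildIndex;
    }
}
```

The trade-off is that a configuration change after startup is not picked up until the coprocessor is reinitialized, which is acceptable for flags like this one.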





[jira] [Updated] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-12-17 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4370:

Attachment: PHOENIX-4370-v1.patch

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently on the Phoenix client side, HBase metrics are recorded and surfaced at 
> the per-connection level. PHOENIX-4370 allows them to be aggregated at the global 
> level, i.e., aggregated across all connections within one JVM, so that users can 
> evaluate them as stable metrics periodically.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
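The per-connection-to-global aggregation described above can be sketched as below, assuming each connection folds its counters into JVM-wide accumulators when it closes (or when a scan completes). The class and method names are illustrative, not the patch's actual API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: JVM-global accumulators keyed by metric name,
// fed from per-connection counts so the totals survive connection churn.
public class GlobalScanMetrics {
    private static final ConcurrentHashMap<String, LongAdder> GLOBAL =
        new ConcurrentHashMap<>();

    // Fold one connection's count for a metric into the global view.
    public static void merge(String metricName, long perConnectionValue) {
        GLOBAL.computeIfAbsent(metricName, k -> new LongAdder())
              .add(perConnectionValue);
    }

    // Read the aggregate across all connections in this JVM.
    public static long get(String metricName) {
        LongAdder adder = GLOBAL.get(metricName);
        return adder == null ? 0L : adder.sum();
    }
}
```

LongAdder is used rather than AtomicLong because many connections may merge concurrently and the metrics are read far less often than they are written.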





[jira] [Commented] (PHOENIX-4442) Rename system catalog INDEX_STATE to STATE

2017-12-07 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282766#comment-16282766
 ] 

Ethan Wang commented on PHOENIX-4442:
-

[~vincentpoon] I see. So is this replication process like a backup replication 
(master-master replication)? HBase itself does not replicate unless it's at the 
HDFS level.

> Rename system catalog INDEX_STATE to STATE
> --
>
> Key: PHOENIX-4442
> URL: https://issues.apache.org/jira/browse/PHOENIX-4442
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Ethan Wang
>
> Rename system catalog INDEX_STATE to STATE so that this column can be used 
> for table as well





[jira] [Commented] (PHOENIX-4442) Rename system catalog INDEX_STATE to STATE

2017-12-07 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282690#comment-16282690
 ] 

Ethan Wang commented on PHOENIX-4442:
-

[~vincentpoon] I see. 
bq. That way queued replication edits could continue to drain at the HBase 
level on the destination, after which the table could be dropped.

Regarding that, I think we could also use HBase's pre-disable-table hook. Actually, 
if I understand correctly, even if you immediately drop a table from HBase, it still 
has to go through this draining process, which slowly replays the 
remaining WAL (since it was already ack-ed) and flushes it. Is that already the case?

> Rename system catalog INDEX_STATE to STATE
> --
>
> Key: PHOENIX-4442
> URL: https://issues.apache.org/jira/browse/PHOENIX-4442
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Ethan Wang
>
> Rename system catalog INDEX_STATE to STATE so that this column can be used 
> for table as well





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-12-07 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282440#comment-16282440
 ] 

Ethan Wang commented on PHOENIX-3837:
-

Patch has been committed to 
4.x-0.98 (PHOENIX-3837.patch)
master (PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch)

[~tdsilva]

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Comment Edited] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-12-07 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1628#comment-1628
 ] 

Ethan Wang edited comment on PHOENIX-3837 at 12/7/17 8:02 PM:
--

PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch

Fetched and merged the latest changes so that it is applicable to the master branch.


was (Author: aertoria):
PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch

Fetched and merged the latest changes so that it is applicable to the 5.x-2.0 and 
master branches. 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Created] (PHOENIX-4443) Documentation for Alter property on index feature

2017-12-07 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4443:
---

 Summary: Documentation for Alter property on index feature
 Key: PHOENIX-4443
 URL: https://issues.apache.org/jira/browse/PHOENIX-4443
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ethan Wang
Assignee: Ethan Wang


Documentation for Alter property on index feature





[jira] [Created] (PHOENIX-4442) Rename system catalog INDEX_STATE to STATE

2017-12-07 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4442:
---

 Summary: Rename system catalog INDEX_STATE to STATE
 Key: PHOENIX-4442
 URL: https://issues.apache.org/jira/browse/PHOENIX-4442
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ethan Wang


Rename system catalog INDEX_STATE to STATE so that this column can be used for 
table as well





[jira] [Updated] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-12-07 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-3837:

Attachment: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch

PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch

Fetched and merged the latest changes so that it is applicable to the 5.x-2.0 and 
master branches. 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-4406) Add option to disable tables when dropped

2017-12-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280626#comment-16280626
 ] 

Ethan Wang commented on PHOENIX-4406:
-

+1 on reusing INDEX_STATE. Isn't INDEX_STATE currently used for the 
build/building/disabled/active states of indexes? We can add a similar set of 
states for tables/views. 

> Add option to disable tables when dropped
> -
>
> Key: PHOENIX-4406
> URL: https://issues.apache.org/jira/browse/PHOENIX-4406
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Priority: Critical
>
> Phoenix client applications can drop tables when HBase replication is 
> actively shipping edits.
> Add an option to disable the table in Phoenix's metadata when a DROP TABLE is 
> issued.
> This will allow the HBase table to be dropped by admin actions when it's safe 
> to do so.





[jira] [Assigned] (PHOENIX-4406) Add option to disable tables when dropped

2017-12-06 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang reassigned PHOENIX-4406:
---

Assignee: Ethan Wang

> Add option to disable tables when dropped
> -
>
> Key: PHOENIX-4406
> URL: https://issues.apache.org/jira/browse/PHOENIX-4406
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Ethan Wang
>Priority: Critical
>
> Phoenix client applications can drop tables when HBase replication is 
> actively shipping edits.
> Add an option to disable the table in Phoenix's metadata when a DROP TABLE is 
> issued.
> This will allow the HBase table to be dropped by admin actions when it's safe 
> to do so.





[jira] [Updated] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-12-01 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4370:

Description: 
Surface hbase metrics from perconnection to global metrics

Currently, on the Phoenix client side, HBase metrics are recorded and surfaced at 
the per-connection level. PHOENIX-4370 allows them to be aggregated at the global 
level, i.e., aggregated across all connections within one JVM, so that users can 
evaluate them periodically as stable metrics.

COUNT_RPC_CALLS("rp", "Number of RPC calls"),
COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential next 
calls"),
COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
NotServingRegionException caught"),
COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result objects 
from region servers"),
COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
from remote region servers"),
COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
COUNT_ROWS_FILTERED("wf", "Number of rows filtered");

  was:Surface hbase metrics from perconnection to global metrics
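A minimal sketch of the per-connection-to-global rollup described in this issue, assuming each connection hands over a snapshot of its named counters; the class and method names below are illustrative assumptions, not the actual Phoenix implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: per-connection HBase scan metrics merged into
// JVM-wide counters. Names are illustrative, not Phoenix's real classes.
class GlobalScanMetrics {
    private static final Map<String, LongAdder> GLOBAL = new ConcurrentHashMap<>();

    // Merge one connection's metric snapshot into the global counters,
    // e.g. {"COUNT_RPC_CALLS": 12, "COUNT_ROWS_SCANNED": 7}.
    static void mergeConnectionMetrics(Map<String, Long> perConnection) {
        perConnection.forEach((name, value) ->
                GLOBAL.computeIfAbsent(name, k -> new LongAdder()).add(value));
    }

    // Stable, JVM-wide view of a metric, suitable for periodic polling.
    static long get(String name) {
        LongAdder a = GLOBAL.get(name);
        return a == null ? 0L : a.sum();
    }
}
```

A monitoring thread could then poll `GlobalScanMetrics.get(...)` on a fixed interval, independent of individual connection lifetimes.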


> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>
> Surface hbase metrics from perconnection to global metrics
> Currently, on the Phoenix client side, HBase metrics are recorded and surfaced at 
> the per-connection level. PHOENIX-4370 allows them to be aggregated at the global 
> level, i.e., aggregated across all connections within one JVM, so that users can 
> evaluate them periodically as stable metrics.
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");





[jira] [Comment Edited] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-11-29 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251972#comment-16251972
 ] 

Ethan Wang edited comment on PHOENIX-4370 at 11/29/17 11:09 PM:


[~alexaraujo] [~samarthjain]


was (Author: aertoria):
[~alexaraujo]

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>
> Surface hbase metrics from perconnection to global metrics





[jira] [Commented] (PHOENIX-4406) Add option to disable tables when dropped

2017-11-29 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271418#comment-16271418
 ] 

Ethan Wang commented on PHOENIX-4406:
-

In HBase, disabling a table seems to be required before dropping it (so that it 
stops taking writes, cleans up the WAL, etc.). From Phoenix, I don't see where the 
disable request gets called; is it part of DropTableRequest? (CQSI.dropTable)

If I understand right, disabling/enabling a table may look something like:

*Disable Table T:*
flush T
disable T
disable related views
disable related indexes
(disable stats?)
mark T as disabled in metadata / update the metadata cache

*Enable Table T:*
enable T
enable related views
enable/rebuild related indexes
(enable stats?)
mark T as enabled in metadata / update the metadata cache

Thoughts?
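The ordering proposed above can be sketched as a toy state model; this is illustrative only (not Phoenix code), the catalog is just a name-to-flag map, and flush/stats handling is omitted:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the disable/enable ordering sketched above: the table's
// metadata flag flips first, then its dependent views/indexes follow.
class DisableEnableModel {
    final Map<String, Boolean> state = new LinkedHashMap<>();           // object name -> enabled?
    final Map<String, List<String>> dependents = new LinkedHashMap<>(); // table -> views/indexes

    void disableTable(String t) {
        state.put(t, false);                                    // disable T (a flush would precede this)
        for (String d : dependents.getOrDefault(t, List.of()))  // then disable related views/indexes
            state.put(d, false);
    }

    void enableTable(String t) {
        state.put(t, true);                                     // enable T
        for (String d : dependents.getOrDefault(t, List.of()))  // then re-enable/rebuild dependents
            state.put(d, true);
    }
}
```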

> Add option to disable tables when dropped
> -
>
> Key: PHOENIX-4406
> URL: https://issues.apache.org/jira/browse/PHOENIX-4406
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Priority: Critical
>
> Phoenix client applications can drop tables when HBase replication is 
> actively shipping edits.
> Add an option to disable the table in Phoenix's metadata when a DROP TABLE is 
> issued.
> This will allow the HBase table to be dropped by admin actions when it's safe 
> to do so.





[jira] [Updated] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-3837:

Attachment: PHOENIX-3837.patch

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Updated] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-3837:

Attachment: (was: PHOENIX-3837.patch)

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Updated] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-3837:

Attachment: PHOENIX-3837.patch

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Comment Edited] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269552#comment-16269552
 ] 

Ethan Wang edited comment on PHOENIX-3837 at 11/28/17 10:04 PM:


[~tdsilva] [~jamestaylor] plz review.


was (Author: aertoria):
[~tdsilva]

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Comment Edited] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240769#comment-16240769
 ] 

Ethan Wang edited comment on PHOENIX-3837 at 11/28/17 10:02 PM:


1. So the syntax will be:
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE GUIDE_POSTS_WIDTH = 20
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE SET GUIDE_POSTS_WIDTH = 20

2. I'm also storing PTableType in AlterIndexStatement (i.e., PTableType.INDEX), 
as it is needed later when inserting into the catalog: inserting a new row with 
a new seqnum requires the table type.

3. I assume both Phoenix and HBase properties (e.g., TTL) will likely be 
allowed through this, correct?
[~giacomotaylor] 


was (Author: aertoria):
1. So the syntax will be:
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE GUIDE_POSTS_WIDTH = 20
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE SET GUIDE_POSTS_WIDTH = 20

2. I'm also storing PTableType in AlterIndexStatement (i.e., PTableType.INDEX), 
as it is needed later when inserting into the catalog: inserting a new row with 
a new seqnum requires the table type.

3. I assume both Phoenix and HBase properties (e.g., TTL) will likely be 
allowed through this, correct?
[~giacomotaylor] 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Comment Edited] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240769#comment-16240769
 ] 

Ethan Wang edited comment on PHOENIX-3837 at 11/28/17 9:31 PM:
---

1. So the syntax will be:
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE GUIDE_POSTS_WIDTH = 20
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE SET GUIDE_POSTS_WIDTH = 20

2. I'm also storing PTableType in AlterIndexStatement (i.e., PTableType.INDEX), 
as it is needed later when inserting into the catalog: inserting a new row with 
a new seqnum requires the table type.

3. I assume both Phoenix and HBase properties (e.g., TTL) will likely be 
allowed through this, correct?
[~giacomotaylor] 


was (Author: aertoria):
1. So the syntax will be:
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE GUIDE_POSTS_WIDTH = 20
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON SET GUIDE_POSTS_WIDTH = 20

2. I'm also storing PTableType in AlterIndexStatement (i.e., PTableType.INDEX), 
as it is needed later when inserting into the catalog: inserting a new row with 
a new seqnum requires the table type.

3. I assume both Phoenix and HBase properties (e.g., TTL) will likely be 
allowed through this, correct?
[~giacomotaylor] 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-17 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257462#comment-16257462
 ] 

Ethan Wang commented on PHOENIX-4361:
-

patch applied to:
master
4.x-0.98
5.x-2.0

Thanks [~ckulkarni]

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Resolved] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-17 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang resolved PHOENIX-4361.
-
Resolution: Fixed

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-17 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257365#comment-16257365
 ] 

Ethan Wang commented on PHOENIX-4361:
-

+1

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-16 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256080#comment-16256080
 ] 

Ethan Wang commented on PHOENIX-4361:
-

Oh I see. I thought it was just the formatting that got messed up. 

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-16 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255997#comment-16255997
 ] 

Ethan Wang commented on PHOENIX-4361:
-

Also, you may change this JIRA's status to "In Progress".

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-16 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255996#comment-16255996
 ] 

Ethan Wang commented on PHOENIX-4361:
-

[~ckulkarni] nit: there are some extra newlines; it would be great if you could remove them.

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-11-14 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251972#comment-16251972
 ] 

Ethan Wang commented on PHOENIX-4370:
-

[~alexaraujo]

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>
> Surface hbase metrics from perconnection to global metrics





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-10 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248082#comment-16248082
 ] 

Ethan Wang commented on PHOENIX-4361:
-

[~ckulkarni]

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Created] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-11-10 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4370:
---

 Summary: Surface hbase metrics from perconnection to global metrics
 Key: PHOENIX-4370
 URL: https://issues.apache.org/jira/browse/PHOENIX-4370
 Project: Phoenix
  Issue Type: Bug
Reporter: Ethan Wang
Assignee: Ethan Wang


Surface hbase metrics from perconnection to global metrics





[jira] [Assigned] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-08 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang reassigned PHOENIX-4361:
---

Assignee: Ethan Wang

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Commented] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-08 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245115#comment-16245115
 ] 

Ethan Wang commented on PHOENIX-4361:
-

[~tdsilva]

> Remove redundant argument in separateAndValidateProperties in CQSI
> --
>
> Key: PHOENIX-4361
> URL: https://issues.apache.org/jira/browse/PHOENIX-4361
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Priority: Minor
>
> Remove redundant argument in separateAndValidateProperties in CQSI
> private Pair 
> separateAndValidateProperties(PTable table, Map Object>>> properties, Set colFamiliesForPColumnsToBeAdded, 
> List>> families, Map 
> tableProps) 
> Map>> families was never used.





[jira] [Created] (PHOENIX-4361) Remove redundant argument in separateAndValidateProperties in CQSI

2017-11-08 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4361:
---

 Summary: Remove redundant argument in 
separateAndValidateProperties in CQSI
 Key: PHOENIX-4361
 URL: https://issues.apache.org/jira/browse/PHOENIX-4361
 Project: Phoenix
  Issue Type: Bug
Reporter: Ethan Wang
Priority: Minor


Remove redundant argument in separateAndValidateProperties in CQSI

private Pair 
separateAndValidateProperties(PTable table, Map>> properties, Set colFamiliesForPColumnsToBeAdded, 
List>> families, Map 
tableProps) 

Map>> families was never used.





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240769#comment-16240769
 ] 

Ethan Wang commented on PHOENIX-3837:
-

1. So the syntax will be:
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON ACTIVE GUIDE_POSTS_WIDTH = 20
bq. ALTER INDEX LOCAL_ADDRESS ON PERSON SET GUIDE_POSTS_WIDTH = 20

2. I'm also storing PTableType in AlterIndexStatement (i.e., PTableType.INDEX), 
as it is needed later when inserting into the catalog: inserting a new row with 
a new seqnum requires the table type.

3. I assume both Phoenix and HBase properties (e.g., TTL) will likely be 
allowed through this, correct?
[~giacomotaylor] 

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-4315) function Greatest/Least

2017-11-03 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237135#comment-16237135
 ] 

Ethan Wang commented on PHOENIX-4315:
-

[~giacomotaylor]

> function Greatest/Least
> ---
>
> Key: PHOENIX-4315
> URL: https://issues.apache.org/jira/browse/PHOENIX-4315
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Priority: Major
>
> Resolve as the greatest value among a collection of projections.
> e.g.,
> Select greatest(A, B) from table  
> Select greatest(1,2) 





[jira] [Assigned] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-01 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang reassigned PHOENIX-3837:
---

Assignee: Ethan Wang

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
>Priority: Major
> Fix For: 4.13.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Created] (PHOENIX-4345) Error message for incorrect index is not accurate

2017-11-01 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4345:
---

 Summary: Error message for incorrect index is not accurate
 Key: PHOENIX-4345
 URL: https://issues.apache.org/jira/browse/PHOENIX-4345
 Project: Phoenix
  Issue Type: Bug
Reporter: Ethan Wang
 Fix For: 4.13.0


The error message for an incorrect index name is not accurate: it shows "Table 
undefined" when it should say the index is undefined.

Table name: PERSON
Index name: LOCAL_ADDRESS


0: jdbc:phoenix:localhost:2181:/hbase> ALTER INDEX LOCAL_ADDRESSX ON PERSON 
rebuild;
Error: ERROR 1012 (42M03): Table undefined. tableName=LOCAL_ADDRESSX 
(state=42M03,code=1012)






[jira] [Comment Edited] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-01 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234837#comment-16234837
 ] 

Ethan Wang edited comment on PHOENIX-3837 at 11/1/17 10:11 PM:
---

[~gjacoby] In those cases, is it possible for the user to first update the 
GUIDE_POSTS_WIDTH on the data table, and then run ALTER INDEX IDX_T ON T REBUILD 
to refresh?


was (Author: aertoria):
[~gjacoby] Just wondering, in those cases, can user firstly update the 
GUIDE_POSTS_WIDTH on the data table, and then do ALTER INDEX IDX_T ON T REBUILD 
to refresh?

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Priority: Major
> Fix For: 4.13.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-01 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16234837#comment-16234837
 ] 

Ethan Wang commented on PHOENIX-3837:
-

[~gjacoby] Just wondering, in those cases, can the user first update the 
GUIDE_POSTS_WIDTH on the data table, and then run ALTER INDEX IDX_T ON T REBUILD 
to refresh?

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Priority: Major
> Fix For: 4.13.0
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}





[jira] [Commented] (PHOENIX-4328) Support clients having different "phoenix.schema.mapSystemTablesToNamespace" property

2017-10-30 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16226068#comment-16226068
 ] 

Ethan Wang commented on PHOENIX-4328:
-

bq. The first client with phoenix.schema.mapSystemTablesToNamespace true will 
acquire lock in SYSMUTEX and migrate the system tables. As soon as this 
happens, all the other clients will start failing.

So all other clients, whether they have "phoenix.schema.isNamespaceMappingEnabled" 
set to true or false, will fail while one client is upgrading from sys. to sys: 
?

> Support clients having different "phoenix.schema.mapSystemTablesToNamespace" 
> property
> -
>
> Key: PHOENIX-4328
> URL: https://issues.apache.org/jira/browse/PHOENIX-4328
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>
> Imagine a scenario when we enable namespaces for phoenix on the server side 
> and set the property {{phoenix.schema.isNamespaceMappingEnabled}} to true. A 
> bunch of clients are trying to connect to this cluster. All of these clients 
> have 
> {{phoenix.schema.isNamespaceMappingEnabled}} set to true; however, 
> for some of them {{phoenix.schema.mapSystemTablesToNamespace}} is set to 
> false and it is true for others. (A typical case for rolling upgrade.)
> The first client with {{phoenix.schema.mapSystemTablesToNamespace}} true will 
> acquire lock in SYSMUTEX and migrate the system tables. As soon as this 
> happens, all the other clients will start failing. 
> There are two scenarios here.
> 1. A new client trying to connect to server without this property set
> This will fail since the ConnectionQueryServicesImpl checks if SYSCAT is 
> namespace mapped or not, If there is a mismatch, it throws an exception, thus 
> the client doesn't get any connection.
> 2. Clients already connected to cluster but don't have this property set
> This will fail because every query calls the endpoint coprocessor on SYSCAT 
> to determine the PTable of the query table and the physical HBase table name 
> is resolved based on the properties. Thus, we try to call the method on 
> SYSCAT instead of SYS:CAT and it results in a TableNotFoundException.
> This JIRA is to discuss about the potential ways in which we can handle this 
> issue.
> Some ideas around this after discussing with [~twdsi...@gmail.com]:
> 1. Build retry logic around the code that works with SYSTEM tables 
> (coprocessor calls etc.) Try with SYSCAT and if it fails, try with SYS:CAT
> Cons: Difficult to maintain and code scattered all over. 
> 2. Use SchemaUtil.getPhyscialTableName method to return the table name that 
> actually exists. (Only for SYSTEM tables)
> Call admin.tableExists to determine if SYSCAT or SYS:CAT exists and return 
> that name. The client properties get ignored on this one. 
> Cons: Expensive call every time, since this method is always called several 
> times.
> [~jamestaylor] [~elserj] [~an...@apache.org] [~apurtell] 
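Idea 1 above (try the un-namespaced name and fall back to the namespace-mapped 
one) could be sketched roughly like this. The exception type and helper names 
below are stand-ins for this self-contained sketch, not actual Phoenix or HBase 
API:

```java
// Hypothetical sketch of idea 1: try the un-namespaced system table name
// first, and fall back to the namespace-mapped name on "table not found".
public class SystemTableRetry {
    // Stand-in for HBase's TableNotFoundException in this self-contained sketch.
    static class TableNotFoundException extends Exception {}

    interface TableCall<T> {
        T call(String tableName) throws TableNotFoundException;
    }

    static <T> T callOnSystemTable(TableCall<T> op) throws TableNotFoundException {
        try {
            return op.call("SYSTEM.CATALOG");   // un-mapped name first
        } catch (TableNotFoundException e) {
            return op.call("SYSTEM:CATALOG");   // fall back to namespace-mapped name
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a cluster where only the namespace-mapped table exists.
        String used = callOnSystemTable(name -> {
            if (!name.contains(":")) throw new TableNotFoundException();
            return name;
        });
        System.out.println(used); // prints "SYSTEM:CATALOG"
    }
}
```

As the description notes, the cost of this pattern is that the fallback logic 
ends up scattered across every call site that touches a SYSTEM table.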





[jira] [Commented] (PHOENIX-3999) Optimize inner joins as SKIP-SCAN-JOIN when possible

2017-10-23 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215840#comment-16215840
 ] 

Ethan Wang commented on PHOENIX-3999:
-

(Maybe irrelevant) Is a timeout enforced for the query as a whole? If we only 
care about the timeout of each RPC call for each parallel scan, wouldn't it be 
more efficient to leave it to the parallel scanner to decide how to chunk the 
scans?

> Optimize inner joins as SKIP-SCAN-JOIN when possible
> 
>
> Key: PHOENIX-3999
> URL: https://issues.apache.org/jira/browse/PHOENIX-3999
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>
> Semi joins on the leading part of the primary key end up doing batches of 
> point queries (as opposed to a broadcast hash join); inner joins, however, do 
> not.
> Here's a set of example schemas that executes a skip scan on the inner query:
> {code}
> CREATE TABLE COMPLETED_BATCHES (
> BATCH_SEQUENCE_NUM BIGINT NOT NULL,
> BATCH_ID   BIGINT NOT NULL,
> CONSTRAINT PK PRIMARY KEY
> (
> BATCH_SEQUENCE_NUM,
> BATCH_ID
> )
> );
> CREATE TABLE ITEMS (
>BATCH_ID BIGINT NOT NULL,
>ITEM_ID BIGINT NOT NULL,
>ITEM_TYPE BIGINT,
>ITEM_VALUE VARCHAR,
>CONSTRAINT PK PRIMARY KEY
>(
> BATCH_ID,
> ITEM_ID
>)
> );
> CREATE TABLE COMPLETED_ITEMS (
>ITEM_TYPE  BIGINT NOT NULL,
>BATCH_SEQUENCE_NUM BIGINT NOT NULL,
>ITEM_IDBIGINT NOT NULL,
>ITEM_VALUE VARCHAR,
>CONSTRAINT PK PRIMARY KEY
>(
>   ITEM_TYPE,
>   BATCH_SEQUENCE_NUM,  
>   ITEM_ID
>)
> );
> {code}
> The explain plan of these indicate that a dynamic filter will be performed 
> like this:
> {code}
> UPSERT SELECT
> CLIENT PARALLEL 1-WAY FULL SCAN OVER ITEMS
> SKIP-SCAN-JOIN TABLE 0
> CLIENT PARALLEL 1-WAY RANGE SCAN OVER COMPLETED_BATCHES [1] - [2]
> SERVER FILTER BY FIRST KEY ONLY
> SERVER AGGREGATE INTO DISTINCT ROWS BY [BATCH_ID]
> CLIENT MERGE SORT
> DYNAMIC SERVER FILTER BY I.BATCH_ID IN ($8.$9)
> {code}
> We should also be able to leverage this optimization when an inner join is 
> used such as this:
> {code}
> UPSERT INTO COMPLETED_ITEMS (ITEM_TYPE, BATCH_SEQUENCE_NUM, ITEM_ID, 
> ITEM_VALUE)
>SELECT i.ITEM_TYPE, b.BATCH_SEQUENCE_NUM, i.ITEM_ID, i.ITEM_VALUE   
>FROM  ITEMS i, COMPLETED_BATCHES b
>WHERE b.BATCH_ID = i.BATCH_ID AND  
>b.BATCH_SEQUENCE_NUM > 1000 AND b.BATCH_SEQUENCE_NUM < 2000;
> {code}
> A complete unit test looks like this:
> {code}
> @Test
> public void testNestedLoopJoin() throws Exception {
> try (Connection conn = DriverManager.getConnection(getUrl())) {
> String t1="COMPLETED_BATCHES";
> String ddl1 = "CREATE TABLE " + t1 + " (\n" + 
> "BATCH_SEQUENCE_NUM BIGINT NOT NULL,\n" + 
> "BATCH_ID   BIGINT NOT NULL,\n" + 
> "CONSTRAINT PK PRIMARY KEY\n" + 
> "(\n" + 
> "BATCH_SEQUENCE_NUM,\n" + 
> "BATCH_ID\n" + 
> ")\n" + 
> ")" + 
> "";
> conn.createStatement().execute(ddl1);
> 
> String t2="ITEMS";
> String ddl2 = "CREATE TABLE " + t2 + " (\n" + 
> "   BATCH_ID BIGINT NOT NULL,\n" + 
> "   ITEM_ID BIGINT NOT NULL,\n" + 
> "   ITEM_TYPE BIGINT,\n" + 
> "   ITEM_VALUE VARCHAR,\n" + 
> "   CONSTRAINT PK PRIMARY KEY\n" + 
> "   (\n" + 
> "BATCH_ID,\n" + 
> "ITEM_ID\n" + 
> "   )\n" + 
> ")";
> conn.createStatement().execute(ddl2);
> String t3="COMPLETED_ITEMS";
> String ddl3 = "CREATE TABLE " + t3 + "(\n" + 
> "   ITEM_TYPE  BIGINT NOT NULL,\n" + 
> "   BATCH_SEQUENCE_NUM BIGINT NOT NULL,\n" + 
> "   ITEM_IDBIGINT NOT NULL,\n" + 
> "   ITEM_VALUE VARCHAR,\n" + 
> "   CONSTRAINT PK PRIMARY KEY\n" + 
> "   (\n" + 
> "  ITEM_TYPE,\n" + 
> "  BATCH_SEQUENCE_NUM,  \n" + 
> "  ITEM_ID\n" + 
> "   )\n" + 
> ")";
> conn.createStatement().execute(ddl3);
> conn.createStatement().execute("UPSERT INTO 
> "+t1+"(BATCH_SEQUENCE_NUM, batch_id) VALUES (1,2)");
> conn.createStatement().execute("UPSERT INTO 
> 

[jira] [Assigned] (PHOENIX-4315) function Greatest/Least

2017-10-23 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang reassigned PHOENIX-4315:
---

Assignee: (was: Ethan Wang)

> function Greatest/Least
> ---
>
> Key: PHOENIX-4315
> URL: https://issues.apache.org/jira/browse/PHOENIX-4315
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>
> Resolve as the greatest value among a collection of projections.
> e.g.,
> Select greatest(A, B) from table  
> Select greatest(1,2) 





[jira] [Assigned] (PHOENIX-4315) function Greatest/Least

2017-10-23 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang reassigned PHOENIX-4315:
---

Assignee: Ethan Wang

> function Greatest/Least
> ---
>
> Key: PHOENIX-4315
> URL: https://issues.apache.org/jira/browse/PHOENIX-4315
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Assignee: Ethan Wang
>
> Resolve as the greatest value among a collection of projections.
> e.g.,
> Select greatest(A, B) from table  
> Select greatest(1,2) 





[jira] [Created] (PHOENIX-4315) function Greatest/Least

2017-10-23 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4315:
---

 Summary: function Greatest/Least
 Key: PHOENIX-4315
 URL: https://issues.apache.org/jira/browse/PHOENIX-4315
 Project: Phoenix
  Issue Type: New Feature
Reporter: Ethan Wang


Resolve as the greatest value among a collection of projections.

e.g.,
Select greatest(A, B) from table  
Select greatest(1,2) 
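A minimal sketch of one possible evaluation, assuming (as in several SQL 
dialects) that NULL arguments are skipped and that NULL is returned only when 
every argument is NULL; the ticket does not pin down NULL handling, and the 
class and method names here are illustrative, not Phoenix code:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of GREATEST evaluation: return the largest non-null
// argument, or null when every argument is null. LEAST would flip the comparison.
public class GreatestSketch {
    static <T extends Comparable<T>> T greatest(List<T> args) {
        return args.stream()
                .filter(Objects::nonNull)           // skip NULL arguments
                .max(Comparator.naturalOrder())     // largest remaining value
                .orElse(null);                      // all-NULL input yields NULL
    }

    public static void main(String[] args) {
        System.out.println(greatest(Arrays.asList(1, 2)));       // prints 2
        System.out.println(greatest(Arrays.asList(null, 7, 3))); // prints 7
    }
}
```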





[jira] [Comment Edited] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2017-10-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214228#comment-16214228
 ] 

Ethan Wang edited comment on PHOENIX-2903 at 10/22/17 8:16 AM:
---

Nvm the question from my last comments. Now I see: during BaseResultIterators, 
each scan will be sent over to the server side to have a scanner prepared. 
During this process, if a split happens, an NSRE will be caught and new scans 
will be prepared and tried. The process is retried recursively until all 
scanners come back OK (or retriedCount is reduced to 0).

After this point, if a split happens and one or two scanners get impacted, 
another NSRE will be caught during TableResultIterator; in that case it simply 
throws a StaleRegionBoundaryCacheException.

My understanding is that the first of these two processes is what this item is 
focusing on.

[~jamestaylor], maybe you have already tried to reproduce this NSRE for the 
first scenario. Basically, at GroupedAggregateRegionObserver, a preSplit and a 
postCompleteSplit hook are added, so that when a split starts or ends it can be 
synchronized with the aggregation process. That way we can reproduce this 
sequence:

 PrepareScanner -> Scanner.next() -> Split starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, the actual aggregation *will not start* until the first rs.next() 
gets called (unOrdered, in comparison, will have everything aggregated under 
the protection of the region lock). So after discussing with [~vincentpoon], a 
right test case should be to trigger a split after rs.next() gets called, but 
before the logic of next() gets executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen() {
    ...
    return new BaseRegionScanner(scanner) {
        private long rowCount = 0;
        private ImmutableBytesPtr currentKey = null;

        @Override
        public boolean next(List<Cell> results) throws IOException {
            permitToSplit.unlock(); // signal the pre-split hook can start splitting now
            splitFinishes.lock();   // wait till the split finishes, then continue the rest
            ...
{code}

Thoughts?


was (Author: aertoria):
Nvm the question from my last comments. Now I see, during BaseResultIterators, 
each scan will be send over to server side to have a scanner prepared. During 
this process, if split happens, NSRE will be caught and new scans will be 
prepared and tried. The process will be recursively tried until all scanner 
come back OK (or retriedCount reduced to 0).


After this point, if a split happens, and one or two scanner gets impacted, 
during TableResultIterator another NRSE will be caught. in this case it will 
simply throw out the StaleRegionBoundaryCacheException exception.  


My understand is that, the first process of above two is what this item is 
focusing on. 


Don't know [~jamestaylor] maybe you have already tried to reproduced this NSRE 
for the first scenario. Basically, at GroupedAggregateRegionObserver, a 
preSplit and a postCompleteSplit hook are added. So that when a splits starts 
or ends, it should be able to locked down with how aggregation process. So that 
we can reproduce in this sequence:
   

 PrepareScanner ->  Scanner.next()  -> Splits starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes


For Ordered, since the actual aggregation *will not start* until the first 
rs.next() gets called. (unOrdered, in comparison, will have everything 
aggregated, under the protection of regionLock). So after dicussed with 
[~vincentpoon], I'm thinking a right test case should be to trigger a split 
starts after rs.next() gets called, but before the logic of next() gets 
executed. Something like


GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen(){
.
return new BaseRegionScanner(scanner) {
private long rowCount = 0;
private ImmutableBytesPtr currentKey = null;

@Override
public boolean next(List results) throws IOException {
permitToSplit.unlock(); // signal pre-split hook can start 
splitting now
splitFinishes.lock();  //wait till split finishes, the continue 
the rest 
 .
{code}

Thoughts?

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, 

[jira] [Comment Edited] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2017-10-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214228#comment-16214228
 ] 

Ethan Wang edited comment on PHOENIX-2903 at 10/22/17 8:15 AM:
---

Nvm the question from my last comments. Now I see: during BaseResultIterators, 
each scan will be sent over to the server side to have a scanner prepared. 
During this process, if a split happens, an NSRE will be caught and new scans 
will be prepared and tried. The process is retried recursively until all 
scanners come back OK (or retriedCount is reduced to 0).

After this point, if a split happens and one or two scanners get impacted, 
another NSRE will be caught during TableResultIterator; in that case it simply 
throws a StaleRegionBoundaryCacheException.

My understanding is that the first of these two processes is what this item is 
focusing on.

[~jamestaylor], maybe you have already tried to reproduce this NSRE for the 
first scenario. Basically, at GroupedAggregateRegionObserver, a preSplit and a 
postCompleteSplit hook are added, so that when a split starts or ends it can be 
synchronized with the aggregation process. That way we can reproduce this 
sequence:

 PrepareScanner -> Scanner.next() -> Split starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, the actual aggregation *will not start* until the first rs.next() 
gets called (unOrdered, in comparison, will have everything aggregated under 
the protection of the region lock). So after discussing with [~vincentpoon], 
I'm thinking a right test case should be to trigger a split after rs.next() 
gets called, but before the logic of next() gets executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen() {
    ...
    return new BaseRegionScanner(scanner) {
        private long rowCount = 0;
        private ImmutableBytesPtr currentKey = null;

        @Override
        public boolean next(List<Cell> results) throws IOException {
            permitToSplit.unlock(); // signal the pre-split hook can start splitting now
            splitFinishes.lock();   // wait till the split finishes, then continue the rest
            ...
{code}

Thoughts?


was (Author: aertoria):
Nvm the question from my last comments. Now I see, during BaseResultIterators, 
each scan will be send over to server side to have a scanner prepared. During 
this process, if split happens, NSRE will be caught and new scans will be 
prepared and tried. The process will be recursively tried until all scanner 
come back OK (or retriedCount reduced to 0).

After this point, if a split happens, and one or two scanner gets impacted, 
during TableResultIterator another NRSE will be caught. in this case it will 
simply throw out the StaleRegionBoundaryCacheException exception.  

My understand is that, the first process of above two is what this item is 
focusing on. 

Don't know [~jamestaylor] if you have already tried to reproduced this NSRE for 
first scenario. 
So at GroupedAggregateRegionObserver, a preSplit and a postCompleteSplit hook 
are added. So that when a splits starts or ends, it should be able to locked 
down with how aggregation process. So that we can reproduce in this sequence:
   
 PrepareScanner ->  Scanner.next()  -> Splits starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, since the actual aggregation *will not start* until the first 
rs.next() gets called. (unOrdered, in comparison, will have everything 
aggregated, under the protection of regionLock). So after dicussed with 
[~vincentpoon], I'm thinking a right test case should be to trigger a split 
starts after rs.next() gets called, but before the logic of next() gets 
executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen(){
.
return new BaseRegionScanner(scanner) {
private long rowCount = 0;
private ImmutableBytesPtr currentKey = null;

@Override
public boolean next(List results) throws IOException {
permitToSplit.unlock(); // signal pre-split hook can start 
splitting now
splitFinishes.lock();  //wait till split finishes, the continue 
the rest 
 .
{code}

Thoughts?

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_v4_wip.patch, 

[jira] [Comment Edited] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2017-10-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214228#comment-16214228
 ] 

Ethan Wang edited comment on PHOENIX-2903 at 10/22/17 8:15 AM:
---

Nvm the question from my last comments. Now I see: during BaseResultIterators, 
each scan will be sent over to the server side to have a scanner prepared. 
During this process, if a split happens, an NSRE will be caught and new scans 
will be prepared and tried. The process is retried recursively until all 
scanners come back OK (or retriedCount is reduced to 0).

After this point, if a split happens and one or two scanners get impacted, 
another NSRE will be caught during TableResultIterator; in that case it simply 
throws a StaleRegionBoundaryCacheException.

My understanding is that the first of these two processes is what this item is 
focusing on.

[~jamestaylor], maybe you have already tried to reproduce this NSRE for the 
first scenario. Basically, at GroupedAggregateRegionObserver, a preSplit and a 
postCompleteSplit hook are added, so that when a split starts or ends it can be 
synchronized with the aggregation process. That way we can reproduce this 
sequence:

 PrepareScanner -> Scanner.next() -> Split starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, the actual aggregation *will not start* until the first rs.next() 
gets called (unOrdered, in comparison, will have everything aggregated under 
the protection of the region lock). So after discussing with [~vincentpoon], 
I'm thinking a right test case should be to trigger a split after rs.next() 
gets called, but before the logic of next() gets executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen() {
    ...
    return new BaseRegionScanner(scanner) {
        private long rowCount = 0;
        private ImmutableBytesPtr currentKey = null;

        @Override
        public boolean next(List<Cell> results) throws IOException {
            permitToSplit.unlock(); // signal the pre-split hook can start splitting now
            splitFinishes.lock();   // wait till the split finishes, then continue the rest
            ...
{code}

Thoughts?


was (Author: aertoria):
Nvm the question from my last comments. Now I see, during BaseResultIterators, 
each scan will be send over to server side to have a scanner prepared. During 
this process, if split happens, NSRE will be caught and new scans will be 
prepared and tried. The process will be recursively tried until all scanner 
come back OK (or retriedCount reduced to 0).

After this point, if a split happens, and one or two scanner gets impacted, 
during TableResultIterator another NRSE will be caught. in this case it will 
simply throw out the StaleRegionBoundaryCacheException exception.  

My understand is that, the first process of above two is what this item is 
focusing on. 

Don't know [~jamestaylor] maybe you have already tried to reproduced this NSRE 
for the first scenario. Basically, at GroupedAggregateRegionObserver, a 
preSplit and a postCompleteSplit hook are added. So that when a splits starts 
or ends, it should be able to locked down with how aggregation process. So that 
we can reproduce in this sequence:
   
 PrepareScanner ->  Scanner.next()  -> Splits starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, since the actual aggregation *will not start* until the first 
rs.next() gets called. (unOrdered, in comparison, will have everything 
aggregated, under the protection of regionLock). So after dicussed with 
[~vincentpoon], I'm thinking a right test case should be to trigger a split 
starts after rs.next() gets called, but before the logic of next() gets 
executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen(){
.
return new BaseRegionScanner(scanner) {
private long rowCount = 0;
private ImmutableBytesPtr currentKey = null;

@Override
public boolean next(List results) throws IOException {
permitToSplit.unlock(); // signal pre-split hook can start 
splitting now
splitFinishes.lock();  //wait till split finishes, the continue 
the rest 
 .
{code}

Thoughts?

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, 

[jira] [Commented] (PHOENIX-2903) Handle split during scan for row key ordered aggregations

2017-10-22 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214228#comment-16214228
 ] 

Ethan Wang commented on PHOENIX-2903:
-

Nvm the question from my last comments. Now I see: during BaseResultIterators, 
each scan will be sent over to the server side to have a scanner prepared. 
During this process, if a split happens, an NSRE will be caught and new scans 
will be prepared and tried. The process is retried recursively until all 
scanners come back OK (or retriedCount is reduced to 0).

After this point, if a split happens and one or two scanners get impacted, 
another NSRE will be caught during TableResultIterator; in that case it simply 
throws a StaleRegionBoundaryCacheException.

My understanding is that the first of these two processes is what this item is 
focusing on.

[~jamestaylor], I don't know if you have already tried to reproduce this NSRE 
for the first scenario. At GroupedAggregateRegionObserver, a preSplit and a 
postCompleteSplit hook are added, so that when a split starts or ends it can be 
synchronized with the aggregation process. That way we can reproduce this 
sequence:

 PrepareScanner -> Scanner.next() -> Split starts -> Split ends -> 
ServerSide Aggregation -> Aggregation finishes

For Ordered, the actual aggregation *will not start* until the first rs.next() 
gets called (unOrdered, in comparison, will have everything aggregated under 
the protection of the region lock). So after discussing with [~vincentpoon], 
I'm thinking a right test case should be to trigger a split after rs.next() 
gets called, but before the logic of next() gets executed. Something like

GroupedAggregateRegionObserver
{code:java}
RegionScanner doPostScannerOpen() {
    ...
    return new BaseRegionScanner(scanner) {
        private long rowCount = 0;
        private ImmutableBytesPtr currentKey = null;

        @Override
        public boolean next(List<Cell> results) throws IOException {
            permitToSplit.unlock(); // signal the pre-split hook can start splitting now
            splitFinishes.lock();   // wait till the split finishes, then continue the rest
            ...
{code}

Thoughts?
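The prepare-and-retry behavior described in the first paragraph (re-prepare 
scans on an NSRE until every scanner opens, bounded by a retry count) can be 
sketched generically. The class and method names below are illustrative stand-ins, 
not the actual BaseResultIterators code:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Illustrative sketch of the retry loop: re-prepare scanners whenever a
// region split invalidates them, giving up after a fixed number of retries.
public class ScannerRetrySketch {
    // Stand-in for HBase's NotServingRegionException (NSRE).
    static class NotServingRegionException extends RuntimeException {}

    static <T> T prepareWithRetry(Supplier<T> prepareScanners, int retries) {
        while (true) {
            try {
                return prepareScanners.get();       // open scanners on the server
            } catch (NotServingRegionException e) { // region moved or split
                if (--retries < 0) throw e;         // out of retries: rethrow
                // on a real cluster we would refresh region boundaries here
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger attempts = new AtomicInteger();
        // Simulate a split invalidating the first two attempts.
        String result = prepareWithRetry(() -> {
            if (attempts.incrementAndGet() < 3) throw new NotServingRegionException();
            return "scanners ready";
        }, 5);
        System.out.println(result + " after " + attempts.get() + " attempts");
    }
}
```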

> Handle split during scan for row key ordered aggregations
> -
>
> Key: PHOENIX-2903
> URL: https://issues.apache.org/jira/browse/PHOENIX-2903
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.13.0
>
> Attachments: PHOENIX-2903_v1.patch, PHOENIX-2903_v2.patch, 
> PHOENIX-2903_v3.patch, PHOENIX-2903_v4_wip.patch, PHOENIX-2903_v5_wip.patch, 
> PHOENIX-2903_wip.patch
>
>
> Currently a hole in our split detection code





[jira] [Commented] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-16 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206351#comment-16206351
 ] 

Ethan Wang commented on PHOENIX-4283:
-

Patch applied for 4.12-HBase-0.98, 4.x-HBase-0.98, and master.

Thanks [~tdsilva]!

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v3.patch, PHOENIX-4283.patch
>
>
> *Versions:*
> Phoenix 4.11.0
> HBase: 1.3.1
> (Amazon EMR: 5.8.0)
> *Steps to reproduce:*
> 1. From the `sqlline-thin.py` client set up the following table:
> {code:sql}
> CREATE TABLE test_table (
> a BIGINT NOT NULL, 
> c BIGINT NOT NULL
> CONSTRAINT PK PRIMARY KEY (a, c)
> );
> UPSERT INTO test_table(a,c) VALUES(444, 555);
> SELECT a FROM (SELECT a, c FROM test_table GROUP BY a, c) GROUP BY a, c;
> {code}
> *Expected Result:*
> {code:sql}
> +------+
> |  A   |
> +------+
> | 444  |
> +------+
> {code}
> *Actual Result:*
> {code:sql}
> +------+
> |  A   |
> +------+
> | 400  |
> +------+
> {code}
> *Comments:*
> Having the two Group By statements together seems to truncate the last 6 or 
> so digits of the final result. Removing the outer (or either) group by will 
> produce the correct result.
> Please fix the Group by statement to not truncate the outer result's value.
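Note that the outer query groups by the full key (a, c), so re-grouping rows that are already distinct on (a, c) is the identity operation and the stored value must come back unchanged. A minimal plain-Java sanity check of that expected semantics (illustrative only, not Phoenix internals):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: models the outer "GROUP BY a, c" over already-distinct
// (a, c) rows, which must be the identity on the stored values.
public class GroupByIdentitySketch {

    // Group rows (a, c) by their full key and project a; since the key is the
    // whole row, grouping changes nothing and every value survives intact.
    static List<Long> outerGroupBy(List<long[]> rows) {
        Map<List<Long>, Long> grouped = rows.stream().collect(
            Collectors.toMap(r -> List.of(r[0], r[1]), r -> r[0], (x, y) -> x));
        return grouped.values().stream().sorted().collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Mirrors the repro: UPSERT (444, 555), then group again by (a, c).
        System.out.println(outerGroupBy(List.of(new long[]{444L, 555L})));
        // prints "[444]" -- the bug instead returned 400
    }
}
```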





[jira] [Comment Edited] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205446#comment-16205446
 ] 

Ethan Wang edited comment on PHOENIX-4283 at 10/16/17 5:30 AM:
---

Thanks [~jamestaylor], I should've been more careful. 
I will apply the patch when CI finishes.


was (Author: aertoria):
Thanks [~jamestaylor], I should've been more careful. 

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v3.patch, PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: (was: PHOENIX-4283-v2.patch)

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v3.patch, PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: PHOENIX-4283-v3.patch

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v3.patch, PHOENIX-4283.patch
>
>


[jira] [Commented] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205446#comment-16205446
 ] 

Ethan Wang commented on PHOENIX-4283:
-

Thanks [~jamestaylor], I should've been more careful. 

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v2.patch, PHOENIX-4283.patch
>
>


[jira] [Comment Edited] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205438#comment-16205438
 ] 

Ethan Wang edited comment on PHOENIX-4283 at 10/16/17 5:18 AM:
---

[~jamestaylor] thanks!


was (Author: aertoria):
[~jamestaylor] +1

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v2.patch, PHOENIX-4283.patch
>
>


[jira] [Commented] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205438#comment-16205438
 ] 

Ethan Wang commented on PHOENIX-4283:
-

[~jamestaylor] +1

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v2.patch, PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: PHOENIX-4283-v2.patch

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v2.patch, PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: (was: PHOENIX-4283-v1.patch)

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: PHOENIX-4283-v1.patch

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283-v1.patch, PHOENIX-4283.patch
>
>


[jira] [Commented] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16205420#comment-16205420
 ] 

Ethan Wang commented on PHOENIX-4283:
-

[~jamestaylor] Makes sense. I'll go prepare another patch. Thanks 

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283.patch
>
>


[jira] [Updated] (PHOENIX-4283) Group By statement truncating BIGINTs

2017-10-15 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4283:

Attachment: PHOENIX-4283.patch

> Group By statement truncating BIGINTs
> -
>
> Key: PHOENIX-4283
> URL: https://issues.apache.org/jira/browse/PHOENIX-4283
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Steven Sadowski
>Assignee: Ethan Wang
> Fix For: 4.12.1
>
> Attachments: PHOENIX-4283.patch
>
>

