[jira] [Updated] (PHOENIX-5067) Support for secure Phoenix cluster in Pherf

2018-12-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Attachment: PHOENIX-5067-4.x-HBase-1.1.patch

> Support for secure Phoenix cluster in Pherf
> ---
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1, 
> PHOENIX-5067-4.x-HBase-1.1.patch
>
>
> Currently, the Phoenix performance and functional testing tool {{Pherf}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.
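For context on what the patch needs to supply, Phoenix documents a secure JDBC URL form in which the principal and keytab follow the ZooKeeper quorum, port, and root znode as extra colon-separated parts. The sketch below only assembles that URL string; the quorum host, realm, and keytab path are placeholder values, and the actual Pherf option names come from the attached patch, not from this example.

```java
// Sketch: building a Phoenix JDBC URL for a Kerberized cluster.
// All host names, the realm, and paths below are illustrative placeholders.
public class SecurePhoenixUrl {

    // Documented secure URL format:
    //   jdbc:phoenix:<quorum>:<port>:<root-node>:<principal>:<keytab>
    public static String buildUrl(String quorum, int port, String rootNode,
                                  String principal, String keytabPath) {
        return "jdbc:phoenix:" + quorum + ":" + port + ":" + rootNode
                + ":" + principal + ":" + keytabPath;
    }

    public static void main(String[] args) {
        String url = buildUrl("zk1.example.com", 2181, "/hbase-secure",
                "pherf@EXAMPLE.COM", "/etc/security/keytabs/pherf.keytab");
        // A tool like Pherf would then open the connection with
        // DriverManager.getConnection(url) once phoenix-client is on the classpath.
        System.out.println(url);
    }
}
```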



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5067) Support for secure Phoenix cluster in Phref

2018-12-14 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-5067:
--

Assignee: Biju Nair

> Support for secure Phoenix cluster in Phref
> ---
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Support for secure Phoenix cluster in Phref

2018-12-13 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Summary: Support for secure Phoenix cluster in Phref  (was: Phref can't be 
used with a secure Phoenix cluster)

> Support for secure Phoenix cluster in Phref
> ---
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-13 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Priority: Minor  (was: Major)

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-13 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Issue Type: Improvement  (was: Bug)

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Priority: Major
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-13 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Attachment: PHOENIX-5067-4.x-HBase-1.1

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Major
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-13 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Attachment: (was: PHOENIX-5067-4.x-HBase-1.1)

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Major
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Created] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-12 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-5067:
--

 Summary: Phref can't be used with a secure Phoenix cluster
 Key: PHOENIX-5067
 URL: https://issues.apache.org/jira/browse/PHOENIX-5067
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair


Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
allow options to pass in a Kerberos principal and keytab to connect to a secure 
(Kerberized) Phoenix cluster. This prevents running the tool against 
Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-12 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Description: Currently, the Phoenix performance and functional testing tool 
{{Phref}} doesn't have options to pass in a Kerberos principal and keytab to 
connect to a secure (Kerberized) Phoenix cluster. This prevents running the 
tool against Kerberized clusters.  (was: Currently, the Phoenix performance and 
functional testing tool {{Phref}} doesn't allow options to pass in a Kerberos 
principal and keytab to connect to a secure (Kerberized) Phoenix cluster. This 
prevents running the tool against Kerberized clusters.)

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Major
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5067) Phref can't be used with a secure Phoenix cluster

2018-12-12 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5067:
---
Attachment: PHOENIX-5067-4.x-HBase-1.1

> Phref can't be used with a secure Phoenix cluster
> -
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Major
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently, the Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in a Kerberos principal and keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against 
> Kerberized clusters.





[jira] [Updated] (PHOENIX-5006) jdbc connection to secure cluster should be able to use Kerberos ticket of user

2018-11-28 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5006:
---
Attachment: PHOENIX-5006.possiblefix

> jdbc connection to secure cluster should be able to use Kerberos ticket of 
> user
> ---
>
> Key: PHOENIX-5006
> URL: https://issues.apache.org/jira/browse/PHOENIX-5006
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5006.possiblefix
>
>
> Currently, a JDBC connection against a secure Phoenix cluster requires a 
> Kerberos principal and keytab to be passed in as part of the connection 
> string. But in many instances users may not have a {{Keytab}}, especially 
> during development. It would be good to support using the logged-in user's 
> Kerberos ticket.
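To illustrate the request, the sketch below contrasts the URL shape Phoenix requires today (principal and keytab embedded) with the bare URL this issue would make usable by falling back to the Kerberos ticket cache populated by {{kinit}}. All host names, the realm, and paths are placeholders, and the ticket-cache behavior is the proposal here, not current driver behavior.

```java
// Sketch contrasting keytab-based and ticket-cache-based connection URLs
// for a secure Phoenix cluster. All names and paths are placeholders.
public class TicketVsKeytabUrl {

    // Today: the principal and keytab must be embedded in the URL.
    public static String keytabUrl(String quorum) {
        return "jdbc:phoenix:" + quorum + ":2181:/hbase-secure"
                + ":dev@EXAMPLE.COM:/home/dev/dev.keytab";
    }

    // Proposed: a bare URL; the driver would authenticate from the Kerberos
    // ticket cache populated by `kinit` (e.g. via Hadoop's
    // UserGroupInformation), so no keytab is needed on a dev machine.
    public static String ticketCacheUrl(String quorum) {
        return "jdbc:phoenix:" + quorum + ":2181:/hbase-secure";
    }

    public static void main(String[] args) {
        System.out.println(keytabUrl("zk1.example.com"));
        System.out.println(ticketCacheUrl("zk1.example.com"));
    }
}
```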





[jira] [Updated] (PHOENIX-5006) jdbc connection to secure cluster should be able to use Kerberos ticket of user

2018-11-07 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-5006:
---
Description: 
Currently, a JDBC connection against a secure Phoenix cluster requires a 
Kerberos principal and keytab to be passed in as part of the connection string. 
But in many instances users may not have a {{Keytab}}, especially during 
development. It would be good to support using the logged-in user's Kerberos 
ticket.

  was:
Currently, a JDBC connection against a secure Phoenix cluster requires a 
Kerberos principal and keytab to be passed in as part of the connection string. 
But in many instances users may not have a {{Keytab}}, especially during 
development. It would be good to support using the logged user's Kerberos 
ticket.


> jdbc connection to secure cluster should be able to use Kerberos ticket of 
> user
> ---
>
> Key: PHOENIX-5006
> URL: https://issues.apache.org/jira/browse/PHOENIX-5006
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> Currently, a JDBC connection against a secure Phoenix cluster requires a 
> Kerberos principal and keytab to be passed in as part of the connection 
> string. But in many instances users may not have a {{Keytab}}, especially 
> during development. It would be good to support using the logged-in user's 
> Kerberos ticket.





[jira] [Created] (PHOENIX-5006) jdbc connection to secure cluster should be able to use Kerberos ticket of user

2018-11-07 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-5006:
--

 Summary: jdbc connection to secure cluster should be able to use 
Kerberos ticket of user
 Key: PHOENIX-5006
 URL: https://issues.apache.org/jira/browse/PHOENIX-5006
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair


Currently, a JDBC connection against a secure Phoenix cluster requires a 
Kerberos principal and keytab to be passed in as part of the connection string. 
But in many instances users may not have a {{Keytab}}, especially during 
development. It would be good to support using the logged user's Kerberos 
ticket.





[jira] [Updated] (PHOENIX-4002) Document FETCH NEXT| n ROWS from Cursor

2017-08-30 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-4002:
---
Attachment: PHOENIX-4002-WIP.PATCH

> Document FETCH NEXT| n ROWS from Cursor
> ---
>
> Key: PHOENIX-4002
> URL: https://issues.apache.org/jira/browse/PHOENIX-4002
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Biju Nair
> Attachments: PHOENIX-4002-WIP.PATCH
>
>
> Now that PHOENIX-3572 is resolved and released, we need to add documentation 
> for this new functionality on our website. For directions on how to do that, 
> see http://phoenix.apache.org/building_website.html. I'd recommend adding a 
> new top-level page, linked off of our Features menu, that explains from a 
> user's perspective how to use it, and also updating our reference grammar here 
> (which is derived from content in phoenix.csv): 
> http://phoenix.apache.org/language/index.html
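As a rough sketch of what the new page would cover, the cursor session below strings together the statements added by PHOENIX-3572 (DECLARE, OPEN, FETCH NEXT, CLOSE). The table and cursor names are placeholders, the exact DECLARE word order should be verified against the published grammar, and against a live cluster each string would be run through {{Statement.execute()}}/{{executeQuery()}}.

```java
// Sketch of a cursor session using the PHOENIX-3572 statements.
// Table and cursor names are placeholders.
public class CursorStatements {

    // Build the "FETCH NEXT n ROWS" statement this issue documents.
    public static String fetchNext(int n, String cursor) {
        return "FETCH NEXT " + n + " ROWS FROM " + cursor;
    }

    public static void main(String[] args) {
        String[] session = {
            "DECLARE my_cursor CURSOR FOR SELECT id, name FROM MY_TABLE",
            "OPEN my_cursor",
            fetchNext(10, "my_cursor"),   // repeat until no rows remain
            "CLOSE my_cursor"
        };
        for (String sql : session) {
            System.out.println(sql);
        }
    }
}
```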



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4002) Document FETCH NEXT| n ROWS from Cursor

2017-08-30 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-4002:
---
Attachment: PATCH-4002-WIP.PATCH

Attaching the {{svn}} diff as a patch. Please let me know if there is another 
process that needs to be followed for doc patches.

> Document FETCH NEXT| n ROWS from Cursor
> ---
>
> Key: PHOENIX-4002
> URL: https://issues.apache.org/jira/browse/PHOENIX-4002
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Biju Nair
> Attachments: PATCH-4002-WIP.PATCH
>
>
> Now that PHOENIX-3572 is resolved and released, we need to add documentation 
> for this new functionality on our website. For directions on how to do that, 
> see http://phoenix.apache.org/building_website.html. I'd recommend adding a 
> new top-level page, linked off of our Features menu, that explains from a 
> user's perspective how to use it, and also updating our reference grammar here 
> (which is derived from content in phoenix.csv): 
> http://phoenix.apache.org/language/index.html





[jira] [Commented] (PHOENIX-3962) Not creating the table in custom schema

2017-06-20 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056411#comment-16056411
 ] 

Biju Nair commented on PHOENIX-3962:


Did you check/follow the instructions for {{namespace}} mapping provided 
[here|http://phoenix.apache.org/namspace_mapping.html]?

> Not creating the table in custom schema
> ---
>
> Key: PHOENIX-3962
> URL: https://issues.apache.org/jira/browse/PHOENIX-3962
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase : 1.2.6
> Phoenix : 4.10.0-Hbase-1.2.0
>Reporter: vishal
>
> I am trying to create a table MYTAB1 in my schema/namespace MYTEST.
> But instead of creating the table in that namespace, it is created in the 
> default namespace with the table name *MYTEST.MYTAB1*, which is not what I 
> want.
> I have done the following:
> 1) hbase(main):059:0> create_namespace 'MYTEST'
> 2) hbase(main):059:0> list_namespace_tables 'MYTEST'
>--> the result is empty since I have not created any tables
> 3) Creating the table through Phoenix SQL as below:
> {code:java}
> connection = DriverManager.getConnection("jdbc:phoenix:localhost:12181");
> statement = connection.createStatement();
> statement.executeUpdate("create table *MYTEST.MYTAB1* (employee_id integer 
> not null primary key, name varchar)");
> connection.commit();
> {code}
> 4) hbase(main):059:0> list_namespace_tables 'MYTEST'
>--> it still returns an empty result.
> Please suggest the right syntax to create a table in my own schema.
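A likely cause, hinted at by the comment above pointing to the namespace mapping instructions: namespace mapping is off by default, so {{MYTEST.MYTAB1}} becomes a literally dotted table name in the default namespace. The sketch below shows the client-side property involved; the same property must also be set in hbase-site.xml on the servers, and the connection URL and schema/table names are placeholders.

```java
import java.util.Properties;

// Sketch of the client-side namespace-mapping setup. The connection URL,
// schema, and table names are placeholders.
public class NamespaceMappingExample {

    public static Properties clientProps() {
        Properties props = new Properties();
        // Without this (on both client and server), CREATE TABLE MYTEST.MYTAB1
        // lands in the default HBase namespace as a table literally named
        // "MYTEST.MYTAB1" instead of table MYTAB1 in namespace MYTEST.
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = clientProps();
        System.out.println(props.getProperty("phoenix.schema.isNamespaceMappingEnabled"));
        // With mapping enabled, the flow would then be:
        //   connection = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
        //   statement.execute("CREATE SCHEMA IF NOT EXISTS MYTEST");
        //   statement.execute("CREATE TABLE MYTEST.MYTAB1 (employee_id INTEGER PRIMARY KEY, name VARCHAR)");
    }
}
```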





[jira] [Updated] (PHOENIX-3653) CREATE VIEW documentation outdated on https://phoenix.apache.org/language/index.html

2017-06-18 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3653:
---
Attachment: PHOENIX-3653.diff

Attaching an svn diff with the change to reflect the currently supported 
grammar for {{CREATE VIEW}}.

> CREATE VIEW documentation outdated on 
> https://phoenix.apache.org/language/index.html
> 
>
> Key: PHOENIX-3653
> URL: https://issues.apache.org/jira/browse/PHOENIX-3653
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Maryann Xue
>Assignee: Biju Nair
>Priority: Minor
>  Labels: doc
> Attachments: PHOENIX-3653.diff
>
>
> We allow PK constraints in the CREATE VIEW statement, but this is not 
> reflected in our website's grammar reference: 
> https://phoenix.apache.org/language/index.html
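As a sketch of the grammar the diff documents, a CREATE VIEW statement may declare additional columns, including PRIMARY KEY constraints that extend the base table's row key. The table, view, and column names below are placeholders, not examples from the patch itself.

```java
// Sketch of a CREATE VIEW statement with a PK constraint on a new column.
// View, table, and column names are placeholders.
public class CreateViewExample {

    public static String createViewSql() {
        return "CREATE VIEW my_view (new_pk VARCHAR PRIMARY KEY, v2 INTEGER) "
             + "AS SELECT * FROM my_table WHERE kind = 'A'";
    }

    public static void main(String[] args) {
        // Against a live cluster this string would be run via Statement.execute().
        System.out.println(createViewSql());
    }
}
```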





[jira] [Assigned] (PHOENIX-3653) CREATE VIEW documentation outdated on https://phoenix.apache.org/language/index.html

2017-06-18 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-3653:
--

Assignee: Biju Nair

> CREATE VIEW documentation outdated on 
> https://phoenix.apache.org/language/index.html
> 
>
> Key: PHOENIX-3653
> URL: https://issues.apache.org/jira/browse/PHOENIX-3653
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Maryann Xue
>Assignee: Biju Nair
>Priority: Minor
>  Labels: doc
>
> We allow PK constraints in the CREATE VIEW statement, but this is not 
> reflected in our website's grammar reference: 
> https://phoenix.apache.org/language/index.html





[jira] [Updated] (PHOENIX-3954) Message displayed during "Alter view" is not accurate

2017-06-18 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3954:
---
Attachment: PHOENIX-3954.patch

Attached patch to include {{ALTER VIEW}} in the error message.

> Message displayed during "Alter view" is not accurate 
> --
>
> Key: PHOENIX-3954
> URL: https://issues.apache.org/jira/browse/PHOENIX-3954
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: MacOS. Phoenix 4.8. Java 1.8
>Reporter: Ethan Wang
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-3954.patch
>
>
> When "Alter view" command failed, the error message refer view as "TABLE." 
> example below:
> ALTER VIEW V_T02 ADD k3 VARCHAR PRIMARY KEY, k2 VARCHAR PRIMARY KEY, v2 
> INTEGER;
> Error: ERROR 514 (42892): A duplicate column name was detected in the object 
> definition or ALTER TABLE statement. columnName=V_T02.K2 
> (state=42892,code=514)
> org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
> duplicate column name was detected in the object definition or ALTER TABLE 
> statement. columnName=V_T02.K2
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3557)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3124)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1342)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)





[jira] [Assigned] (PHOENIX-3954) Message displayed during "Alter view" is not accurate

2017-06-18 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-3954:
--

Assignee: Biju Nair

> Message displayed during "Alter view" is not accurate 
> --
>
> Key: PHOENIX-3954
> URL: https://issues.apache.org/jira/browse/PHOENIX-3954
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: MacOS. Phoenix 4.8. Java 1.8
>Reporter: Ethan Wang
>Assignee: Biju Nair
>Priority: Minor
>
> When "Alter view" command failed, the error message refer view as "TABLE." 
> example below:
> ALTER VIEW V_T02 ADD k3 VARCHAR PRIMARY KEY, k2 VARCHAR PRIMARY KEY, v2 
> INTEGER;
> Error: ERROR 514 (42892): A duplicate column name was detected in the object 
> definition or ALTER TABLE statement. columnName=V_T02.K2 
> (state=42892,code=514)
> org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
> duplicate column name was detected in the object definition or ALTER TABLE 
> statement. columnName=V_T02.K2
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3557)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3124)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1342)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)





[jira] [Commented] (PHOENIX-3933) Start row is skipped when iterating a result set with ScanUtil.setReversed(scan)

2017-06-16 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052594#comment-16052594
 ] 

Biju Nair commented on PHOENIX-3933:


Thanks [~giacomotaylor]. I was trying to make changes to the Phoenix code with 
the assumption that {{scan}} iteration would mimic the behavior in {{HBase}}, 
and in hindsight that assumption is not correct.

> Start row is skipped when iterating a result set with 
> ScanUtil.setReversed(scan)
> 
>
> Key: PHOENIX-3933
> URL: https://issues.apache.org/jira/browse/PHOENIX-3933
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> {code}
> ResultSet rs = statement.executeQuery("SELECT * FROM " + tableName );
> QueryPlan plan = 
> statement.unwrap(PhoenixStatement.class).getQueryPlan();
> Scan scan = plan.getContext().getScan();
>  while(rs.next()) {
> LOG.debug(" "+rs.getInt(1));
> }
> {code}
> This section of the code returns
> {code}
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  0
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  1
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  2
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  3
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  4
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  5
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  6
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  7
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  8
> {code}
> If the {{scan}} is set to reversed with the start and stop keys set to 4 and 
> 8, the resulting result set doesn't seem to include 8, which is different 
> from an HBase scan when reversed.
> {code}
> ScanUtil.setReversed(scan);
> scan.setStartRow(PInteger.INSTANCE.toBytes(4));
> scan.setStopRow(PInteger.INSTANCE.toBytes(8));
> rs = new PhoenixResultSet(plan.iterator(), plan.getProjector(), 
> plan.getContext());
> while(rs.next()){
> LOG.debug("**rev*** "+rs.getInt(1));
> }
> {code}
> the result is
> {code}
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 7
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 6
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 5
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 4
> {code}





[jira] [Updated] (PHOENIX-3933) Start row is skipped when iterating a result set with ScanUtil.setReversed(scan)

2017-06-09 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3933:
---
Priority: Minor  (was: Major)

> Start row is skipped when iterating a result set with 
> ScanUtil.setReversed(scan)
> 
>
> Key: PHOENIX-3933
> URL: https://issues.apache.org/jira/browse/PHOENIX-3933
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> {code}
> ResultSet rs = statement.executeQuery("SELECT * FROM " + tableName );
> QueryPlan plan = 
> statement.unwrap(PhoenixStatement.class).getQueryPlan();
> Scan scan = plan.getContext().getScan();
>  while(rs.next()) {
> LOG.debug(" "+rs.getInt(1));
> }
> {code}
> This section of the code returns
> {code}
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  0
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  1
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  2
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  3
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  4
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  5
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  6
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  7
> [main] org.apache.phoenix.end2end.ReverseScanTest(126):  8
> {code}
> If the {{scan}} is set to reversed with the start and stop keys set to 4 and 
> 8, the resulting result set doesn't seem to include 8, which is different 
> from an HBase scan when reversed.
> {code}
> ScanUtil.setReversed(scan);
> scan.setStartRow(PInteger.INSTANCE.toBytes(4));
> scan.setStopRow(PInteger.INSTANCE.toBytes(8));
> rs = new PhoenixResultSet(plan.iterator(), plan.getProjector(), 
> plan.getContext());
> while(rs.next()){
> LOG.debug("**rev*** "+rs.getInt(1));
> }
> {code}
> the result is
> {code}
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 7
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 6
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 5
> org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 4
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3933) Start row is skipped when iterating a result set with ScanUtil.setReversed(scan)

2017-06-09 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3933:
--

 Summary: Start row is skipped when iterating a result set with 
ScanUtil.setReversed(scan)
 Key: PHOENIX-3933
 URL: https://issues.apache.org/jira/browse/PHOENIX-3933
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair


{code}
ResultSet rs = statement.executeQuery("SELECT * FROM " + tableName );
QueryPlan plan = 
statement.unwrap(PhoenixStatement.class).getQueryPlan();
Scan scan = plan.getContext().getScan();
 while(rs.next()) {
LOG.debug(" "+rs.getInt(1));
}
{code}
This section of the code returns
{code}
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  0
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  1
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  2
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  3
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  4
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  5
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  6
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  7
[main] org.apache.phoenix.end2end.ReverseScanTest(126):  8
{code}
If the {{scan}} is set to reversed with the start and stop keys set to 4 and 8, 
the resulting result set doesn't seem to include 8, which is different from an 
HBase scan when reversed.
{code}
ScanUtil.setReversed(scan);
scan.setStartRow(PInteger.INSTANCE.toBytes(4));
scan.setStopRow(PInteger.INSTANCE.toBytes(8));
rs = new PhoenixResultSet(plan.iterator(), plan.getProjector(), 
plan.getContext());
while(rs.next()){
LOG.debug("**rev*** "+rs.getInt(1));
}
{code}
the result is
{code}
org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 7
org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 6
org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 5
org.apache.phoenix.end2end.ReverseScanTest(136): **rev*** 4
{code}





[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-08 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042531#comment-16042531
 ] 

Biju Nair commented on PHOENIX-3917:


Thanks [~ankit.singhal], [~samarthjain] for the review.

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for the 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where {{A_DATA}} 
> is a non-key column. Assuming that the method is meant to return the 
> estimated number of bytes for the query projection, the returned value of 0 
> is incorrect.





[jira] [Commented] (PHOENIX-3921) ScanUtil#unsetReversed doesn't seem to unset reversal of Scan

2017-06-07 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042117#comment-16042117
 ] 

Biju Nair commented on PHOENIX-3921:


Let me know if the proposed change makes sense and I can provide a patch.

> ScanUtil#unsetReversed doesn't seem to unset reversal of Scan
> -
>
> Key: PHOENIX-3921
> URL: https://issues.apache.org/jira/browse/PHOENIX-3921
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> Created a new iterator with a {{scan}} object set to be non-reversed using 
> {{ScanUtil.unsetReversed(scan)}}, but the iteration still moves in reverse 
> order. {{BaseResultIterators.java}} has the condition check
> {code}
> boolean isReverse = ScanUtil.isReversed(scan);
> {code}
> Looking at 
> [ScanUtil.java|https://github.com/apache/phoenix/blob/2cb617f352048179439d242d1165a9ffb39ad81c/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java#L609]
>  {{isReversed}} method is defined as
> {code}
> return scan.getAttribute(BaseScannerRegionObserver.REVERSE_SCAN) != null;
> {code}
> Do we need to change the condition check to compare against 
> {{PDataType.TRUE_BYTES}}?
> The current logic returns {{isReversed}} as {{true}} whether the 
> {{BaseScannerRegionObserver.REVERSE_SCAN}} attribute is set to 
> {{PDataType.TRUE_BYTES}} or {{PDataType.FALSE_BYTES}}, which correspond to 
> the values set by the {{setReversed}} and {{unsetReversed}} methods.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3921) ScanUtil#unsetReversed doesn't seem to unset reversal of Scan

2017-06-07 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3921:
--

 Summary: ScanUtil#unsetReversed doesn't seem to unset reversal of 
Scan
 Key: PHOENIX-3921
 URL: https://issues.apache.org/jira/browse/PHOENIX-3921
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair


Created a new iterator with a {{scan}} object set to be non-reversed using 
{{ScanUtil.unsetReversed(scan)}}, but the iteration still moves in reverse 
order. {{BaseResultIterators.java}} has the condition check
{code}
boolean isReverse = ScanUtil.isReversed(scan);
{code}

Looking at 
[ScanUtil.java|https://github.com/apache/phoenix/blob/2cb617f352048179439d242d1165a9ffb39ad81c/phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java#L609]
 {{isReversed}} method is defined as
{code}
return scan.getAttribute(BaseScannerRegionObserver.REVERSE_SCAN) != null;
{code}
Do we need to change the condition check to compare against 
{{PDataType.TRUE_BYTES}}?
The current logic returns {{isReversed}} as {{true}} whether the 
{{BaseScannerRegionObserver.REVERSE_SCAN}} attribute is set to 
{{PDataType.TRUE_BYTES}} or {{PDataType.FALSE_BYTES}}, which correspond to 
the values set by the {{setReversed}} and {{unsetReversed}} methods.
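For illustration, here is a minimal self-contained sketch (not Phoenix code; the attribute key and byte constants are stand-ins) of why the null check and the value comparison disagree once {{unsetReversed}} has stored {{FALSE_BYTES}}:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch, not Phoenix code: after unsetReversed() the attribute
// map still holds the REVERSE_SCAN key, just with FALSE_BYTES as value.
public class ReverseScanCheck {
    public static final byte[] TRUE_BYTES = {1};
    public static final byte[] FALSE_BYTES = {0};

    // Current logic: reversed whenever the attribute is present at all.
    public static boolean isReversedNullCheck(Map<String, byte[]> attrs) {
        return attrs.get("REVERSE_SCAN") != null;
    }

    // Proposed logic: reversed only when the stored value is TRUE_BYTES.
    public static boolean isReversedValueCheck(Map<String, byte[]> attrs) {
        return Arrays.equals(attrs.get("REVERSE_SCAN"), TRUE_BYTES);
    }

    public static Map<String, byte[]> afterUnsetReversed() {
        Map<String, byte[]> attrs = new HashMap<>();
        attrs.put("REVERSE_SCAN", FALSE_BYTES); // unsetReversed stores FALSE_BYTES
        return attrs;
    }

    public static void main(String[] args) {
        Map<String, byte[]> attrs = afterUnsetReversed();
        System.out.println(isReversedNullCheck(attrs));  // true: wrongly reversed
        System.out.println(isReversedValueCheck(attrs)); // false: correctly forward
    }
}
```

With the null check, any scan that has ever passed through {{setReversed}}/{{unsetReversed}} is treated as reversed; comparing the stored value fixes that.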
 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-06 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16039785#comment-16039785
 ] 

Biju Nair commented on PHOENIX-3917:


Looking through the code for {{ProjectionCompiler#compile}} where the 
projection size is calculated, it looks like the [logic to calculate the 
size|https://github.com/apache/phoenix/blob/e7629ca39224e7cbc49e8a7740ed96877a16df76/phoenix-core/src/main/java/org/apache/phoenix/compile/ProjectionCompiler.java#L471-L491]
 needs to be moved a bit further, after
{code}
if (isWildcard) {
    projectAllColumnFamilies(table, scan);
} else {
    isProjectEmptyKeyValue = where == null || LiteralExpression.isTrue(where)
            || where.requiresFinalEvaluation();
    for (byte[] family : projectedFamilies) {
        projectColumnFamily(table, scan, family);
    }
}
{code}

Also, if the only column projected in the query is the key column, the size of 
the key column needs to be set as the estimated size of the {{RowProjector}}. 
After making these changes, the sizes returned are 4 and 72 respectively for 
the queries mentioned in the description of this issue, where both columns 
{{A_ID}} and {{A_DATA}} are of type {{int}}. If this fix is correct, let me 
know and I can provide a patch.
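As a sketch of the intended semantics (an assumption about the behaviour, not Phoenix's actual implementation), the estimate should simply sum the byte sizes of all projected columns, primary-key columns included:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the intended semantics (an assumption, not Phoenix's actual
// implementation): the estimated row byte size of a projection is the sum
// of the projected columns' byte sizes, primary-key columns included.
public class ProjectionSize {
    public static int estimatedRowByteSize(Map<String, Integer> projectedColSizes) {
        int size = 0;
        for (int colSize : projectedColSizes.values()) {
            size += colSize; // a key-only projection must not collapse to 0
        }
        return size;
    }

    public static void main(String[] args) {
        Map<String, Integer> keyOnly = new LinkedHashMap<>();
        keyOnly.put("A_ID", 4); // SELECT A_ID FROM TABLE, A_ID is an int
        System.out.println(estimatedRowByteSize(keyOnly)); // prints 4, not 0
    }
}
```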

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a 
> query {{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where 
> {{A_DATA}} is a non-key column. Assuming that the method is meant to return 
> the estimated number of bytes for the query projection, the returned value 
> of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-06 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3917:
--

 Summary: RowProjector#getEstimatedRowByteSize() returns incorrect 
value
 Key: PHOENIX-3917
 URL: https://issues.apache.org/jira/browse/PHOENIX-3917
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair
Priority: Minor


{{queryPlan.getProjector().getEstimatedRowByteSize()}} returns "0" for a query 
{{SELECT A_ID FROM TABLE}} where {{A_ID}} is the primary key. The same is the 
case for the query {{SELECT A_ID, A_DATA FROM TABLE}} where {{A_DATA}} is a 
non-key column. Assuming that the method is meant to return the estimated 
number of bytes for the query projection, the returned value of 0 is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3642) Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file

2017-05-31 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair resolved PHOENIX-3642.

Resolution: Not A Bug

Resolving since it is not an issue. Please re-open if required.

> Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file
> --
>
> Key: PHOENIX-3642
> URL: https://issues.apache.org/jira/browse/PHOENIX-3642
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: Ubuntu - 14.04, Kernel - 3.13.0-107-generic
>Reporter: Rahul Shrivastava
>Assignee: Biju Nair
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> How to reproduce
> 1. Download version with tag v4.9.0-HBase-0.98 for apache phoenix git
> 2. Start hdfs, hbase 
> 3. Set this property in bin/hbase-site.xml on the Phoenix server:
> <property>
>   <name>phoenix.queryserver.http.port</name>
>   <value>8760</value>
> </property>
> 4. bin/queryserver.py start
> 5.  sudo netstat -natp | grep -i 8760
> No result means the query server did not start on port 8760. In fact, it 
> starts on the default port of 8765:
>  sudo netstat -natp | grep -i 8765
> tcp        0      0 0.0.0.0:8765    0.0.0.0:*    LISTEN    84720/java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-31 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032381#comment-16032381
 ] 

Biju Nair edited comment on PHOENIX-3034 at 6/1/17 3:33 AM:


Resolving the issue since it is not a bug. Please re-open if required.


was (Author: gsbiju):
Resolving the issue since it is not a bug.

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-31 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair resolved PHOENIX-3034.

Resolution: Not A Bug

Resolving the issue since it is not a bug.

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Issue Comment Deleted] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-31 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3034:
---
Comment: was deleted

(was: Resolving the issue. Please re-open if required.)

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-31 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032378#comment-16032378
 ] 

Biju Nair commented on PHOENIX-3034:


Resolving the issue. Please re-open if required.

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-29 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16028644#comment-16028644
 ] 

Biju Nair commented on PHOENIX-3034:


@nishani, it looks like this is a non-issue. Can we resolve this ticket?

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3642) Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file

2017-05-29 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16028643#comment-16028643
 ] 

Biju Nair commented on PHOENIX-3642:


[~rahulshrivastava], it looks like this is a non-issue. Can we resolve this?

> Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file
> --
>
> Key: PHOENIX-3642
> URL: https://issues.apache.org/jira/browse/PHOENIX-3642
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: Ubuntu - 14.04, Kernel - 3.13.0-107-generic
>Reporter: Rahul Shrivastava
>Assignee: Biju Nair
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> How to reproduce
> 1. Download version with tag v4.9.0-HBase-0.98 for apache phoenix git
> 2. Start hdfs, hbase 
> 3. Set this property in bin/hbase-site.xml on the Phoenix server:
> <property>
>   <name>phoenix.queryserver.http.port</name>
>   <value>8760</value>
> </property>
> 4. bin/queryserver.py start
> 5.  sudo netstat -natp | grep -i 8760
> No result means the query server did not start on port 8760. In fact, it 
> starts on the default port of 8765:
>  sudo netstat -natp | grep -i 8765
> tcp        0      0 0.0.0.0:8765    0.0.0.0:*    LISTEN    84720/java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3866) COUNT(col) should scan along required column families only

2017-05-25 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024749#comment-16024749
 ] 

Biju Nair commented on PHOENIX-3866:


[~lhofhansl] Can you please share the table/index definitions too?

> COUNT(col) should scan along required column families only
> --
>
> Key: PHOENIX-3866
> URL: https://issues.apache.org/jira/browse/PHOENIX-3866
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Minor
>
> These two should be equivalent:
> {code}
> 0: jdbc:phoenix:localhost> select count(B.COLB) from test;
> +----------------+
> | COUNT(B.COLB)  |
> +----------------+
> | 10054          |
> +----------------+
> 1 row selected (9.446 seconds)
> 0: jdbc:phoenix:localhost> select count(B.COLB) from test where B.COLB is not 
> null;
> +----------------+
> | COUNT(B.COLB)  |
> +----------------+
> | 10054          |
> +----------------+
> 1 row selected (0.028 seconds)
> {code}
> Clearly the plain COUNT is doing unnecessary work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-25 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-3034:
--

Assignee: Biju Nair

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Assignee: Biju Nair
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3034) Passing zookeeper quorum host and port numbers as commandline arguments

2017-05-25 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024745#comment-16024745
 ] 

Biju Nair commented on PHOENIX-3034:


Assuming this ticket is for {{sqlline.py}}: as we can see from {{./sqlline.py 
--help}}, users can already pass in the ZooKeeper quorum.

> Passing zookeeper quorum host and port numbers as commandline arguments
> ---
>
> Key: PHOENIX-3034
> URL: https://issues.apache.org/jira/browse/PHOENIX-3034
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Nishani 
>Priority: Minor
>
> Currently the host and port are set to localhost:2181. They need to be 
> changeable if the user requires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3642) Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file

2017-05-25 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024732#comment-16024732
 ] 

Biju Nair edited comment on PHOENIX-3642 at 5/25/17 1:34 PM:
-

{{queryserver.py}} looks for {{hbase-site.xml}} in the following order:
1. {{$HBASE_HOME}}/conf
2. /etc/hbase/conf
3. the current directory, which in this case is the {{bin}} directory

It looks like you may have {{hbase-site.xml}} in location 1 or 2. If so, 
making the change there should make the query server use the specified port.



was (Author: gsbiju):
{{QueryServer.py}} looks for {{hbase-site.xml}} in the following order 
1. $HBASE_HOME/conf
2. /etc/hbase/conf
3. current directory which in this case the {{bin}} directory

Looks like you may have {{hbase-site.xml}} in locations 1 or 2. If so, if you 
make the change there it should use the specified port.


> Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file
> --
>
> Key: PHOENIX-3642
> URL: https://issues.apache.org/jira/browse/PHOENIX-3642
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: Ubuntu - 14.04, Kernel - 3.13.0-107-generic
>Reporter: Rahul Shrivastava
>Assignee: Biju Nair
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> How to reproduce
> 1. Download version with tag v4.9.0-HBase-0.98 for apache phoenix git
> 2. Start hdfs, hbase 
> 3. Set this property in bin/hbase-site.xml on the Phoenix server:
> <property>
>   <name>phoenix.queryserver.http.port</name>
>   <value>8760</value>
> </property>
> 4. bin/queryserver.py start
> 5.  sudo netstat -natp | grep -i 8760
> No result means the query server did not start on port 8760. In fact, it 
> starts on the default port of 8765:
>  sudo netstat -natp | grep -i 8765
> tcp        0      0 0.0.0.0:8765    0.0.0.0:*    LISTEN    84720/java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3642) Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file

2017-05-25 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024732#comment-16024732
 ] 

Biju Nair commented on PHOENIX-3642:


{{queryserver.py}} looks for {{hbase-site.xml}} in the following order:
1. $HBASE_HOME/conf
2. /etc/hbase/conf
3. the current directory, which in this case is the {{bin}} directory

It looks like you may have {{hbase-site.xml}} in location 1 or 2. If so, 
making the change there should make the query server use the specified port.
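The lookup order above amounts to a first-match search: earlier locations shadow later ones. A sketch (illustrative Java; the real logic lives in the Python script {{bin/queryserver.py}}):

```java
import java.io.File;
import java.util.List;

// Illustrative sketch, not the actual queryserver.py logic: return the
// first candidate directory that actually contains hbase-site.xml, so
// earlier locations shadow later ones.
public class HBaseSiteLookup {
    public static String findConfDir(List<String> candidates) {
        for (String dir : candidates) {
            if (new File(dir, "hbase-site.xml").isFile()) {
                return dir;
            }
        }
        return null; // fall back to defaults when no config is found
    }

    public static void main(String[] args) {
        String hbaseHome = System.getenv("HBASE_HOME");
        List<String> candidates = List.of(
                (hbaseHome == null ? "." : hbaseHome) + File.separator + "conf",
                "/etc/hbase/conf",
                "bin"); // the directory queryserver.py is launched from
        System.out.println(findConfDir(candidates));
    }
}
```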


> Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file
> --
>
> Key: PHOENIX-3642
> URL: https://issues.apache.org/jira/browse/PHOENIX-3642
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: Ubuntu - 14.04, Kernel - 3.13.0-107-generic
>Reporter: Rahul Shrivastava
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> How to reproduce
> 1. Download version with tag v4.9.0-HBase-0.98 for apache phoenix git
> 2. Start hdfs, hbase 
> 3. Set this property in bin/hbase-site.xml on the Phoenix server:
> <property>
>   <name>phoenix.queryserver.http.port</name>
>   <value>8760</value>
> </property>
> 4. bin/queryserver.py start
> 5.  sudo netstat -natp | grep -i 8760
> No result means the query server did not start on port 8760. In fact, it 
> starts on the default port of 8765:
>  sudo netstat -natp | grep -i 8765
> tcp        0      0 0.0.0.0:8765    0.0.0.0:*    LISTEN    84720/java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-3642) Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file

2017-05-25 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-3642:
--

Assignee: Biju Nair

> Phoenix Query Server is not honoring port specified in bin/hbase-site.xml file
> --
>
> Key: PHOENIX-3642
> URL: https://issues.apache.org/jira/browse/PHOENIX-3642
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
> Environment: Ubuntu - 14.04, Kernel - 3.13.0-107-generic
>Reporter: Rahul Shrivastava
>Assignee: Biju Nair
>Priority: Minor
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> How to reproduce
> 1. Download version with tag v4.9.0-HBase-0.98 for apache phoenix git
> 2. Start hdfs, hbase 
> 3. Set this property in bin/hbase-site.xml on the Phoenix server:
> <property>
>   <name>phoenix.queryserver.http.port</name>
>   <value>8760</value>
> </property>
> 4. bin/queryserver.py start
> 5.  sudo netstat -natp | grep -i 8760
> No result means the query server did not start on port 8760. In fact, it 
> starts on the default port of 8765:
>  sudo netstat -natp | grep -i 8765
> tcp        0      0 0.0.0.0:8765    0.0.0.0:*    LISTEN    84720/java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3878) CursorName and CursorFetchPlan lack license headers

2017-05-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021731#comment-16021731
 ] 

Biju Nair commented on PHOENIX-3878:


[~gjacoby], thanks for pointing out the missing license headers. I raised a PR 
against the GitHub repo. Can you review and merge it if the changes are fine?

> CursorName and CursorFetchPlan lack license headers
> ---
>
> Key: PHOENIX-3878
> URL: https://issues.apache.org/jira/browse/PHOENIX-3878
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Geoffrey Jacoby
>Assignee: Biju Nair
>Priority: Blocker
>
> CursorName and CursorFetchPlan were recently added as part of PHOENIX-3572, 
> but they appear to lack Apache license headers. This is causing HadoopQA to 
> fail for unrelated patch submissions because they fail the release audit. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-05-16 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3572:
---
Attachment: PHOENIX-3572.patch

Patch for the changes attached.

> Support FETCH NEXT| n ROWS from Cursor
> --
>
> Key: PHOENIX-3572
> URL: https://issues.apache.org/jira/browse/PHOENIX-3572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>Assignee: Biju Nair
> Attachments: PHOENIX-3572.patch
>
>
> Implement required changes to support 
> - {{DECLARE}} and {{OPEN}} a cursor
> - query {{FETCH NEXT | n ROWS}} from the cursor
> - {{CLOSE}} the cursor
> Based on the feedback in [PR 
> #192|https://github.com/apache/phoenix/pull/192], implement the changes using 
> {{ResultSet}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-2606) Cursor support in Phoenix

2017-04-27 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15987246#comment-15987246
 ] 

Biju Nair commented on PHOENIX-2606:


Based on the feedback received, the implementation has been modified. Please 
refer to the sub-tasks for progress and details.

> Cursor support in Phoenix
> -
>
> Key: PHOENIX-2606
> URL: https://issues.apache.org/jira/browse/PHOENIX-2606
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Sudarshan Kadambi
>Assignee: Biju Nair
>
> Phoenix should look to support a cursor model where the user could set the 
> fetch size to limit the number of rows that are fetched in each batch. Each 
> batch of result rows would be accompanied by a flag indicating if there are 
> more rows to be fetched for a given query or not. 
> The state management for the cursor could be done on the client side or the 
> server side (i.e. HBase, not the Query Server). The client-side state 
> management could involve capturing the last key in the batch and using that 
> as the start key for the subsequent scan operation. The downside of this 
> model is that if there were any intervening inserts or deletes in the result 
> set of the query, backtracking on the cursor would reflect these additional 
> rows (consider a page down, followed by a page up showing a different set of 
> result rows). Similarly, if the cursor is defined over the results of a join 
> or an aggregation, these operations would need to be performed again when the 
> next batch of result rows are to be fetched. 
> So an alternate approach could be to manage the state server side, wherein 
> there is a query context area in the Regionservers (or, maybe just a 
> temporary table) and the cursor results are fetched from there. This ensures 
> that the cursor has snapshot isolation semantics. I think both models make 
> sense but it might make sense to start with the state management completely 
> on the client side.
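The client-side option described above can be sketched as follows (hypothetical code, not the Phoenix implementation): remember the last key of each batch and start the next scan just past it, returning a flag with each batch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;

// Hypothetical sketch of client-side cursor state management, not the
// Phoenix implementation: capture the last key of each batch and use it
// as the (exclusive) start key for the next scan.
public class ClientSideCursor {
    private final SortedMap<String, String> table; // stands in for a scanned table
    private String startKey = "";                  // last key seen, exclusive
    private boolean moreRows = true;

    public ClientSideCursor(SortedMap<String, String> table) {
        this.table = table;
    }

    public List<String> fetchNext(int fetchSize) {
        List<String> batch = new ArrayList<>();
        for (String key : table.tailMap(startKey).keySet()) {
            if (key.compareTo(startKey) <= 0) {
                continue; // tailMap is inclusive; skip the key already returned
            }
            batch.add(key);
            if (batch.size() == fetchSize) {
                break;
            }
        }
        if (!batch.isEmpty()) {
            startKey = batch.get(batch.size() - 1);
        }
        // Flag accompanying each batch: a short batch means we are done.
        moreRows = batch.size() == fetchSize;
        return batch;
    }

    public boolean hasMoreRows() {
        return moreRows;
    }
}
```

Note the drawback called out above: because each batch re-scans from the last key, intervening inserts or deletes become visible to the cursor.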



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (PHOENIX-2606) Cursor support in Phoenix

2017-04-27 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair reassigned PHOENIX-2606:
--

Assignee: Biju Nair

> Cursor support in Phoenix
> -
>
> Key: PHOENIX-2606
> URL: https://issues.apache.org/jira/browse/PHOENIX-2606
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Sudarshan Kadambi
>Assignee: Biju Nair
>
> Phoenix should look to support a cursor model where the user could set the 
> fetch size to limit the number of rows that are fetched in each batch. Each 
> batch of result rows would be accompanied by a flag indicating if there are 
> more rows to be fetched for a given query or not. 
> The state management for the cursor could be done on the client side or the 
> server side (i.e. HBase, not the Query Server). The client-side state 
> management could involve capturing the last key in the batch and using that 
> as the start key for the subsequent scan operation. The downside of this 
> model is that if there were any intervening inserts or deletes in the result 
> set of the query, backtracking on the cursor would reflect these additional 
> rows (consider a page down, followed by a page up showing a different set of 
> result rows). Similarly, if the cursor is defined over the results of a join 
> or an aggregation, these operations would need to be performed again when the 
> next batch of result rows are to be fetched. 
> So an alternate approach could be to manage the state server side, wherein 
> there is a query context area in the Regionservers (or, maybe just a 
> temporary table) and the cursor results are fetched from there. This ensures 
> that the cursor has snapshot isolation semantics. I think both models make 
> sense but it might make sense to start with the state management completely 
> on the client side.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3587) Support FETCH PREV|n ROWS from Cursor

2017-01-11 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-3587:
---
Summary: Support FETCH PREV|n ROWS from Cursor  (was: Support FETCH PREV|n 
ROWS from cursor)

> Support FETCH PREV|n ROWS from Cursor
> -
>
> Key: PHOENIX-3587
> URL: https://issues.apache.org/jira/browse/PHOENIX-3587
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>
> Implement required changes to support
> - query {{FETCH PREV | n ROWS}} from the cursor



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3587) Support FETCH PREV|n ROWS from cursor

2017-01-11 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3587:
--

 Summary: Support FETCH PREV|n ROWS from cursor
 Key: PHOENIX-3587
 URL: https://issues.apache.org/jira/browse/PHOENIX-3587
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Biju Nair


Implement required changes to support
- query {{FETCH PREV | n ROWS}} from the cursor



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-01-05 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3572:
--

 Summary: Support FETCH NEXT| n ROWS from Cursor
 Key: PHOENIX-3572
 URL: https://issues.apache.org/jira/browse/PHOENIX-3572
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Biju Nair


Implement required changes to support 
- {{DECLARE}} and {{OPEN}} a cursor
- query {{FETCH NEXT | n ROWS}} from the cursor
- {{CLOSE}} the cursor

Based on the feedback in [PR #192|https://github.com/apache/phoenix/pull/192], 
implement the changes using {{ResultSet}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-3523) Secondary index on case sensitive table breaks all queries

2016-12-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15773034#comment-15773034
 ] 

Biju Nair edited comment on PHOENIX-3523 at 12/23/16 2:47 PM:
--

Thanks [~giacomotaylor]. So a request to create table {{A.B}} should fail if 
namespace "A" is not created prior to the table creation, no? I am referring 
to the following comment from [~an...@apache.org]:

bq. we should not be creating "A.B" in namespace "A", instead it should go to 
default namespace with table name "A.B",


was (Author: gsbiju):
Thanks [~giacomotaylor]. So a request to create table {{A.B}} should fail if 
the namespace "A" is not created prior to the table creation. no? I referring 
to the following comment from [~an...@apache.org]

bq.we should not be creating "A.B" in namespace "A", instead it should go to 
default namespace with table name "A.B",

> Secondary index on case sensitive table breaks all queries
> --
>
> Key: PHOENIX-3523
> URL: https://issues.apache.org/jira/browse/PHOENIX-3523
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.8.2, 4.8.3
>Reporter: Karthick Duraisamy Soundararaj
>
> Phoenix creates the HBase table for a case-sensitive Phoenix table under the 
> "default" namespace rather than under the namespace that the table belongs 
> to. Please see the following for an illustration of the problem.
> h1. Attempt to create/query secondary index on HBase
> 
> {panel:title=Step 1: Map an existing case sensitive table on HBase to Phoenix}
> On HBase, I have "m3:merchants". It is mapped to "m3.merchants" on phoenix. 
> As you can see below, I can query the table just fine.
> {code}
> 0: jdbc:phoenix:dev.snc1> drop index "merchant_feature_country_idx" on 
> "m3.merchants";
> No rows affected (4.006 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> +--------------------+----------------+---------------------------+
> |      primary       | merchant.name  | merchant.feature_country  |
> +--------------------+----------------+---------------------------+
> | 1860-00259060b612  | X              | US                        |
> +--------------------+----------------+---------------------------+
> {code}
> {panel}
> {panel:title=Step 2: Create a secondary index on case sensitive table}
> I created a secondary index on "m3.merchants" for "merchant.name". The moment 
> I do this, the "m3.merchants" table is no longer usable.
> {code}
> 0: jdbc:phoenix:dev.snc1> create index "merchant_feature_country_idx" ON 
> "m3.merchants"("merchant.name");
> 1,660,274 rows affected (36.341 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> Error: ERROR 1012 (42M03): Table undefined. 
> tableName=m3.merchant_feature_country_idx (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=m3.merchant_feature_country_idx
> at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:539)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:365)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:213)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:278)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1446)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> {code}
> If you are wondering why this is happening, it's because the HBase table for 
> the secondary index gets created under the {{default}} namespace instead of 
> the {{m3}} namespace, as shown below:
> {code}
> hbase(main):006:0> list_namespace_tables "default"
> TABLE

[jira] [Commented] (PHOENIX-3523) Secondary index on case sensitive table breaks all queries

2016-12-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15773034#comment-15773034
 ] 

Biju Nair commented on PHOENIX-3523:


Thanks [~giacomotaylor]. So a request to create table {{A.B}} should fail if 
the namespace "A" is not created prior to the table creation, no? I am referring 
to the following comment from [~an...@apache.org]:

bq.we should not be creating "A.B" in namespace "A", instead it should go to 
default namespace with table name "A.B",
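To make the distinction being discussed concrete, here is a minimal sketch (illustrative Python, not Phoenix source code; the function name {{to_hbase_name}} and the boolean flag are assumptions) of the two ways a full table name like "A.B" can be resolved: into its own HBase namespace when namespace mapping is enabled, or into the default namespace as one literal table name.

```python
# Illustrative sketch only -- NOT the actual Phoenix name-resolution code.
def to_hbase_name(phoenix_name: str, namespace_mapping: bool) -> tuple:
    """Map a Phoenix table name like 'A.B' to an HBase (namespace, table) pair."""
    if namespace_mapping and "." in phoenix_name:
        # Schema part becomes the HBase namespace: 'A.B' -> ('A', 'B')
        schema, table = phoenix_name.split(".", 1)
        return (schema, table)
    # Otherwise the whole name lives in the default namespace: 'A.B' -> ('default', 'A.B')
    return ("default", phoenix_name)
```

Under this model the reported bug is that the index table takes the second path while the data table took the first, so they end up in different namespaces.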

> Secondary index on case sensitive table breaks all queries
> --
>
> Key: PHOENIX-3523
> URL: https://issues.apache.org/jira/browse/PHOENIX-3523
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.8.2, 4.8.3
>Reporter: Karthick Duraisamy Soundararaj
>
> Phoenix creates the HBase table for a case-sensitive Phoenix table under the 
> "default" namespace rather than under the namespace that the table belongs 
> to. Please see the following illustration of the problem.
> h1. Attempt to create/query secondary index on HBase
> 
> {panel:title=Step 1: Map an existing case sensitive table on HBase to Phoenix}
> On HBase, I have "m3:merchants". It is mapped to "m3.merchants" on Phoenix. 
> As you can see below, I can query the table just fine.
> {code}
> 0: jdbc:phoenix:dev.snc1> drop index "merchant_feature_country_idx" on 
> "m3.merchants";
> No rows affected (4.006 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> +---+---+---+
> |primary|   merchant.name   | 
> merchant.feature_country  |
> +---+---+---+
> | 1860-00259060b612  | X | US|
> +---+---+---+
> {code}
> {panel}
> {panel:title=Step 2: Create a secondary index on case sensitive table}
> I created a secondary index on "m3.merchants" for "merchant.name". The moment 
> I do this, the "m3.merchants" table is not usable anymore.
> {code}
> 0: jdbc:phoenix:dev.snc1> create index "merchant_feature_country_idx" ON 
> "m3.merchants"("merchant.name");
> 1,660,274 rows affected (36.341 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> Error: ERROR 1012 (42M03): Table undefined. 
> tableName=m3.merchant_feature_country_idx (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=m3.merchant_feature_country_idx
> at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:539)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:365)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:213)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:278)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1446)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> {code}
> If you are wondering why this is happening, it's because the HBase table for 
> the secondary index gets created under the {{default}} namespace instead of 
> the {{m3}} namespace, as shown below:
> {code}
> hbase(main):006:0> list_namespace_tables "default"
> TABLE
> merchant_feature_country_idx
> 1 row(s) in 0.0100 seconds
> hbase(main):007:0> list_namespace_tables "m3"
> TABLE
> merchants
> 1 row(s) in 0.0080 seconds
> {code}
> {panel}
> h1. Attempt to force the index table into a namespace
> 
> I tried to force the HBase index table to be located under the "m3" 
> namespace by doing the following 
> {code}
> create index 

[jira] [Commented] (PHOENIX-3523) Secondary index on case sensitive table breaks all queries

2016-12-22 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15771353#comment-15771353
 ] 

Biju Nair commented on PHOENIX-3523:


bq. we should not be creating "A.B" in namespace "A", instead it should go to 
default namespace with table name "A.B", I'll fix this. Hence other things will 
work as it is.

What will be the process for users to create tables under a specific namespace? 
It would help users relying on HBase namespaces for multi-tenancy.

> Secondary index on case sensitive table breaks all queries
> --
>
> Key: PHOENIX-3523
> URL: https://issues.apache.org/jira/browse/PHOENIX-3523
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.8.2, 4.8.3
>Reporter: Karthick Duraisamy Soundararaj
>
> Phoenix creates the HBase table for a case-sensitive Phoenix table under the 
> "default" namespace rather than under the namespace that the table belongs 
> to. Please see the following illustration of the problem.
> h1. Attempt to create/query secondary index on HBase
> 
> {panel:title=Step 1: Map an existing case sensitive table on HBase to Phoenix}
> On HBase, I have "m3:merchants". It is mapped to "m3.merchants" on Phoenix. 
> As you can see below, I can query the table just fine.
> {code}
> 0: jdbc:phoenix:dev.snc1> drop index "merchant_feature_country_idx" on 
> "m3.merchants";
> No rows affected (4.006 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> +---+---+---+
> |primary|   merchant.name   | 
> merchant.feature_country  |
> +---+---+---+
> | 1860-00259060b612  | X | US|
> +---+---+---+
> {code}
> {panel}
> {panel:title=Step 2: Create a secondary index on case sensitive table}
> I created a secondary index on "m3.merchants" for "merchant.name". The moment 
> I do this, the "m3.merchants" table is not usable anymore.
> {code}
> 0: jdbc:phoenix:dev.snc1> create index "merchant_feature_country_idx" ON 
> "m3.merchants"("merchant.name");
> 1,660,274 rows affected (36.341 seconds)
> 0: jdbc:phoenix:dev.snc1> select "primary", "merchant.name", 
> "merchant.feature_country" from "m3.merchants" limit 1;
> Error: ERROR 1012 (42M03): Table undefined. 
> tableName=m3.merchant_feature_country_idx (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=m3.merchant_feature_country_idx
> at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:539)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:365)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:213)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:94)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:80)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:278)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1446)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> {code}
> If you are wondering why this is happening, it's because the HBase table for 
> the secondary index gets created under the {{default}} namespace instead of 
> the {{m3}} namespace, as shown below:
> {code}
> hbase(main):006:0> list_namespace_tables "default"
> TABLE
> merchant_feature_country_idx
> 1 row(s) in 0.0100 seconds
> hbase(main):007:0> list_namespace_tables "m3"
> TABLE
> merchants
> 1 row(s) in 0.0080 seconds
> {code}
> {panel}
> h1. Attempt to force the index table into a namespace
> 
> I tried to force the HBase index table to be located under the "m3" 
> namespace by doing the following 
> {code}
> create index "m3.merchant_feature_country_idx" 

[jira] [Created] (PHOENIX-3483) FUNCTION and SEQUENCE SYSTEM table names use reserved words

2016-11-15 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-3483:
--

 Summary: FUNCTION and SEQUENCE SYSTEM table names use reserved 
words
 Key: PHOENIX-3483
 URL: https://issues.apache.org/jira/browse/PHOENIX-3483
 Project: Phoenix
  Issue Type: Improvement
Reporter: Biju Nair
Priority: Minor


Since FUNCTION and SEQUENCE are reserved words in the Phoenix grammar, it would 
be good to use non-reserved words like FUNCTIONS and SEQUENCES instead. Not sure 
whether there are other reasons for choosing the table names as they are now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-04 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15225166#comment-15225166
 ] 

Biju Nair commented on PHOENIX-2809:


An {{alter table}} statement without {{if not exists}} also exhibits the same behavior:

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> drop table test_alter;
No rows affected (8.555 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint not 
null primary key);
No rows affected (1.267 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add TI tinyint, 
col1 varchar;
No rows affected (5.943 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> !describe test_alter
++--+-+--+++--++-+-+---+--+-+--+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  | COLUMN_NAME  | DATA_TYPE  | TYPE_NAME  | COLUMN_SIZE  | BUFFER_LENGTH  | DECIMAL_DIGITS  | NUM_PREC_RADIX  | NULLABLE  | REMARKS  | COLUMN_DEF  | SQL_DATA_TYP |
++--+-+--+++--++-+-+---+--+-+--+
|            |              | TEST_ALTER  | TI           | -6         | TINYINT    | null         | null           | null            | null            | 0         |          |             | null         |
|            |              | TEST_ALTER  | TI           | -6         | TINYINT    | null         | null           | null            | null            | 1         |          |             | null         |
|            |              | TEST_ALTER  | COL1         | 12         | VARCHAR    | null         | null           | null            | null            | 1         |          |             | null         |
++--+-+--+++--++-+-+---+--+-+--+
{noformat}
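The missing guard that the output above illustrates can be sketched as follows (illustrative Python, not the Phoenix implementation; {{add_columns}} and the tuple representation are assumptions): adding a column whose name already exists should either fail or, with an IF NOT EXISTS clause, be skipped, but never produce a duplicate column.

```python
# Illustrative sketch only -- NOT Phoenix's ALTER TABLE code.
def add_columns(existing, new_columns, if_not_exists=False):
    """Add (name, type) columns to a table definition, rejecting duplicates."""
    result = list(existing)
    for name, col_type in new_columns:
        if any(col[0] == name for col in result):
            if if_not_exists:
                continue  # IF NOT EXISTS: silently skip the duplicate
            raise ValueError("Column already exists: " + name)
        result.append((name, col_type))
    return result
```

With such a check, the {{alter table test_alter add TI tinyint, col1 varchar}} statement above would fail instead of creating a second TI column.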

> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column whose definition matches an existing 
> column in the table succeeds, while the expectation is that the alter 
> should fail. The following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
> (1,2,'add');
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
> +-+-+---+
> | TI  | TI  | COL1  |
> +-+-+---+
> | 1   | 1   | add   |
> +-+-+---+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-04 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15223922#comment-15223922
 ] 

Biju Nair edited comment on PHOENIX-2809 at 4/4/16 10:42 AM:
-

[~sergey.soldatov] agreed that the statement shouldn't fail, but at the 
same time it should not create a duplicate column, no? In the example provided 
a duplicate column {{TI}} got created.


was (Author: gsbiju):
[~sergey.soldatov] agreed that the statement shouldn't fail, but at the 
sometime it should not create a duplicate column. no? In the example provided 
duplicate column {{T1}} got created.

> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column whose definition matches an existing 
> column in the table succeeds, while the expectation is that the alter 
> should fail. The following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
> (1,2,'add');
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
> +-+-+---+
> | TI  | TI  | COL1  |
> +-+-+---+
> | 1   | 1   | add   |
> +-+-+---+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-04 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15223922#comment-15223922
 ] 

Biju Nair commented on PHOENIX-2809:


[~sergey.soldatov] agreed that the statement shouldn't fail, but at the 
same time it should not create a duplicate column, no? In the example provided 
a duplicate column {{TI}} got created.

> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column whose definition matches an existing 
> column in the table succeeds, while the expectation is that the alter 
> should fail. The following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
> (1,2,'add');
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
> +-+-+---+
> | TI  | TI  | COL1  |
> +-+-+---+
> | 1   | 1   | add   |
> +-+-+---+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2808) Can't upsert valid decimal values into unsigned datatype columns

2016-03-29 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-2808:
---
Description: 
Currently, {{decimal}} values can't be {{upsert}}-ed into columns defined with 
an unsigned datatype, while it is possible if the column is defined with a 
signed datatype. The following is an example where the column is defined as 
{{unsigned_tinyint}} in table "unsigned_tinyint_test":

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(0.0);
Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
(state=22005,code=203)
{noformat}

while it is signed in "tinyint_test"
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values (127.0);
1 row affected (5.168 seconds)
{noformat}

Looking at the 
[code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L220-L225],
 looks like this is applicable to columns defined as 
{{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}} 

  was:
Currently {{decimal}} values can't be {{upsert}} into columns defined as 
unsigned datatype while it is possible if the column is defined as signed 
datatype. The following is an example where the column is defined as 
{{unsigned_tinyint}} in table "unsigned_tinyint_test"  

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(0.0);
Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
(state=22005,code=203)
{noformat}

while it is signed in "tinyint_test"
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values (127.0);
1 row affected (5.168 seconds)
{noformat}

Looking at the 
[code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L298],
 looks like this is applicable to columns defined as 
{{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}} 


> Can't upsert valid decimal values into unsigned datatype columns
> 
>
> Key: PHOENIX-2808
> URL: https://issues.apache.org/jira/browse/PHOENIX-2808
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> Currently, {{decimal}} values can't be {{upsert}}-ed into columns defined with 
> an unsigned datatype, while it is possible if the column is defined with a 
> signed datatype. The following is an example where the column is defined as 
> {{unsigned_tinyint}} in table "unsigned_tinyint_test":
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test 
> values (0.0);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
> (state=22005,code=203)
> {noformat}
> while it is signed in "tinyint_test"
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values 
> (127.0);
> 1 row affected (5.168 seconds)
> {noformat}
> Looking at the 
> [code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L220-L225],
>  looks like this is applicable to columns defined as 
> {{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}}
>  
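The coercion rule this report is asking for can be sketched like so (illustrative Python, not the Phoenix type system; the {{UNSIGNED_RANGES}} table and {{coercible}} function are assumptions): a DECIMAL literal should be assignable to an unsigned integral column whenever it is a whole number within the type's range, so a value like 0.0 for UNSIGNED_TINYINT would be accepted.

```python
# Illustrative sketch only -- NOT Phoenix's PDecimal coercion logic.
from decimal import Decimal

# Assumed value ranges for two of the unsigned types mentioned in the report.
UNSIGNED_RANGES = {"UNSIGNED_TINYINT": (0, 127), "UNSIGNED_SMALLINT": (0, 32767)}

def coercible(value: Decimal, type_name: str) -> bool:
    """A decimal is coercible if it is a whole number inside the type's range."""
    lo, hi = UNSIGNED_RANGES[type_name]
    return value == value.to_integral_value() and lo <= value <= hi
```

Under this rule, {{upsert into unsigned_tinyint_test values (0.0)}} would succeed, while 0.5 or 128 would still be rejected.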



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-03-29 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-2809:
--

 Summary: Alter table doesn't take into account current table 
definition
 Key: PHOENIX-2809
 URL: https://issues.apache.org/jira/browse/PHOENIX-2809
 Project: Phoenix
  Issue Type: Bug
Reporter: Biju Nair


{{Alter table}} to add a new column whose definition matches an existing 
column in the table succeeds, while the expectation is that the alter 
should fail. The following is an example.

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint not 
null primary key);
No rows affected (1.299 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not exists 
TI tinyint, col1 varchar;
No rows affected (15.962 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
(1,2,'add');
1 row affected (0.008 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
+-+-+---+
| TI  | TI  | COL1  |
+-+-+---+
| 1   | 1   | add   |
+-+-+---+
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2808) Can't upsert valid decimal values into unsigned datatype columns

2016-03-29 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-2808:
---
Description: 
Currently, {{decimal}} values can't be {{upsert}}-ed into columns defined with 
an unsigned datatype, while it is possible if the column is defined with a 
signed datatype. The following is an example where the column is defined as 
{{unsigned_tinyint}} in table "unsigned_tinyint_test":

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(0.0);
Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
(state=22005,code=203)
{noformat}

while it is signed in "tinyint_test"
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values (127.0);
1 row affected (5.168 seconds)
{noformat}

Looking at the 
[code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L298],
 looks like this is applicable to columns defined as 
{{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}} 

  was:
Currently {{decimal}} values can't be {{upset}} into columns defined as 
unsigned datatype while it is possible if the column is defined as signed 
datatype. The following is an example where the columns are defined as 
{{unsigned_tinyint}} in table "unsigned_tinyint_test"  

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(0.0);
Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
(state=22005,code=203)
{noformat}

while it is signed in "tinyint_test"
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values (127.0);
1 row affected (5.168 seconds)
{noformat}

Looking at the 
[code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L298],
 looks like this is applicable to columns defined as 
{{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}} 


> Can't upsert valid decimal values into unsigned datatype columns
> 
>
> Key: PHOENIX-2808
> URL: https://issues.apache.org/jira/browse/PHOENIX-2808
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> Currently, {{decimal}} values can't be {{upsert}}-ed into columns defined with 
> an unsigned datatype, while it is possible if the column is defined with a 
> signed datatype. The following is an example where the column is defined as 
> {{unsigned_tinyint}} in table "unsigned_tinyint_test":
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test 
> values (0.0);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and DECIMAL for 0.0 
> (state=22005,code=203)
> {noformat}
> while it is signed in "tinyint_test"
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into tinyint_test values 
> (127.0);
> 1 row affected (5.168 seconds)
> {noformat}
> Looking at the 
> [code|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java#L298],
>  looks like this is applicable to columns defined as 
> {{unsigned_int}},{{unsigned_long}},{{unsigned_small_int}},{{unsigned_tiny_int}}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1478) Can't upsert value of 127 into column of type unsigned tinyint

2016-03-29 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-1478:
---
Attachment: PHOENIX-1478-1.patch

Updated patch attached. Thanks [~jamestaylor] for pointing out other places 
with the same issue.

> Can't upsert value of 127 into column of type unsigned tinyint
> --
>
> Key: PHOENIX-1478
> URL: https://issues.apache.org/jira/browse/PHOENIX-1478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Carter Shanklin
>Priority: Minor
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1478-1.patch, PHOENIX-1478.patch
>
>
> The docs say values from 0 to 127 are valid. From sqlline I can upsert a 
> value of 126 but not 127. See below.
> {code}
> $ cat UnsignedTinyintFail.sql
> drop table if exists unsigned_tinyint_test;
> create table unsigned_tinyint_test (uti unsigned_tinyint primary key);
> upsert into unsigned_tinyint_test values (126);
> upsert into unsigned_tinyint_test values (127);
> {code}
> Results in:
> {code}
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> Setting property: [run, UnsignedTinyintFail.sql]
> issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
> 14/11/15 08:19:57 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> Connected to: Phoenix (version 4.2)
> Driver: PhoenixEmbeddedDriver (version 4.2)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 76/76 (100%) Done
> Done
> 1/4  drop table if exists unsigned_tinyint_test;
> No rows affected (0.015 seconds)
> 2/4  create table unsigned_tinyint_test (uti unsigned_tinyint primary 
> key);
> No rows affected (0.317 seconds)
> 3/4  upsert into unsigned_tinyint_test values (126);
> 1 row affected (0.032 seconds)
> 4/4  upsert into unsigned_tinyint_test values (127);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
>   at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:52)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:136)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:854)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:830)
>   at 
> org.apache.phoenix.parse.LiteralParseNode.accept(LiteralParseNode.java:73)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:721)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:458)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:259)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>   at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
>   at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>   at sqlline.SqlLine.dispatch(SqlLine.java:821)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1793)
>   at sqlline.SqlLine$Commands.run(SqlLine.java:4161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>   at sqlline.SqlLine.dispatch(SqlLine.java:817)
>   at sqlline.SqlLine.initArgs(SqlLine.java:657)
>   at sqlline.SqlLine.begin(SqlLine.java:680)
>   at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>   at sqlline.SqlLine.main(SqlLine.java:424)
> Aborting command set because "force" is false and 

[jira] [Commented] (PHOENIX-1478) Can't upsert value of 127 into column of type unsigned tinyint

2016-03-28 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215023#comment-15215023
 ] 

Biju Nair commented on PHOENIX-1478:


Some test results using the test table in the ticket description:

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(127);
1 row affected (5.129 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> upsert into unsigned_tinyint_test values 
(128);
Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 128 
(state=22005,code=203)
{noformat}
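One plausible shape of the original boundary bug, given that 126 worked but the documented maximum 127 did not (hypothetical Python, not the actual patch): an exclusive upper-bound comparison where an inclusive one was intended.

```python
# Hypothetical illustration of an off-by-one range check -- NOT the actual fix.
def in_range_exclusive(v, lo=0, hi=127):
    return lo <= v < hi   # buggy: rejects the documented maximum 127

def in_range_inclusive(v, lo=0, hi=127):
    return lo <= v <= hi  # corrected: accepts 127, still rejects 128
```

The test results above match the inclusive check: 127 is accepted and 128 is rejected.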

> Can't upsert value of 127 into column of type unsigned tinyint
> --
>
> Key: PHOENIX-1478
> URL: https://issues.apache.org/jira/browse/PHOENIX-1478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Carter Shanklin
>Priority: Minor
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1478.patch
>
>
> The docs say values from 0 to 127 are valid. From sqlline I can upsert a 
> value of 126 but not 127. See below.
> {code}
> $ cat UnsignedTinyintFail.sql
> drop table if exists unsigned_tinyint_test;
> create table unsigned_tinyint_test (uti unsigned_tinyint primary key);
> upsert into unsigned_tinyint_test values (126);
> upsert into unsigned_tinyint_test values (127);
> {code}
> Results in:
> {code}
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> Setting property: [run, UnsignedTinyintFail.sql]
> issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
> 14/11/15 08:19:57 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> Connected to: Phoenix (version 4.2)
> Driver: PhoenixEmbeddedDriver (version 4.2)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 76/76 (100%) Done
> Done
> 1/4  drop table if exists unsigned_tinyint_test;
> No rows affected (0.015 seconds)
> 2/4  create table unsigned_tinyint_test (uti unsigned_tinyint primary 
> key);
> No rows affected (0.317 seconds)
> 3/4  upsert into unsigned_tinyint_test values (126);
> 1 row affected (0.032 seconds)
> 4/4  upsert into unsigned_tinyint_test values (127);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
>   at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:52)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:136)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:854)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:830)
>   at 
> org.apache.phoenix.parse.LiteralParseNode.accept(LiteralParseNode.java:73)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:721)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:458)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:259)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>   at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
>   at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>   at sqlline.SqlLine.dispatch(SqlLine.java:821)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1793)
>   at sqlline.SqlLine$Commands.run(SqlLine.java:4161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>   at sqlline.SqlLine.dispatch(SqlLine.java:817)

[jira] [Updated] (PHOENIX-1478) Can't upsert value of 127 into column of type unsigned tinyint

2016-03-28 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-1478:
---
Attachment: PHOENIX-1478.patch

[~cartershanklin], I have verified that this is an issue; a patch is attached. Even 
though the fix is small, it would be good to categorize this issue as Major 
because of its impact.
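
For readers unfamiliar with the failure mode: rejecting exactly 127 while accepting 126 is the classic symptom of an exclusive upper-bound check where an inclusive one is intended. A minimal, hypothetical sketch — the class and method names below are illustrative only, not Phoenix's actual code:

```java
public class UnsignedTinyintCheck {
    // Hypothetical validator: UNSIGNED_TINYINT should accept 0..127 inclusive.
    // An exclusive comparison (<) rejects the maximum value 127.
    static boolean isValidBuggy(long value) {
        return value >= 0 && value < Byte.MAX_VALUE;   // rejects 127
    }

    // Inclusive comparison (<=) accepts the full documented range 0..127.
    static boolean isValidFixed(long value) {
        return value >= 0 && value <= Byte.MAX_VALUE;  // accepts 127
    }

    public static void main(String[] args) {
        System.out.println(isValidBuggy(127));  // false: the reported symptom
        System.out.println(isValidFixed(127));  // true: the expected behavior
    }
}
```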

> Can't upsert value of 127 into column of type unsigned tinyint
> --
>
> Key: PHOENIX-1478
> URL: https://issues.apache.org/jira/browse/PHOENIX-1478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Carter Shanklin
>Priority: Minor
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1478.patch
>
>
> The docs say values from 0 to 127 are valid. From sqlline I can upsert a 
> value of 126 but not 127. See below.
> {code}
> $ cat UnsignedTinyintFail.sql
> drop table if exists unsigned_tinyint_test;
> create table unsigned_tinyint_test (uti unsigned_tinyint primary key);
> upsert into unsigned_tinyint_test values (126);
> upsert into unsigned_tinyint_test values (127);
> {code}
> Results in:
> {code}
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> Setting property: [run, UnsignedTinyintFail.sql]
> issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
> 14/11/15 08:19:57 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> Connected to: Phoenix (version 4.2)
> Driver: PhoenixEmbeddedDriver (version 4.2)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 76/76 (100%) Done
> Done
> 1/4  drop table if exists unsigned_tinyint_test;
> No rows affected (0.015 seconds)
> 2/4  create table unsigned_tinyint_test (uti unsigned_tinyint primary 
> key);
> No rows affected (0.317 seconds)
> 3/4  upsert into unsigned_tinyint_test values (126);
> 1 row affected (0.032 seconds)
> 4/4  upsert into unsigned_tinyint_test values (127);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
>   at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:52)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:136)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:854)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:830)
>   at 
> org.apache.phoenix.parse.LiteralParseNode.accept(LiteralParseNode.java:73)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:721)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:458)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:259)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>   at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
>   at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>   at sqlline.SqlLine.dispatch(SqlLine.java:821)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1793)
>   at sqlline.SqlLine$Commands.run(SqlLine.java:4161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>   at sqlline.SqlLine.dispatch(SqlLine.java:817)
>   at sqlline.SqlLine.initArgs(SqlLine.java:657)
>   at sqlline.SqlLine.begin(SqlLine.java:680)
>   at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>   at 

[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-27 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213752#comment-15213752
 ] 

Biju Nair commented on PHOENIX-2783:


Thanks [~jamestaylor], [~sergey.soldatov] for your comments. Based on them, 
attached is a new patch for your review. The following are some sample 
exceptions generated for the three scenarios identified earlier.

1. Index create statement with duplicate columns
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> create table z (t1 varchar primary key, 
t2 varchar, t3 varchar);
No rows affected (1.445 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> create index idx on z (t2) include 
(t1,t3,t3);
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. In 
create index columnName=IDX.0.0:T3 (state=42702,code=502)
{noformat}

2.1 Table create statement with duplicate columns of different data types
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> create table tbl2 (i integer not null 
primary key, i integer, i varchar);
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. In 
create table columnName=TBL2.I (state=42702,code=502)
{noformat} 

2.2 Table create statement with duplicate columns of same data type
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> create table tbl1 (i integer not null 
primary key, i integer);
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. In 
create table columnName=TBL1.I (state=42702,code=502)
{noformat}

3 Duplicate columns in create view statement
{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> create view z_view (t1_v varchar, t1_v 
varchar) as select * from z where t1 = 'TEST';
Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. In 
create view columnName=Z_VIEW.T1_V (state=42702,code=502)
{noformat}

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch, 
> PHOENIX-2783-3.patch, PHOENIX-2783-INIT.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-27 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-2783:
---
Attachment: PHOENIX-2783-3.patch

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch, 
> PHOENIX-2783-3.patch, PHOENIX-2783-INIT.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-24 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated PHOENIX-2783:
---
Attachment: PHOENIX-2783-INIT.patch

As mentioned in the previous comment, this is a quick change which can help 
with the conversation.

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch, 
> PHOENIX-2783-INIT.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-24 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211238#comment-15211238
 ] 

Biju Nair commented on PHOENIX-2783:


Hi [~sergey.soldatov], since the change is in the {{DDL}} code path and the 
number of columns in an index will not be large, it may be worth trading the 
small difference in performance identified in your experiment for cleaner 
code using {{HashSet}}. But looking further into the 
[code|https://github.com/apache/phoenix/blob/dbc9ee9dfe9e168c45ad279f8478c59f0882240c/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java]
 I am not sure whether the proposed change is the correct one, for two 
reasons.

1. 
[createIndex|https://github.com/apache/phoenix/blob/dbc9ee9dfe9e168c45ad279f8478c59f0882240c/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L1117]
 in turn gets converted into a call to 
[createTable|https://github.com/apache/phoenix/blob/dbc9ee9dfe9e168c45ad279f8478c59f0882240c/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L1320]
 which already includes logic similar to the proposed fix for identifying 
duplicate columns; duplicating that code/logic can make future maintenance 
difficult.
2. There are other scenarios where duplicate columns in DDLs generate 
undesired results, i.e. the scope of this issue is not confined to index 
creation. The following are two which I came across.
a) In table creation, where two columns with the same name but different types 
are used in the DDL, the DDL generates an error in {{sqlline.py}}, but the table 
is still created and left in an unusable state, similar to the index creation 
issue reported in this jira ticket.
{noformat}
0: jdbc:phoenix:vm1:2181:/hbase> create table tbl2 (i integer not null primary 
key, i integer, i varchar);
Error: ERROR 514 (42892): A duplicate column name was detected in the object 
definition or ALTER TABLE statement. columnName=TBL2.I (state=42892,code=514)
{noformat}
b) Again in table creation, if the DDL has two columns with the same name and 
type, the creation goes through successfully and the table is usable. But for 
users coming from a SQL database background, this is not the expected behavior.
{noformat}
0: jdbc:phoenix:vm1:2181:/hbase> create table tbl1 (i integer not null primary 
key, i integer);
No rows affected (0.632 seconds)
0: jdbc:phoenix:vm1:2181:/hbase> upsert into tbl1 values (1, 2);
1 row affected (0.057 seconds)
0: jdbc:phoenix:vm1:2181:/hbase> select * from tbl1;
+----+----+
| I  | I  |
+----+----+
| 1  | 2  |
+----+----+
1 row selected (0.084 seconds)
{noformat}
Since the issue impacts both index and table creation, it may be better to 
have the duplicate-checking logic at the start of the 
[createTableInternal|https://github.com/apache/phoenix/blob/dbc9ee9dfe9e168c45ad279f8478c59f0882240c/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L1502]
 method. I have done a quick change to test it, and it handles at least these 
three scenarios. I will attach the change as a patch file to help with the 
conversation. It would be good to get feedback from project members like 
[~jamestaylor], who knows more about the history of the code, on whether we 
are approaching this in the right direction.
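
The proposed check can be sketched as a single pass over the DDL's column definitions, keyed by family-qualified column name. This is an illustration only — Phoenix's real code works on {{ColumnDef}} parse nodes inside {{MetaDataClient}}, and plain strings stand in for them here:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateColumnCheck {
    // Scan the statement's columns once; HashSet.add returns false when the
    // element is already present, so the first repeat is caught immediately,
    // before anything is written to SYSTEM.CATALOG.
    static String findDuplicate(List<String> qualifiedColumnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : qualifiedColumnNames) {
            if (!seen.add(name)) {
                return name;  // first duplicate, e.g. "0:T3"
            }
        }
        return null;  // no duplicates in the definition
    }

    public static void main(String[] args) {
        // Mirrors "create index idx on z (t2) include (t1,t3,t3)"
        System.out.println(findDuplicate(List.of("0:T1", "0:T2", "0:T3", "0:T3")));
        System.out.println(findDuplicate(List.of("0:T1", "0:T2")));
    }
}
```

Placed at the top of a shared entry point such as createTableInternal, one check like this would cover the index, table, and view scenarios alike.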

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-24 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211063#comment-15211063
 ] 

Biju Nair edited comment on PHOENIX-2783 at 3/24/16 10:29 PM:
--

If anyone comes across the scenario detailed in the description of this issue, 
i.e. a table becoming unusable due to duplicate columns used while creating an 
index, the following may be a way to recover. Please validate the process 
yourself before using it.

Through Phoenix {{sqlline.py}} delete relevant {{SYSTEM.CATALOG}} entries
{noformat}
delete from SYSTEM.CATALOG where table_name = 'IDX';

delete from SYSTEM.CATALOG where COLUMN_FAMILY = 'IDX' and TABLE_NAME = 'X';
{noformat}

Through the HBase shell, delete the index created with duplicate columns

{noformat}
$ hbase shell

hbase(main):002:0> disable 'IDX'
0 row(s) in 1.6090 seconds

hbase(main):006:0> drop 'IDX'
0 row(s) in 0.3120 seconds
{noformat}


was (Author: gsbiju):
If anyone face the scenario detailed in the description of this issue i.e. 
table not being usable due to duplicate columns used while creating an index, 
the following may be way to recover. Please validate the process yourself 
before using it.

Through Phoenix {{sqlline.py}} delete relevant {{SYSTEM.CATALOG}} entries
{noformat}
delete from SYSTEM.CATALOG where table_name = 'IDX';

delete from SYSTEM.CATALOG where COLUMN_FAMILY = 'IDX' and TABLE_NAME = 'X';
{noformat}

Through base shell, delete the index created with duplicate columns

{noformat}
$ hbase shell

hbase(main):002:0> disable 'IDX'
0 row(s) in 1.6090 seconds

hbase(main):006:0> drop 'IDX'
0 row(s) in 0.3120 seconds
{noformat}

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-24 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211063#comment-15211063
 ] 

Biju Nair commented on PHOENIX-2783:


If anyone faces the scenario detailed in the description of this issue, i.e. 
a table becoming unusable due to duplicate columns used while creating an index, 
the following may be a way to recover. Please validate the process yourself 
before using it.

Through Phoenix {{sqlline.py}} delete relevant {{SYSTEM.CATALOG}} entries
{noformat}
delete from SYSTEM.CATALOG where table_name = 'IDX';

delete from SYSTEM.CATALOG where COLUMN_FAMILY = 'IDX' and TABLE_NAME = 'X';
{noformat}

Through the HBase shell, delete the index created with duplicate columns

{noformat}
$ hbase shell

hbase(main):002:0> disable 'IDX'
0 row(s) in 1.6090 seconds

hbase(main):006:0> drop 'IDX'
0 row(s) in 0.3120 seconds
{noformat}

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-24 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211063#comment-15211063
 ] 

Biju Nair edited comment on PHOENIX-2783 at 3/24/16 10:29 PM:
--

If anyone faces the scenario detailed in the description of this issue, i.e. 
a table becoming unusable due to duplicate columns used while creating an index, 
the following may be a way to recover. Please validate the process yourself 
before using it.

Through Phoenix {{sqlline.py}} delete relevant {{SYSTEM.CATALOG}} entries
{noformat}
delete from SYSTEM.CATALOG where table_name = 'IDX';

delete from SYSTEM.CATALOG where COLUMN_FAMILY = 'IDX' and TABLE_NAME = 'X';
{noformat}

Through the HBase shell, delete the index created with duplicate columns

{noformat}
$ hbase shell

hbase(main):002:0> disable 'IDX'
0 row(s) in 1.6090 seconds

hbase(main):006:0> drop 'IDX'
0 row(s) in 0.3120 seconds
{noformat}


was (Author: gsbiju):
If anyone face the scenario detailed in the description of this issue i.e. 
table not being usable due to duplicate columns used while creating an index, 
the following may be way to recover. Please validate the process yourself 
before using it.

Through Phoenix {{sqlline.py}} delete relevant {{SYSTEM.CATALOG}} entries
{noformat}
delete from SYSTEM.CATALOG where table_name = 'IDX';

delete from SYSTEM.CATALOG where COLUMN_FAMILY = 'IDX' and TABLE_NAME = 'X';
{{noformat}}

Through base shell, delete the index created with duplicate columns

{{noformat}}
$ hbase shell

hbase(main):002:0> disable 'IDX'
0 row(s) in 1.6090 seconds

hbase(main):006:0> drop 'IDX'
0 row(s) in 0.3120 seconds
{{noformat}} 

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15209236#comment-15209236
 ] 

Biju Nair commented on PHOENIX-2783:


Thanks [~sergey.soldatov] for sharing the experiment results. Just to confirm, 
the {{listMultimapTest}} was simulating the nested {{for}} loop as in the 
patch, correct? I agree about the string concatenation, but thought the code 
would be a bit cleaner.

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15208775#comment-15208775
 ] 

Biju Nair commented on PHOENIX-2783:


Hi [~sergey.soldatov], would it make the code better if we used a {{hash}} with 
{{familyName+columnName}} as the key and a dummy value of 0, so that the 
duplicate check for a column in the index definition statement becomes a simple 
{{get}}, followed by a {{put}} if the key is not already in the {{hash}}? Just a 
thought.
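
The idea above can be sketched as follows; note that {{Map.put}} returns the previous value, which collapses the {{get}}-then-{{put}} pair into a single call. Names here are illustrative, not Phoenix code:

```java
import java.util.HashMap;
import java.util.Map;

public class DedupViaMap {
    // Key each column by familyName + ":" + columnName with a dummy value of 0.
    // put() returns the previous mapping (or null), so a non-null return means
    // the key was already present, i.e. the column is a duplicate.
    static boolean isDuplicate(Map<String, Integer> seen, String family, String column) {
        return seen.put(family + ":" + column, 0) != null;
    }

    public static void main(String[] args) {
        Map<String, Integer> seen = new HashMap<>();
        System.out.println(isDuplicate(seen, "0", "T3"));  // false: first occurrence
        System.out.println(isDuplicate(seen, "0", "T3"));  // true: repeat detected
    }
}
```

In practice a {{HashSet<String>}} gives the same behavior with less ceremony, since {{Set.add}} already reports whether the element was new.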

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> cause an exception that duplicated column was detected, but the client 
> updates the catalog before throwing it and makes it unusable. All following 
> attempt to use table x cause an exception ArrayIndexOutOfBounds. This problem 
> was discussed on the user list recently. 
> The cause of the problem is that check for duplicated columns happen in 
> PTableImpl after MetaDataClient complete the server createTable. 
> The simple way to fix is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possible someone can suggest a more elegant way to fix it? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2775) Phoenix tracing webapp runnable jar missing in the binary distribution

2016-03-15 Thread Biju Nair (JIRA)
Biju Nair created PHOENIX-2775:
--

 Summary: Phoenix tracing webapp runnable jar missing in the binary 
distribution
 Key: PHOENIX-2775
 URL: https://issues.apache.org/jira/browse/PHOENIX-2775
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Biju Nair
Priority: Minor


The Phoenix tracing webapp runnable jar is not included in the binary 
distribution, so the web app fails to start. The failure is due to the pattern 
used in 
[phoenix_utils.py|https://github.com/apache/phoenix/blob/4.x-HBase-0.98/bin/phoenix_utils.py#L74]
 not finding the jar file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)