[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit off for deletes

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4900:
---
Attachment: PHOENIX-4900-4.x-HBase-1.4.patch

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit off for deletes
> 
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4900-4.x-HBase-1.4.patch, PHOENIX-4900.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5063) Create a new repo for the phoenix query server

2019-03-05 Thread Karan Mehta (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta resolved PHOENIX-5063.
--
Resolution: Fixed

Thanks [~tdsilva] and [~elserj] for the review. Your inputs were really helpful.

Major changes include:

Removal of the phoenix-load-balancer, phoenix-queryserver, and 
phoenix-queryserver-client modules from the main repo at 
[https://github.com/apache/phoenix] and their move to 
[https://github.com/apache/phoenix-queryserver] (master branch).

The new repo also includes an assembly plugin that produces a tarball 
suitable for release and distribution.

Relevant documentation will also be updated soon.

> Create a new repo for the phoenix query server
> --
>
> Key: PHOENIX-5063
> URL: https://issues.apache.org/jira/browse/PHOENIX-5063
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5063.4.x-HBase-1.4.001.patch, 
> PHOENIX-5063.4.x-HBase-1.4.phoenix-queryserver.001.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Move phoenix-queryserver and phoenix-queryserver-client into a new repo that 
> can be compiled with Java 1.8. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5177) Update PQS documentation for PhoenixCanaryTool

2019-03-05 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5177:
---

 Summary: Update PQS documentation for PhoenixCanaryTool
 Key: PHOENIX-5177
 URL: https://issues.apache.org/jira/browse/PHOENIX-5177
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Add details about how to use the Canary Tool. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-05 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Fix Version/s: 4.15.0

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5172.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Add retry logic when getting the connection URL (a minimal sketch follows 
> this quoted description)
>  # Remove the assignment of schema_name to null
>  # Add more logging
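
As referenced in item 1 above, here is a minimal sketch of a bounded-retry loop 
around opening the connection. The class name, JDBC URL, and linear backoff 
policy are assumptions for illustration and not the actual PhoenixCanaryTool 
code.

{code:java}
// A minimal sketch, assuming maxAttempts >= 1 and a Phoenix thin-client URL;
// this is not the actual PhoenixCanaryTool implementation.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CanaryConnectRetry {
    static Connection connectWithRetries(String jdbcUrl, int maxAttempts, long backoffMillis)
            throws SQLException, InterruptedException {
        SQLException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // e.g. jdbcUrl = "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF"
                return DriverManager.getConnection(jdbcUrl);
            } catch (SQLException e) {
                lastFailure = e;
                System.err.println("Connection attempt " + attempt + "/" + maxAttempts
                        + " failed: " + e.getMessage());
                Thread.sleep(backoffMillis * attempt); // back off a little longer each time
            }
        }
        throw lastFailure;
    }
}
{code}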



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit off for deletes

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4900:
---
Attachment: (was: PHOENIX-4900.patch)

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit off for deletes
> 
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4900.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit off for deletes

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4900:
---
Attachment: PHOENIX-4900.patch

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit off for deletes
> 
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4900.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit off for deletes

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4900:
---
Attachment: (was: PHOENIX-4900.patch)

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit off for deletes
> 
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4900.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-03-05 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5176:

Priority: Blocker  (was: Major)

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> In KeyRange.java,
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last return statement swaps its arguments; it should read "return 
> Boolean.compare(rowKeyRange1.isUpperInclusive(), 
> rowKeyRange2.isUpperInclusive());". Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 
> 1 because of this bug.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> 
> rowKeyRanges2). Given rowKeyRanges1 {[3, 5), [5, 6)} and rowKeyRanges2 {[3, 
> 5], [6, 7]}, the function should return {[3, 5), [5, 5]}, i.e., {[3, 5]}, 
> but it appears to return {[3, 5)} because of the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-03-05 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-5176:
-
Fix Version/s: (was: 4.14.2)
   (was: 5.1.0)

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Blocker
> Fix For: 4.15.0
>
>
> In KeyRange.java,
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last return statement swaps its arguments; it should read "return 
> Boolean.compare(rowKeyRange1.isUpperInclusive(), 
> rowKeyRange2.isUpperInclusive());". Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 
> 1 because of this bug.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> 
> rowKeyRanges2). Given rowKeyRanges1 {[3, 5), [5, 6)} and rowKeyRanges2 {[3, 
> 5], [6, 7]}, the function should return {[3, 5), [5, 5]}, i.e., {[3, 5]}, 
> but it appears to return {[3, 5)} because of the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-03-05 Thread Bin Shi (JIRA)
Bin Shi created PHOENIX-5176:


 Summary: KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns 
wrong result when two key ranges have the same upper bound values but one is 
inclusive and another is exclusive 
 Key: PHOENIX-5176
 URL: https://issues.apache.org/jira/browse/PHOENIX-5176
 Project: Phoenix
  Issue Type: Bug
Reporter: Bin Shi


In KeyRange.java,
{code:java}
public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
    int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
    if (result != 0) {
        return result;
    }
    result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
    if (result != 0) {
        return result;
    }
    return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
}
{code}
The last return statement swaps its arguments; it should read "return 
Boolean.compare(rowKeyRange1.isUpperInclusive(), 
rowKeyRange2.isUpperInclusive());". Given rowKeyRange1 [3, 5) and rowKeyRange2 
[3, 5], the function should return -1, but it currently returns 1 because of 
this bug.
 
KeyRange.compareUpperRange is only used in 
KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2). 
Given rowKeyRanges1 {[3, 5), [5, 6)} and rowKeyRanges2 {[3, 5], [6, 7]}, the 
function should return {[3, 5), [5, 5]}, i.e., {[3, 5]}, but it appears to 
return {[3, 5)} because of the bug.
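
For reference, below is a self-contained sketch of the proposed argument-order 
fix, using a simplified stand-in class rather than the real Phoenix KeyRange; 
the field names and the byte comparison are illustrative assumptions, not the 
actual implementation.

{code:java}
import java.util.Arrays;

// Minimal stand-in for the relevant parts of Phoenix's KeyRange; field and
// method names here are illustrative, not the real class.
final class SimpleKeyRange {
    final byte[] upperRange;
    final boolean upperInclusive;
    final boolean upperUnbound;

    SimpleKeyRange(byte[] upperRange, boolean upperInclusive, boolean upperUnbound) {
        this.upperRange = upperRange;
        this.upperInclusive = upperInclusive;
        this.upperUnbound = upperUnbound;
    }

    // Proposed ordering: an exclusive upper bound sorts before an inclusive one
    // with the same value, i.e. [3, 5) < [3, 5].
    static int compareUpperRange(SimpleKeyRange r1, SimpleKeyRange r2) {
        int result = Boolean.compare(r1.upperUnbound, r2.upperUnbound);
        if (result != 0) {
            return result;
        }
        // Stand-in for Bytes.BYTES_COMPARATOR, which compares bytes as unsigned values.
        result = Arrays.compareUnsigned(r1.upperRange, r2.upperRange);
        if (result != 0) {
            return result;
        }
        // Fixed argument order: r1 first, then r2.
        return Boolean.compare(r1.upperInclusive, r2.upperInclusive);
    }

    public static void main(String[] args) {
        SimpleKeyRange exclusiveUpper5 = new SimpleKeyRange(new byte[] {5}, false, false); // like [3, 5)
        SimpleKeyRange inclusiveUpper5 = new SimpleKeyRange(new byte[] {5}, true, false);  // like [3, 5]
        // Prints -1: the exclusive upper bound is considered smaller.
        System.out.println(compareUpperRange(exclusiveUpper5, inclusiveUpper5));
    }
}
{code}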



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-03-05 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5176:

Fix Version/s: 4.14.2
   5.1.0
   4.15.0

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> In KeyRange.java,
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last return statement swaps its arguments; it should read "return 
> Boolean.compare(rowKeyRange1.isUpperInclusive(), 
> rowKeyRange2.isUpperInclusive());". Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 
> 1 because of this bug.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> 
> rowKeyRanges2). Given rowKeyRanges1 {[3, 5), [5, 6)} and rowKeyRanges2 {[3, 
> 5], [6, 7]}, the function should return {[3, 5), [5, 5]}, i.e., {[3, 5]}, 
> but it appears to return {[3, 5)} because of the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-03-05 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi reassigned PHOENIX-5176:


Assignee: Bin Shi

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
>
> In KeyRange.java,
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last return statement swaps its arguments; it should read "return 
> Boolean.compare(rowKeyRange1.isUpperInclusive(), 
> rowKeyRange2.isUpperInclusive());". Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 
> 1 because of this bug.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> 
> rowKeyRanges2). Given rowKeyRanges1 {[3, 5), [5, 6)} and rowKeyRanges2 {[3, 
> 5], [6, 7]}, the function should return {[3, 5), [5, 5]}, i.e., {[3, 5]}, 
> but it appears to return {[3, 5)} because of the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-03-05 Thread Jacob Isaac (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5122:
-
Attachment: PHOENIX-5122.patch

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.14.1, 4.14.2
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario:
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> +--+--+
> +*No rows selected (0.033 seconds)*+
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}
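
A minimal JDBC sketch of the same repro, assuming a local connection URL and 
reusing the table and query from the transcript above (not part of the 
original report):

{code:java}
// A sketch only: with an affected 4.13 client against a 4.14.1 server, the
// row value constructor IN query below returns no rows instead of two.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowValueConstructorRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS P_T02 (oid VARCHAR NOT NULL, code VARCHAR NOT NULL "
                    + "CONSTRAINT pk PRIMARY KEY (oid DESC, code DESC))");
            stmt.execute("UPSERT INTO P_T02 (oid, code) VALUES ('0001', 'v0001')");
            stmt.execute("UPSERT INTO P_T02 (oid, code) VALUES ('0002', 'v0002')");
            conn.commit();
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM P_T02 WHERE (oid, code) IN (('0001', 'v0001'), ('0002', 'v0002'))")) {
                int rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.println("Rows returned: " + rows + " (expected 2)");
            }
        }
    }
}
{code}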



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4929) IndexOutOfBoundsException when casting timestamp to date

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4929:
---
Attachment: PHOENIX-4929.patch

> IndexOutOfBoundsException when casting timestamp to date
> 
>
> Key: PHOENIX-4929
> URL: https://issues.apache.org/jira/browse/PHOENIX-4929
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Vincent Poon
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4929.patch, PHOENIX-4929.patch, 
> QueryCompilerTest.java
>
>
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>  at java.util.ArrayList.get(ArrayList.java:429)
>  at 
> org.apache.phoenix.expression.function.RoundTimestampExpression.create(RoundTimestampExpression.java:76)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.convertToRoundExpressionIfNeeded(ExpressionCompiler.java:594)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:621)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:1)
>  at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:62)
>  at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:564)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:510)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:195)
>  at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:155)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1745)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1738)
>  at  
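
Judging from the title and the RoundTimestampExpression frame above, the 
failing statement appears to be a CAST of a TIMESTAMP column to DATE in a 
SELECT list. A hedged sketch of that kind of query follows; the table and 
column names are invented for illustration only.

{code:java}
// A hedged sketch of the kind of query implied by the stack trace; this is a
// guess at the trigger, not a confirmed repro from the report.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CastTimestampToDateExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // CAST of a TIMESTAMP column to DATE is compiled through RoundTimestampExpression
             ResultSet rs = stmt.executeQuery("SELECT CAST(created_ts AS DATE) FROM events")) {
            while (rs.next()) {
                System.out.println(rs.getDate(1));
            }
        }
    }
}
{code}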



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5175) Separate client settings for disabling server side mutations for upserts and deletes

2019-03-05 Thread Abhishek Singh Chouhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated PHOENIX-5175:

Attachment: PHOENIX-5175-master-v2.patch

> Separate client settings for disabling server side mutations for upserts and 
> deletes
> 
>
> Key: PHOENIX-5175
> URL: https://issues.apache.org/jira/browse/PHOENIX-5175
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
> Attachments: PHOENIX-5175-master-v2.patch, PHOENIX-5175-master.patch
>
>
> PHOENIX-5026 added a setting that disables server-side mutations for both 
> upserts and deletes. This change builds on that and provides separate knobs 
> for upserts and deletes.
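
As a sketch of how such per-operation knobs are typically consumed, a client 
could pass them as connection properties; the property keys below are 
placeholders, not the actual configuration names added by this patch.

{code:java}
// A sketch only: the property keys are placeholders for whatever keys the
// patch actually defines; they are not confirmed Phoenix configuration names.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SeparateMutationKnobsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("example.enable.server.side.upsert.mutations", "false"); // placeholder key
        props.setProperty("example.enable.server.side.delete.mutations", "true");  // placeholder key
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            System.out.println("Connected with per-operation mutation settings: " + props);
        }
    }
}
{code}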



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2019-03-05 Thread gabry (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gabry updated PHOENIX-5170:
---
Description: 
I have a Flume client that inserts values into a Phoenix table with an index 
named idx_abc.
When idx_abc is dropped, Flume logs WARN messages forever, as follows:

28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
(org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
  - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the 
index failed.  disableIndexOnFailure=true, Failed to write to multiple index 
tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
at 
org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
failed.  disableIndexOnFailure=true, Failed to write to multiple index tables: 
[PHOENIX:IDX_ABC]
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
... 21 more
Caused by: 
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
disableIndexOnFailure=true, Failed to write to multiple index tables: 
[PHOENIX:IDX_ABC]
at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
... 20 more
 on bigdata.om,60020,1551245714859, tracking started Thu Feb 28 10:25:55 CST 
2019; not retrying 6 - final failure
28 Feb 2019 10:25:55,774 INFO  [SinkRunner-PollingRunner-DefaultSinkProcessor] 
(org.apache.phoenix.index.PhoenixIndexFailurePolicy.updateIndex:502)  - 
Disabling index after hitting max number of index write retries: PHOENIX:IDX_ABC
28 Feb 2019 10:25:55,776 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] 
(org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleExceptionFromClient:421)
  - Error while trying to handle index write exception
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
disableIndexOnFailure=true, Failed to write to multiple index tables: 
[PHOENIX:IDX_ABC]
at 

[jira] [Updated] (PHOENIX-4345) Error message for incorrect index is not accurate

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-4345:
---
Attachment: PHOENIX-4345-4.x-HBase-1.4.patch

> Error message for incorrect index is not accurate
> -
>
> Key: PHOENIX-4345
> URL: https://issues.apache.org/jira/browse/PHOENIX-4345
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Xinyi Yan
>Priority: Trivial
> Attachments: PHOENIX-4345-4.x-HBase-1.4.patch, PHOENIX-4345.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The error message for an incorrect index is not accurate. It shows "Table 
> undefined." when it should say that the index is undefined.
> Table name: PERSON
> Index name: LOCAL_ADDRESS
> 0: jdbc:phoenix:localhost:2181:/hbase> ALTER INDEX LOCAL_ADDRESSX ON PERSON 
> rebuild;
> Error: ERROR 1012 (42M03): Table undefined. tableName=LOCAL_ADDRESSX 
> (state=42M03,code=1012)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4929) IndexOutOfBoundsException when casting timestamp to date

2019-03-05 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan reassigned PHOENIX-4929:
--

 Assignee: Xinyi Yan
Affects Version/s: (was: 4.14.0)
   Attachment: PHOENIX-4929.patch

> IndexOutOfBoundsException when casting timestamp to date
> 
>
> Key: PHOENIX-4929
> URL: https://issues.apache.org/jira/browse/PHOENIX-4929
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Vincent Poon
>Assignee: Xinyi Yan
>Priority: Major
> Attachments: PHOENIX-4929.patch, QueryCompilerTest.java
>
>
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>  at java.util.ArrayList.get(ArrayList.java:429)
>  at 
> org.apache.phoenix.expression.function.RoundTimestampExpression.create(RoundTimestampExpression.java:76)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.convertToRoundExpressionIfNeeded(ExpressionCompiler.java:594)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:621)
>  at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:1)
>  at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:62)
>  at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:564)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:510)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:195)
>  at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:155)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1745)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.compileQuery(PhoenixStatement.java:1738)
>  at  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2019-03-05 Thread gabry (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gabry updated PHOENIX-5170:
---
 Flags: Patch,Important
External issue URL: https://issues.apache.org/jira/browse/PHOENIX-5170

> Update meta timestamp of parent table when dropping index
> -
>
> Key: PHOENIX-5170
> URL: https://issues.apache.org/jira/browse/PHOENIX-5170
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: gabry
>Priority: Major
>  Labels: phoenix
> Fix For: 5.1.0
>
> Attachments: updateParentTableMetaWhenDroppingIndex.patch
>
>
> I have a Flume client that inserts values into a Phoenix table with an index 
> named idx_abc.
> When idx_abc is dropped, Flume logs WARN messages forever, as follows:
> 28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
> (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
>   - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed.  disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
> at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed.  disableIndexOnFailure=true, Failed to write to multiple index 
> tables: [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
> ... 21 more
> Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
> disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
> 

[jira] [Updated] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2019-03-05 Thread gabry (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gabry updated PHOENIX-5170:
---
Comment: was deleted

(was: Any body here:()

> Update meta timestamp of parent table when dropping index
> -
>
> Key: PHOENIX-5170
> URL: https://issues.apache.org/jira/browse/PHOENIX-5170
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: gabry
>Priority: Major
>  Labels: phoenix
> Fix For: 5.1.0
>
> Attachments: updateParentTableMetaWhenDroppingIndex.patch
>
>
> I have a Flume client that inserts values into a Phoenix table with an index 
> named idx_abc.
> When idx_abc is dropped, Flume logs WARN messages forever, as follows:
> 28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
> (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
>   - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed.  disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
> at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed.  disableIndexOnFailure=true, Failed to write to multiple index 
> tables: [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
> ... 21 more
> Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
> disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
> ... 20 more
>  on bigdata-125.2345.com,60020,1551245714859, 

[jira] [Updated] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2019-03-05 Thread gabry (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gabry updated PHOENIX-5170:
---
   Attachment: updateParentTableMetaWhenDroppingIndex.patch
Fix Version/s: 5.1.0
  Description: 
I have a Flume client that inserts values into a Phoenix table with an index 
named idx_abc.
When idx_abc is dropped, Flume logs WARN messages forever, as follows:

28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
(org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
  - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the 
index failed.  disableIndexOnFailure=true, Failed to write to multiple index 
tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
at 
org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
failed.  disableIndexOnFailure=true, Failed to write to multiple index tables: 
[PHOENIX:IDX_ABC]
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
... 21 more
Caused by: 
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
disableIndexOnFailure=true, Failed to write to multiple index tables: 
[PHOENIX:IDX_ABC]
at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
... 20 more
 on bigdata-125.2345.com,60020,1551245714859, tracking started Thu Feb 28 
10:25:55 CST 2019; not retrying 6 - final failure
28 Feb 2019 10:25:55,774 INFO  [SinkRunner-PollingRunner-DefaultSinkProcessor] 
(org.apache.phoenix.index.PhoenixIndexFailurePolicy.updateIndex:502)  - 
Disabling index after hitting max number of index write retries: PHOENIX:IDX_ABC
28 Feb 2019 10:25:55,776 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] 
(org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleExceptionFromClient:421)
  - Error while trying to handle index write exception
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: