[jira] [Updated] (PHOENIX-4721) Adding PK column to a table with multiple secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4721:
--
Summary: Adding PK column to a table with multiple secondary indexes fails  
(was: Issuing ALTER TABLE to add a PK Column to a table with multiple secondary 
indexes fails)

> Adding PK column to a table with multiple secondary indexes fails
> -
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with multiple secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4721:
--
Summary: Issuing ALTER TABLE to add a PK Column to a table with multiple 
secondary indexes fails  (was: Issuing ALTER TABLE to add a PK Column to a 
table with secondary indexes fails)

> Issuing ALTER TABLE to add a PK Column to a table with multiple secondary 
> indexes fails
> ---
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459467#comment-16459467
 ] 

James Taylor edited comment on PHOENIX-4721 at 5/1/18 5:52 AM:
---

Please review, [~tdsilva]. This issue is due to two indexes being added - we 
should only increment the pk slot position after processing all indexes. Thanks 
for the test - that made things easy, [~jfernando_sfdc].
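
To make the off-by-one concrete, here is a self-contained toy (illustrative only, not the actual MetaDataClient code) showing why advancing the slot position once per index exhausts the PK column iterator on the second index:

{code:java}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Toy model of the bug: the slot position of the newly added PK column must
// stay fixed while each index is processed and be advanced only once, after
// the loop over all indexes.
public class PkSlotPositionSketch {
    public static void main(String[] args) {
        List<String> pkColumns = Arrays.asList("ID", "SOURCE"); // old PK plus the new column
        int position = pkColumns.size() - 1;                    // slot of the new PK column
        for (String index : Arrays.asList("IDX1", "IDX2")) {
            Iterator<String> it = pkColumns.iterator();
            for (int i = 0; i <= position; i++) {
                it.next(); // with the buggy per-index position++ below, this
                           // throws NoSuchElementException while handling IDX2
            }
            System.out.println(index + " uses slot " + position);
            // position++; // BUG: advancing here breaks the second index
        }
        position++; // FIX: advance the slot position only after all indexes
    }
}
{code}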


was (Author: jamestaylor):
Please review, [~tdsilva]. This issue is due to two indexes being added - we 
should only increment the pk slot position after processing all indexes.

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4721:
-

Assignee: James Taylor

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459467#comment-16459467
 ] 

James Taylor commented on PHOENIX-4721:
---

Please review, [~tdsilva]. This issue is due to two indexes being added - we 
should only increment the pk slot position after processing all indexes.

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4721:
--
Fix Version/s: 5.0.0
   4.14.0

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4721:
--
Attachment: PHOENIX-4721_v1.patch

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Priority: Major
> Attachments: AlterTableExtendPk.java, PHOENIX-4721_v1.patch
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-04-30 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459452#comment-16459452
 ] 

Thomas D'Silva commented on PHOENIX-3534:
-

I have created a doc describing the major remaining work and would appreciate 
any feedback. 

https://docs.google.com/document/d/1g39s-9JfTZUtW5TpUjuh9HJhPrIoING1GFHKg2HYcYM/edit?usp=sharing

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-3534-wip.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is a single region 
> based on the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for their change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.
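
A toy illustration of that optimistic scheme (not Phoenix code; compareAndSet stands in for the per-row lock plus sequence-number bump):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Sketch: each table/view row carries a sequence number; a DDL writer wins
// only if the number it read is still current when it commits, and it bumps
// the number atomically, so a concurrent writer is detected and retries.
public class OptimisticDdlSketch {
    private final AtomicLong sequenceNumber = new AtomicLong(0);

    boolean tryApplyDdl(Runnable change) {
        long seen = sequenceNumber.get();          // sequence number before the change
        if (!sequenceNumber.compareAndSet(seen, seen + 1)) {
            return false;                          // concurrent modification: caller retries
        }
        change.run();                              // safe to push the change to derived views
        return true;
    }

    public static void main(String[] args) {
        OptimisticDdlSketch row = new OptimisticDdlSketch();
        System.out.println(row.tryApplyDdl(() -> System.out.println("ALTER applied")));
    }
}
{code}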



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459449#comment-16459449
 ] 

James Taylor commented on PHOENIX-4718:
---

Thanks, [~tdsilva]. I attached a cleaned-up v3 patch that:
 * Makes the size increase configurable
 * Uses separate ServerAggregators classes for the tracking versus non-tracking 
case

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch, PHOENIX-4718_v3.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4718:
--
Fix Version/s: 5.0.0
   4.14.0

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch, PHOENIX-4718_v3.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4718:
--
Attachment: PHOENIX-4718_v3.patch

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch, PHOENIX-4718_v3.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459431#comment-16459431
 ] 

Thomas D'Silva commented on PHOENIX-4718:
-

LGTM

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459410#comment-16459410
 ] 

James Taylor commented on PHOENIX-4719:
---

+1. Sure, I can push to all branches. Thanks for figuring this out, [~pboado]!

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread Jan Fernando (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Fernando updated PHOENIX-4721:
--
Attachment: AlterTableExtendPk.java

> Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails
> --
>
> Key: PHOENIX-4721
> URL: https://issues.apache.org/jira/browse/PHOENIX-4721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jan Fernando
>Priority: Major
> Attachments: AlterTableExtendPk.java
>
>
> The expected behavior when adding a PK column to a table is that the column 
> will successfully be added, even if the table has secondary indexes.
> For example:
> {code:java}
> ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
> {code}
> should execute successfully even if the table has secondary indexes defined.
> However, issuing the above ALTER statement on a table with secondary indexes 
> throws the following Exception:
> {code:java}
> java.util.NoSuchElementException
> at java.util.ArrayList$Itr.next(ArrayList.java:854)
> at 
> org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
> at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
> {code}
> See attached file for a detailed repro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4721) Issuing ALTER TABLE to add a PK Column to a table with secondary indexes fails

2018-04-30 Thread Jan Fernando (JIRA)
Jan Fernando created PHOENIX-4721:
-

 Summary: Issuing ALTER TABLE to add a PK Column to a table with 
secondary indexes fails
 Key: PHOENIX-4721
 URL: https://issues.apache.org/jira/browse/PHOENIX-4721
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.0
Reporter: Jan Fernando


The expected behavior when adding a PK column to a table is that the column will 
successfully be added, even if the table has secondary indexes.

For example:
{code:java}
ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY
{code}
should execute successfully even if the table has secondary indexes defined.

However, issuing the above ALTER statement on a table with secondary indexes 
throws the following Exception:
{code:java}
java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:854)
at 
org.apache.phoenix.schema.RowKeyValueAccessor.<init>(RowKeyValueAccessor.java:78)
at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3452)
at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3120)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1328)
at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:393)
at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:269)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
{code}
See attached file for a detailed repro.
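
Pending the attachment, a minimal repro of the shape described above might look like this (table and index names are invented; AlterTableExtendPk.java is the authoritative repro):

{code:java}
CREATE TABLE TEST.ACTIVITY (ID VARCHAR NOT NULL, CREATED_DATE DATE NOT NULL
    CONSTRAINT PK PRIMARY KEY (ID, CREATED_DATE));
-- two secondary indexes: the failure only shows up with more than one
CREATE INDEX ACTIVITY_IDX1 ON TEST.ACTIVITY (CREATED_DATE);
CREATE INDEX ACTIVITY_IDX2 ON TEST.ACTIVITY (CREATED_DATE, ID);
-- throws java.util.NoSuchElementException before the fix
ALTER TABLE TEST.ACTIVITY ADD SOURCE VARCHAR(25) NULL PRIMARY KEY;
{code}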



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4720) SequenceIT is flapping

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4720:
--
Fix Version/s: 5.0.0
   4.14.0

> SequenceIT is flapping
> --
>
> Key: PHOENIX-4720
> URL: https://issues.apache.org/jira/browse/PHOENIX-4720
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4720.patch
>
>
> SequenceIT.testSequenceDefault() flaps if the drop/create of the same 
> sequence occurs at the same millisecond. A simple solution is to use unique 
> names for the different sequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4720) SequenceIT is flapping

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4720:
-

Assignee: James Taylor

> SequenceIT is flapping
> --
>
> Key: PHOENIX-4720
> URL: https://issues.apache.org/jira/browse/PHOENIX-4720
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4720.patch
>
>
> SequenceIT.testSequenceDefault() flaps if the drop/create of the same 
> sequence occurs at the same millisecond. A simple solution is to use unique 
> names for the different sequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4720) SequenceIT is flapping

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4720:
--
Attachment: PHOENIX-4720.patch

> SequenceIT is flapping
> --
>
> Key: PHOENIX-4720
> URL: https://issues.apache.org/jira/browse/PHOENIX-4720
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4720.patch
>
>
> SequenceIT.testSequenceDefault() flaps if the drop/create of the same 
> sequence occurs at the same millisecond. A simple solution is to use unique 
> names for the different sequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4720) SequenceIT is flapping

2018-04-30 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4720:
-

 Summary: SequenceIT is flapping
 Key: PHOENIX-4720
 URL: https://issues.apache.org/jira/browse/PHOENIX-4720
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor


SequenceIT.testSequenceDefault() flaps if the drop/create of the same sequence 
occurs at the same millisecond. A simple solution is to use unique names for the 
different sequences.
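
A sketch of that fix in test form (assuming the usual Phoenix BaseTest helpers generateUniqueName() and getUrl(); this shows the idea, not the actual patch):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;

// Each run gets its own sequence name, so a drop/create pair landing in the
// same millisecond cannot collide with an earlier test's sequence.
String seqName = "SEQ_" + generateUniqueName(); // assumed BaseTest helper
try (Connection conn = DriverManager.getConnection(getUrl())) {
    conn.createStatement().execute("CREATE SEQUENCE " + seqName);
    // ... exercise a column defaulted to NEXT VALUE FOR the sequence ...
    conn.createStatement().execute("DROP SEQUENCE " + seqName);
}
{code}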



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459319#comment-16459319
 ] 

Pedro Boado commented on PHOENIX-4719:
--

Patch attached. [~jamestaylor] can you review?
The issue was detected in one of the CDH branches, but it could have happened 
with any other HBase version. Would you push the change to all branches?

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: PHOENIX-4719.patch

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: (was: phoenix.iml)

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: phoenix.iml

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: PHOENIX-4719.patch, dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4719:
-
Attachment: dump-rs.log

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459313#comment-16459313
 ] 

Pedro Boado commented on PHOENIX-4719:
--






RS reaches a static initialization deadlock between 
org.apache.phoenix.exception.SQLExceptionCode and 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.

SQLExceptionCode:246 uses a static member of PhoenixDatabaseMetaData, and 
PhoenixDatabaseMetaData:93 (a static field) ends up accessing a static field 
from SQLExceptionCode when building TableProperty:237.

In the process this also ends up blocking ServerUtil:73 and indirectly 
DelegateRegionCoprocessorEnvironment:50.
 [^dump-rs.log] 
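
A self-contained toy of this failure mode (not the actual Phoenix classes): two classes whose static initializers reference each other can deadlock when first loaded concurrently, because each thread holds one class's initialization lock while waiting for the other's.

{code:java}
public class StaticInitDeadlock {
    static class SqlCodes {                        // stands in for SQLExceptionCode
        static final String FROM_META = Metadata.describe();
        static String describe() { return "codes"; }
    }
    static class Metadata {                        // stands in for PhoenixDatabaseMetaData
        static final String FROM_CODES = SqlCodes.describe();
        static String describe() { return "meta"; }
    }

    public static void main(String[] args) {
        // Each thread triggers one class's <clinit>, which then needs the
        // other class; with unlucky timing both block forever, mirroring the
        // stuck region-open threads in dump-rs.log.
        new Thread(() -> System.out.println(SqlCodes.FROM_META)).start();
        new Thread(() -> System.out.println(Metadata.FROM_CODES)).start();
    }
}
{code}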

> Avoid static initialization deadlock while loading regions
> --
>
> Key: PHOENIX-4719
> URL: https://issues.apache.org/jira/browse/PHOENIX-4719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
> Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Attachments: dump-rs.log
>
>
> HBase cluster initialization appears to fail as RS is not able to serve all 
> table regions. 
> Almost all table regions are stuck in transition waiting for the first three 
> regions to be opened. After a while the process times out and RS fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4719) Avoid static initialization deadlock while loading regions

2018-04-30 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4719:


 Summary: Avoid static initialization deadlock while loading regions
 Key: PHOENIX-4719
 URL: https://issues.apache.org/jira/browse/PHOENIX-4719
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0, 5.0.0
 Environment: Detected in 4.14-cdh5.14 running in CentOS 6.7 and JDK 7
Reporter: Pedro Boado
Assignee: Pedro Boado


HBase cluster initialization appears to fail as RS is not able to serve all 
table regions. 

Almost all table regions are stuck in transition waiting for the first three 
regions to be opened. After a while the process times out and RS fails.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4674) Incorrect stats if data size is less than guidepost width

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4674.
---
Resolution: Not A Problem

> Incorrect stats if data size is less than guidepost width
> -
>
> Key: PHOENIX-4674
> URL: https://issues.apache.org/jira/browse/PHOENIX-4674
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Attachments: PHOENIX-4674.patch
>
>
> For a small table, let's say with a single region smaller than the guidepost 
> width, the stats after running UPDATE STATISTICS can be way off. This is 
> because we get an empty guidepost for the region, and in BaseResultIterators 
> we end up estimating the #rows as guidepost width / estimated row size of the 
> table. For a table having <100 rows and a guidepost width of 100 MB, if the 
> estimated row size is 100 bytes we end up estimating a million rows.
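
Concretely, with the numbers above:

{code:java}
// Worked example: an empty guidepost makes the estimate
// guidepost width / estimated row size, regardless of the actual row count.
long guidepostWidth = 100L * 1024 * 1024;               // 100 MB
long estimatedRowSize = 100;                            // bytes
long estimatedRows = guidepostWidth / estimatedRowSize; // 1,048,576, about a million rows
{code}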



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4656) Disable GC during index population

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4656.
---
Resolution: Not A Problem

Closing based on Ohad's comment here: 
https://issues.apache.org/jira/browse/PHOENIX-4484?focusedCommentId=16458377&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16458377

> Disable GC during index population
> --
>
> Key: PHOENIX-4656
> URL: https://issues.apache.org/jira/browse/PHOENIX-4656
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>Priority: Major
>
> Following the discussion in PHOENIX-3860, we need to augment the TAL with an 
> option to disable and enable the garbage collection. This is needed to 
> guarantee that all data is populated to the index during index population. Not 
> disabling the GC can result in data loss due to the GC removing entries 
> below the low water mark.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459261#comment-16459261
 ] 

Mujtaba Chohan commented on PHOENIX-4718:
-

I'll check that [~jamestaylor]

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4718:
--
Attachment: PHOENIX-4718_v2.patch

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch, 
> PHOENIX-4718_v2.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459255#comment-16459255
 ] 

James Taylor commented on PHOENIX-4718:
---

Here's the 0.98 version, [~mujtabachohan]

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4718:
--
Attachment: PHOENIX-4718-4.x-HBase-0.98.patch

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718-4.x-HBase-0.98.patch, PHOENIX-4718.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459211#comment-16459211
 ] 

James Taylor edited comment on PHOENIX-4718 at 4/30/18 10:46 PM:
-

Please review, [~tdsilva]. Instead of always tracking memory used by an 
Aggregator, we only do it if we're using an Aggregator whose trackSize() 
returns true. [~mujtabachohan] - can you try with this patch and see if perf is 
better again? Let me attach a 0.98 version of this patch, though.


was (Author: jamestaylor):
Please review, [~tdsilva]. Instead of always tracking memory used by an 
Aggregator, we only do it if we're using an Aggregator in which trackSize() 
returns true. [~mujtabachohan] - can you try with this patch and see if perf is 
better again?

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459211#comment-16459211
 ] 

James Taylor commented on PHOENIX-4718:
---

Please review, [~tdsilva]. Instead of always tracking memory used by an 
Aggregator, we only do it if we're using an Aggregator whose trackSize() 
returns true. [~mujtabachohan] - can you try with this patch and see if perf is 
better again?
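
A simplified sketch of that shape (the Aggregator here is a stand-in, not the exact Phoenix patch): only aggregators that opt in via trackSize() pay the per-row heap-accounting cost.

{code:java}
abstract class Aggregator {
    boolean trackSize() { return false; }  // overridden to return true by
                                           // DistinctValueWithCountServerAggregator
    abstract void aggregate(byte[] row);
    abstract long getSize();
}

class ConditionalTrackingSketch {
    static long run(Aggregator agg, Iterable<byte[]> rows) {
        boolean track = agg.trackSize();   // decided once, outside the hot loop
        long heapSize = 0;
        for (byte[] row : rows) {
            agg.aggregate(row);
            if (track) {
                heapSize = agg.getSize();  // skipped entirely in the common case
            }
        }
        return heapSize;
    }
}
{code}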

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4718:
--
Attachment: PHOENIX-4718.patch

> Decrease overhead of tracking aggregate heap size
> -
>
> Key: PHOENIX-4718
> URL: https://issues.apache.org/jira/browse/PHOENIX-4718
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4718.patch
>
>
> Since PHOENIX-4148, we track the heap size while aggregation is occurring. 
> This decreased performance of aggregation by ~20%. We really only need to 
> track this for the DistinctValueWithCountServerAggregator (used by DISTINCT 
> COUNT, DISTINCT, PERCENTILE functions, and STDDEV functions). By 
> conditionally tracking, we should be able to bring perf back to what it was 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459159#comment-16459159
 ] 

Hudson commented on PHOENIX-4716:
-

ABORTED: Integrated in Jenkins build PreCommit-PHOENIX-Build #1846 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1846/])
PHOENIX-4716 ParameterizedTransactionIT is failing in 0.98 branch (jtaylor: rev 
d10151e33615cacd01e634fef896e8644fced890)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/ParameterizedTransactionIT.java


> ParameterizedTransactionIT is failing in 0.98 branch
> 
>
> Key: PHOENIX-4716
> URL: https://issues.apache.org/jira/browse/PHOENIX-4716
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4716.patch
>
>
> ParameterizedTransactionIT.testNonTxToTxTable is failing after commit for 
> PHOENIX-4278: 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=514f576c1528df43654362bd1519f4a2082ab80f
> Error is:
> {code}
> [ERROR] 
> testNonTxToTxTable[TransactionIT_mutable=true,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)
>   Time elapsed: 6.829 s  <<< ERROR!
> org.apache.phoenix.schema.IllegalDataException: 
> java.net.SocketTimeoutException: callTimeout=120, callDuration=9000101: 
> row '�' on table 'T59' at 
> region=T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf., 
> hostname=jtaylor-wsl2,33800,1524870748756, seqNum=1
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: java.net.SocketTimeoutException: callTimeout=120, 
> callDuration=9000101: row '�' on table 'T59' at 
> region=T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf., 
> hostname=jtaylor-wsl2,33800,1524870748756, seqNum=1
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region 
> T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf. is not online on 
> jtaylor-wsl2,33800,1524870748756
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2860)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4528)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3246)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region 
> T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf. is not online on 
> jtaylor-wsl2,33800,1524870748756
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2860)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4528)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3246)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4705) Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()

2018-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459154#comment-16459154
 ] 

Hudson commented on PHOENIX-4705:
-

ABORTED: Integrated in Jenkins build PreCommit-PHOENIX-Build #1846 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1846/])
PHOENIX-4705 Use XMLInputFactory.newInstance() instead of (jtaylor: rev 
6c1a624f351926c6b122c722cf4f9c3418a222ae)
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/XMLConfigParser.java
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/XMLResultHandler.java


> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()
> -
>
> Key: PHOENIX-4705
> URL: https://issues.apache.org/jira/browse/PHOENIX-4705
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4705.patch
>
>
> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory() in 
> Pherf as the latter doesn't compile (at least for me).
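> 
> A minimal sketch of the kind of change this describes (the class below is 
> illustrative; the actual edits are in XMLConfigParser and XMLResultHandler):
> {code:java}
> import java.io.StringReader;
> 
> import javax.xml.stream.XMLInputFactory;
> import javax.xml.stream.XMLStreamException;
> import javax.xml.stream.XMLStreamReader;
> 
> public class StaxFactoryExample {
>     public static void main(String[] args) throws XMLStreamException {
>         // newInstance() has been part of StAX since 1.0, whereas
>         // newFactory() was added later and may not resolve against
>         // every compiler/JDK combination.
>         XMLInputFactory factory = XMLInputFactory.newInstance();
>         XMLStreamReader reader =
>                 factory.createXMLStreamReader(new StringReader("<doc/>"));
>         while (reader.hasNext()) {
>             reader.next(); // drain the event stream
>         }
>         reader.close();
>     }
> }
> {code}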



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4711) Unable to set property on table with VARBINARY as last column

2018-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459156#comment-16459156
 ] 

Hudson commented on PHOENIX-4711:
-

ABORTED: Integrated in Jenkins build PreCommit-PHOENIX-Build #1846 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1846/])
PHOENIX-4711 Unable to set property on table with VARBINARY as last (jtaylor: 
rev a18fd1e133f5a2785e0ade236d72028b3d6854da)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Unable to set property on table with VARBINARY as last column
> -
>
> Key: PHOENIX-4711
> URL: https://issues.apache.org/jira/browse/PHOENIX-4711
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4711_v1.patch
>
>
> Our check for preventing the addition of a column kicks in even when you're 
> not adding a column, but instead are trying to set a property.
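> 
> A minimal repro sketch under stated assumptions (the table name is 
> illustrative, and DISABLE_WAL stands in for any table property):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.Statement;
> 
> public class SetPropertyRepro {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn =
>                      DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement()) {
>             // VARBINARY is the last column of the table.
>             stmt.execute("CREATE TABLE T (K VARCHAR PRIMARY KEY, V VARBINARY)");
>             // No column is added here, only a table property is set, yet
>             // the column-addition check fired on this statement too
>             // before the fix.
>             stmt.execute("ALTER TABLE T SET DISABLE_WAL=true");
>         }
>     }
> }
> {code}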



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459158#comment-16459158
 ] 

Hudson commented on PHOENIX-4709:
-

ABORTED: Integrated in Jenkins build PreCommit-PHOENIX-Build #1846 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1846/])
PHOENIX-4709 Alter split policy in upgrade path for system tables (jtaylor: rev 
fc194c568ee5a4a7ba0e92b64ae24f7cf4b224e5)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java


> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy would only change for new installations. 
> For existing installations, the schema of system tables (now including their 
> HBase metadata) only changes in the upgrade path. Thus we need an ALTER TABLE 
> call in our upgrade path.
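> 
> A rough sketch of the kind of ALTER the upgrade path needs, using the HBase 
> 1.x admin API (the table and policy class are illustrative, not the exact 
> patch):
> {code:java}
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> 
> public class AlterSystemTableSplitPolicy {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn = ConnectionFactory
>                 .createConnection(HBaseConfiguration.create());
>              Admin admin = conn.getAdmin()) {
>             TableName table = TableName.valueOf("SYSTEM.CATALOG");
>             HTableDescriptor desc = admin.getTableDescriptor(table);
>             // An existing table keeps the descriptor it was created with,
>             // so the new split policy must be applied explicitly on upgrade.
>             desc.setValue(HTableDescriptor.SPLIT_POLICY,
>                     "org.apache.phoenix.schema.MetaDataSplitPolicy");
>             admin.modifyTable(table, desc);
>         }
>     }
> }
> {code}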



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4710) Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG

2018-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16459155#comment-16459155
 ] 

Hudson commented on PHOENIX-4710:
-

ABORTED: Integrated in Jenkins build PreCommit-PHOENIX-Build #1846 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1846/])
PHOENIX-4710 Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG (jtaylor: 
rev 49c02328a0627939838b51b4440abe9427244820)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java


> Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG
> ---
>
> Key: PHOENIX-4710
> URL: https://issues.apache.org/jira/browse/PHOENIX-4710
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4710.patch
>
>
> We shouldn't be setting KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG since 
> the table is immutable.
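> 
> A sketch of what this implies for the descriptor (the family name is 
> illustrative): an immutable table has no delete markers to keep and no older 
> cell versions worth retaining, so the family can stay at the HBase defaults 
> (KEEP_DELETED_CELLS=false, VERSIONS=1):
> {code:java}
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> 
> public class SystemLogDescriptorSketch {
>     public static void main(String[] args) {
>         HTableDescriptor table =
>                 new HTableDescriptor(TableName.valueOf("SYSTEM.LOG"));
>         // Deliberately no setKeepDeletedCells()/setMaxVersions() calls:
>         // the defaults are correct for write-once data.
>         HColumnDescriptor family = new HColumnDescriptor("0");
>         table.addFamily(family);
>         System.out.println(table);
>     }
> }
> {code}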



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4718) Decrease overhead of tracking aggregate heap size

2018-04-30 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4718:
-

 Summary: Decrease overhead of tracking aggregate heap size
 Key: PHOENIX-4718
 URL: https://issues.apache.org/jira/browse/PHOENIX-4718
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor


Since PHOENIX-4148, we track the heap size while aggregation is occurring. This 
decreased aggregation performance by ~20%. We really only need to track it for 
the DistinctValueWithCountServerAggregator (used by DISTINCT COUNT, DISTINCT, 
and the PERCENTILE and STDDEV functions). By tracking conditionally, we should 
be able to bring performance back to what it was before.
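
A sketch of the conditional-tracking idea (the names below are hypothetical, 
not the Phoenix internals): only aggregators whose state can grow without 
bound report a heap size, so fixed-size aggregators skip the accounting 
entirely.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch, not the Phoenix internals.
interface Aggregator {
    void aggregate(String value);

    // 0 means "nothing to track"; the framework skips memory accounting.
    default long heapSizeEstimate() {
        return 0L;
    }
}

class CountAggregator implements Aggregator {
    private long count;

    @Override
    public void aggregate(String value) {
        count++; // constant-size state: no tracking overhead
    }
}

class DistinctValueAggregator implements Aggregator {
    private final Set<String> seen = new HashSet<>();
    private long heapBytes;

    @Override
    public void aggregate(String value) {
        if (seen.add(value)) {
            heapBytes += 16 + 2L * value.length(); // rough per-entry cost
        }
    }

    @Override
    public long heapSizeEstimate() {
        return heapBytes; // unbounded state: worth tracking
    }
}
{code}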



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4717) Document when a primary column is allowed to be added

2018-04-30 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4717:
---

 Summary: Document when a primary column is allowed to be added
 Key: PHOENIX-4717
 URL: https://issues.apache.org/jira/browse/PHOENIX-4717
 Project: Phoenix
  Issue Type: Task
Reporter: Thomas D'Silva


We disallow adding a column to the PK if the last PK column is VARBINARY or if 
the last PK column is fixed width and nullable. 
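
A sketch of the two rules, with hypothetical table names:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PkExtensionRulesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Allowed: the last PK column is variable width (VARCHAR), so
            // another PK column can be appended after it.
            stmt.execute("CREATE TABLE T1 (K1 VARCHAR PRIMARY KEY, V1 VARCHAR)");
            stmt.execute("ALTER TABLE T1 ADD K2 VARCHAR NULL PRIMARY KEY");

            // Disallowed: VARBINARY must stay the last PK column. The same
            // applies when the last PK column is fixed width and nullable.
            stmt.execute("CREATE TABLE T2 (K1 VARBINARY PRIMARY KEY, V1 VARCHAR)");
            try {
                stmt.execute("ALTER TABLE T2 ADD K2 VARCHAR NULL PRIMARY KEY");
            } catch (SQLException e) {
                System.out.println("Rejected as expected: " + e.getMessage());
            }
        }
    }
}
{code}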



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458954#comment-16458954
 ] 

Ankit Singhal commented on PHOENIX-4685:


Thanks [~rajeshbabu] for the changes.
To make this more intuitive and to avoid improper usage, we may need a separate 
ConnectionFactory that manages all the connections (and is eventually called by 
the TableFactory).
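
Something along these lines, as a sketch (the class name is hypothetical):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical sketch of the suggestion: one factory owns the shared
// server-side connection, so a TableFactory can never spin up its own
// Connection (and its own meta-lookup thread pools) by accident.
public final class ServerConnectionFactory {

    private static volatile Connection connection;

    private ServerConnectionFactory() {
    }

    public static Connection getConnection(Configuration conf) throws IOException {
        if (connection == null) {
            synchronized (ServerConnectionFactory.class) {
                if (connection == null) {
                    connection = ConnectionFactory.createConnection(conf);
                }
            }
        }
        return connection;
    }
}
{code}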

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685.patch, PHOENIX-4685_jstack, 
> PHOENIX-4685_v2.patch, PHOENIX-4685_v3.patch
>
>
> Currently, writing data to an indexed table fails with an OOME because native 
> threads cannot be created, though it works fine on the 4.7.x branches. Many 
> threads are created for meta lookups and shared pools, leaving no room to 
> create new threads. This happens even with short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at 

[jira] [Updated] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-30 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4685:
-
Attachment: PHOENIX-4685_v3.patch

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685.patch, PHOENIX-4685_jstack, 
> PHOENIX-4685_v2.patch, PHOENIX-4685_v3.patch
>
>
> Currently, writing data to an indexed table fails with an OOME because native 
> threads cannot be created, though it works fine on the 4.7.x branches. Many 
> threads are created for meta lookups and shared pools, leaving no room to 
> create new threads. This happens even with short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
>  ... 25 more
> Caused by: java.lang.Exception: 

[jira] [Updated] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-30 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4685:
-
Attachment: (was: PHOENIX-4576_v2.patch)

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685.patch, PHOENIX-4685_jstack, 
> PHOENIX-4685_v2.patch
>
>
> Currently, writing data to an indexed table fails with an OOME because native 
> threads cannot be created, though it works fine on the 4.7.x branches. Many 
> threads are created for meta lookups and shared pools, leaving no room to 
> create new threads. This happens even with short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
>  ... 25 more
> Caused by: java.lang.Exception: 

[jira] [Updated] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-30 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4685:
-
Attachment: PHOENIX-4576_v2.patch

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685.patch, PHOENIX-4685_jstack, 
> PHOENIX-4685_v2.patch
>
>
> Currently, writing data to an indexed table fails with an OOME because native 
> threads cannot be created, though it works fine on the 4.7.x branches. Many 
> threads are created for meta lookups and shared pools, leaving no room to 
> create new threads. This happens even with short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
>  ... 25 more
> Caused by: java.lang.Exception: java.lang.OutOfMemoryError: 

[jira] [Commented] (PHOENIX-4715) PartialIndexRebuilderIT tests fail after switching master to HBase 1.4

2018-04-30 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458653#comment-16458653
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4715:
--

Will check, [~jamestaylor].

> PartialIndexRebuilderIT tests fail after switching master to HBase 1.4
> --
>
> Key: PHOENIX-4715
> URL: https://issues.apache.org/jira/browse/PHOENIX-4715
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Priority: Major
>
> I think the 3 test failures in PartialIndexRebuilderIT started happening 
> after we switched master to HBase 1.4 as part of PHOENIX-4076.
> Perhaps [~lhofhansl] or [~apurtell] have some insight.
> {code:java}
> [ERROR] Failures: 
> [ERROR] PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild:230 Expected 
> equality for V1, but null!=11 
> [ERROR] PartialIndexRebuilderIT.testDeleteAndUpsertAfterFailure:347 Expected 
> equality for V2, but null!=1 
> [ERROR] PartialIndexRebuilderIT.testWriteWhileRebuilding:396 Expected 
> equality for V2, but null!=2 
> {code}
> testDeleteAndUpsertAfterFailure and testWriteWhileRebuilding pass for me 
> locally just before PHOENIX-4076 was committed. 
> testConcurrentUpsertsWithRebuild fails with the following exception at the 
> commit before PHOENIX-4076 .
> {code:java}
> 2018-04-27 16:14:48,049 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(1089): 
> IOException during rebuilding: 
> org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out waiting for 
> lock for row: 80 00 00 01 80 00 00 00
>   at 
> org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:421)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:370)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1003)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3190)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2976)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2918)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:1074)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:369)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> 2018-04-27 16:14:48,051 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
> org.apache.hadoop.hbase.ipc.CallRunner(126): 
> RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069: callId: 1941 
> service: ClientService methodName: Scan size: 40 connection: 127.0.0.1:14017
> org.apache.hadoop.hbase.UnknownScannerException: Throwing 
> UnknownScannerException to reset the client scanner state for clients older 
> than 1.3.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2893)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at 

[jira] [Commented] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2018-04-30 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458377#comment-16458377
 ] 

Ohad Shacham commented on PHOENIX-4484:
---

[~jamestaylor], I think I was wrong in this case and disabling the GC is not 
required. A general transaction might miss data if the low watermark exceeds 
the transaction's timestamp during its run. This is caused by the GC, which 
removes all versions of a key below the low watermark except the last one. 
During index population, the transaction has the fence id and writes the data 
using auto commit (the version and commit timestamp are the same), so it does 
not need to commit.

It is true that this transaction might miss data if the low watermark exceeds 
the fence id. However, if it misses data for a key K, that means another record 
of K exists with a version higher than the fence and lower than the low 
watermark. Because every entry written after the fence is automatically added 
to the index (via the incremental mechanism), that entry of K will be added to 
the index as well. So although we miss data, every transaction that might be 
interested in it started below the low watermark and will be aborted on commit, 
so we don't really care.

To sum up: because, at the fence, we enable the mechanism that updates the 
index with every mutation to the data table, there is no need to disable the GC.

 

> Write directly to HBase when creating an index for transactional table
> --
>
> Key: PHOENIX-4484
> URL: https://issues.apache.org/jira/browse/PHOENIX-4484
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>Priority: Major
>
> Today, when creating an index for a non-empty data table, the writes are 
> performed through the transaction API, which both consumes client-side memory 
> (to store the write set) and performs conflict analysis upon commit. This is 
> redundant and can be replaced by a direct write to HBase. For this reason, a 
> new function should be added to the transaction abstraction layer that writes 
> directly to HBase in the Tephra case and also adds shadow cells with the 
> fence id in the Omid case.
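> 
> A hypothetical shape for such a function (none of these names are the actual 
> Phoenix API):
> {code:java}
> import java.io.IOException;
> import java.util.List;
> 
> import org.apache.hadoop.hbase.client.Mutation;
> import org.apache.hadoop.hbase.client.Table;
> 
> // Hypothetical sketch only; names invented for illustration.
> public interface DirectIndexWriter {
>     /**
>      * Writes index rows for an initial index build, bypassing the
>      * client-side write set and commit-time conflict analysis: a plain
>      * HBase write in the Tephra case, a write that also adds shadow
>      * cells carrying the fence id in the Omid case.
>      */
>     void writeDirect(Table indexTable, List<Mutation> mutations, long fenceId)
>             throws IOException;
> }
> {code}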



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)