[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 breaks client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5104:
---
Summary: PHOENIX-3547 breaks client backwards compatability  (was: 
PHOENIX-3547 break client backwards compatability)

> PHOENIX-3547 breaks client backwards compatability
> --
>
> Key: PHOENIX-5104
> URL: https://issues.apache.org/jira/browse/PHOENIX-5104
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Blocker
>
> Scenario:
> * New 4.15 client
> ** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
> ** {{create local index l1 on ns1.test(v1);}}
> * Old 4.14.x client
> ** {{explain select count\(*) from test t1 where t1.v1 < 0.01;}}
> Result:
> {code}
> 0: jdbc:phoenix:localhost> explain select count(*) from ns1.test t1 where t1.v1 < 0.01;
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 2 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 2
> at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
> at org.apache.phoenix.schema.types.PLong$LongCodec.decodeLong(PLong.java:256)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:115)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:31)
> at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:994)
> at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1035)
> at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1031)
> at org.apache.phoenix.iterate.ExplainTable.appendPKColumnValue(ExplainTable.java:207)
> at org.apache.phoenix.iterate.ExplainTable.appendScanRow(ExplainTable.java:282)
> at org.apache.phoenix.iterate.ExplainTable.appendKeyRanges(ExplainTable.java:297)
> at org.apache.phoenix.iterate.ExplainTable.explain(ExplainTable.java:127)
> at org.apache.phoenix.iterate.BaseResultIterators.explain(BaseResultIterators.java:1544)
> at org.apache.phoenix.iterate.ConcatResultIterator.explain(ConcatResultIterator.java:92)
> at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.explain(BaseGroupedAggregatingResultIterator.java:103)
> at org.apache.phoenix.execute.BaseQueryPlan.getPlanSteps(BaseQueryPlan.java:524)
> at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:372)
> at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
> at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
> at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
> at org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:516)
> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:603)
> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:575)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
> ...
> {code}
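An editorial aside on the trace above: the old client fails a defensive width check in the decode path (PDataType.checkForSufficientLength via PLong$LongCodec.decodeLong), so it surfaces ERROR 201 rather than reading past the buffer. The sketch below is hypothetical code, not Phoenix source; the class name and the sign-bit flip are assumptions. It only illustrates why handing a 2-byte value to a codec that requires a fixed 8-byte big-endian long fails fast with this exact wording.

```java
// Hypothetical sketch, NOT Phoenix source: a fixed-width long codec with the
// kind of length guard seen in the stack trace above.
public class LengthCheckSketch {
    static long decodeLong(byte[] b, int offset) {
        if (b.length - offset < 8) {
            // Mirrors the message in the report: the codec needs 8 bytes.
            throw new IllegalArgumentException(
                "Expected length of at least 8 bytes, but had " + (b.length - offset));
        }
        long v = 0;
        for (int i = 0; i < 8; i++) {
            v = (v << 8) | (b[offset + i] & 0xffL);
        }
        // Assumption for illustration: the sign bit is flipped on serialization
        // so encoded longs sort correctly as unsigned bytes; flip it back here.
        return v ^ Long.MIN_VALUE;
    }

    public static void main(String[] args) {
        byte[] twoBytes = {0x00, 0x01}; // a 2-byte value where 8 are required
        try {
            decodeLong(twoBytes, 0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Running main reproduces the shape of the symptom ("Expected length of at least 8 bytes, but had 2"); the report's point is that the two client versions disagree about the width of a row-key column, not that the guard itself is wrong.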



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 breaks client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5104:
---
Fix Version/s: 4.15.0

> PHOENIX-3547 breaks client backwards compatability
> --
>
> Key: PHOENIX-5104
> URL: https://issues.apache.org/jira/browse/PHOENIX-5104
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.15.0
>
>





[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 break client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5104:
---
Description: 
Scenario:
* New 4.15 client
** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
** {{create local index l1 on ns1.test(v1);}}
* Old 4.14.x client
** {{explain select count\(*) from test t1 where t1.v1 < 0.01;}}


  was:
Scenario:
* New 4.15 client
** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
** {{create local index l1 on ns1.test(v1);}}
* Old 4.14.x client
** {{explain select count(*) from test t1 where t1.v1 < 0.01;}}


[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 break client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5104:
---
Affects Version/s: 4.15.0

> PHOENIX-3547 break client backwards compatability
> -
>
> Key: PHOENIX-5104
> URL: https://issues.apache.org/jira/browse/PHOENIX-5104
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Blocker
>





[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 break client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5104:
---
Description: 
Scenario:
* New 4.15 client
** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
** {{create local index l1 on ns1.test(v1);}}
* Old 4.14.x client
** {{explain select count(*) from test t1 where t1.v1 < 0.01;}}


  was:
Scenario:
* New 4.15 client
** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
** create local index l1 on ns1.test(v1);
* Old 4.14.x client
** explain select count(*) from test t1 where t1.v1 < 0.01;


[jira] [Created] (PHOENIX-5104) PHOENIX-3547 break client backwards compatability

2019-01-18 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-5104:
--

 Summary: PHOENIX-3547 break client backwards compatability
 Key: PHOENIX-5104
 URL: https://issues.apache.org/jira/browse/PHOENIX-5104
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


Scenario:
* New 4.15 client
** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3));}}
** create local index l1 on ns1.test(v1);
* Old 4.14.x client
** explain select count(*) from test t1 where t1.v1 < 0.01;






[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4830:

Fix Version/s: 5.1.0
               4.15.0

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch, 
> PHOENIX-4830-4.x-HBase-1.3.007.patch, PHOENIX-4830-4.x-HBase-1.3.008.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> There may be an error in the ScanUtil.setupReverseScan code.
> Then:
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> execute sql
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> throw exception
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must come after previous hint (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after previous hint (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
> at 
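For context on the exception quoted above: an HBase filter's seek hint must move the scan strictly forward, and a next hint that is not greater than the previous one is rejected so the region scanner cannot loop on the same key. A minimal sketch of that invariant follows (hypothetical code, not Phoenix's SkipScanFilter or HBase source; all names are invented).

```java
// Hypothetical sketch, NOT Phoenix/HBase source: the forward-progress
// invariant behind "The next hint must come after previous hint".
public class HintOrderSketch {
    // Unsigned lexicographic comparison, the way HBase orders row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Reject any hint that does not move strictly past the previous one.
    static void checkNextHint(byte[] prev, byte[] next) {
        if (prev != null && compare(next, prev) <= 0) {
            throw new IllegalStateException("The next hint must come after previous hint");
        }
    }

    public static void main(String[] args) {
        byte[] prev = {(byte) 0x80, 0, 0, 0, 0, 0, 0, 7};
        byte[] next = prev.clone(); // identical hint: no forward progress
        try {
            checkNextHint(prev, next);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the report, prev and next are the same key (\x80...\x07), which is exactly the case this invariant forbids.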

[jira] [Updated] (PHOENIX-5103) Can't create/drop table using 4.14 client against 4.15 server

2019-01-18 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5103:
--
Priority: Blocker  (was: Major)

> Can't create/drop table using 4.14 client against 4.15 server
> -
>
> Key: PHOENIX-5103
> URL: https://issues.apache.org/jira/browse/PHOENIX-5103
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Vincent Poon
>Priority: Blocker
>
> The server is running 4.15 (commit e3280f).
> Connecting with a 4.14.1 client, create table gives this:
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.TableNotFoundException): org.apache.hadoop.hbase.TableNotFoundException: Table 'SYSTEM:CHILD_LINK' was not found, got: SYSTEM:CATALOG.
>   at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1362)
>   at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1230)
>   at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)





[jira] [Created] (PHOENIX-5103) Can't create/drop table using 4.14 client against 4.15 server

2019-01-18 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5103:
-

 Summary: Can't create/drop table using 4.14 client against 4.15 
server
 Key: PHOENIX-5103
 URL: https://issues.apache.org/jira/browse/PHOENIX-5103
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Vincent Poon


The server is running 4.15 (commit e3280f).
Connecting with a 4.14.1 client, create table gives this:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.TableNotFoundException): org.apache.hadoop.hbase.TableNotFoundException: Table 'SYSTEM:CHILD_LINK' was not found, got: SYSTEM:CATALOG.
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1362)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1230)
at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)





[jira] [Updated] (PHOENIX-4494) Fix PhoenixTracingEndToEndIT

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4494:

Fix Version/s: (was: 5.1.0)

> Fix PhoenixTracingEndToEndIT
> 
>
> Key: PHOENIX-4494
> URL: https://issues.apache.org/jira/browse/PHOENIX-4494
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Attachments: PHEONXI-4494.001.patch
>
>
> {code}
> [ERROR] Tests run: 8, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 
> 148.175 s <<< FAILURE! - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
> [ERROR] 
> testScanTracingOnServer(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)  
> Time elapsed: 64.484 s  <<< FAILURE!
> java.lang.AssertionError: Didn't get expected updates to trace table
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testScanTracingOnServer(PhoenixTracingEndToEndIT.java:304)
> [ERROR] 
> testClientServerIndexingTracing(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)
>   Time elapsed: 22.346 s  <<< FAILURE!
> java.lang.AssertionError: Never found indexing updates
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testClientServerIndexingTracing(PhoenixTracingEndToEndIT.java:193)
> {code}





[jira] [Updated] (PHOENIX-4482) Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4482:

Fix Version/s: (was: 4.15.0)

> Fix WALReplayWithIndexWritesAndCompressedWALIT failing with ClassCastException
> --
>
> Key: PHOENIX-4482
> URL: https://issues.apache.org/jira/browse/PHOENIX-4482
> Project: Phoenix
>  Issue Type: Test
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Attachments: PHOENIX-4482.patch
>
>
> {noformat}
> ERROR] 
> testReplayEditsWrittenViaHRegion(org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT)
>   Time elapsed: 82.455 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL cannot be cast to 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.createWAL(WALReplayWithIndexWritesAndCompressedWALIT.java:274)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT.testReplayEditsWrittenViaHRegion(WALReplayWithIndexWritesAndCompressedWALIT.java:192)
> {noformat}





[jira] [Updated] (PHOENIX-1983) Document how to turn trace on/off and set sampling rate through SQL query

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1983:

Fix Version/s: (was: 4.15.0)

> Document how to turn trace on/off and set sampling rate through SQL query
> -
>
> Key: PHOENIX-1983
> URL: https://issues.apache.org/jira/browse/PHOENIX-1983
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>






[jira] [Updated] (PHOENIX-4794) PhoenixStorageHandler broken with Hive 3.1

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4794:

Fix Version/s: connectors-1.0.0
               (was: 5.1.0)

> PhoenixStorageHandler broken with Hive 3.1
> --
>
> Key: PHOENIX-4794
> URL: https://issues.apache.org/jira/browse/PHOENIX-4794
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Fix For: connectors-1.0.0
>
> Attachments: PHOENIX-4794.001.patch
>
>
> [~jcamachorodriguez] put together a nice patch on the heels of HIVE-12192 
> (date/timestamp handling in Hive) which fixes Phoenix. Without this patch, 
> we'll see both compilation and runtime failures in the PhoenixStorageHandler 
> with Hive 3.1.0-SNAPSHOT.
> Sadly, we need to wait for a Hive 3.1.0 release to get this shipped in Phoenix.





[jira] [Updated] (PHOENIX-4375) Replace deprecated or changed Scan methods with new APIs

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4375:

Fix Version/s: (was: 4.15.0)

> Replace deprecated or changed Scan methods with new APIs
> 
>
> Key: PHOENIX-4375
> URL: https://issues.apache.org/jira/browse/PHOENIX-4375
> Project: Phoenix
>  Issue Type: Task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
>






[jira] [Updated] (PHOENIX-4741) Shade disruptor dependency

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4741:

Fix Version/s: (was: 4.15.0)

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>Assignee: Ankit Singhal
>Priority: Major
>
> We should shade the disruptor dependency to avoid conflicts with the versions 
> used by other frameworks like Storm, Hive, etc.
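For reference, "shading" here means relocating the dependency's packages inside the shaded jar so they cannot collide with another framework's copy. A sketch of what that looks like with the maven-shade-plugin (the `shadedPattern` prefix is illustrative, not necessarily the prefix Phoenix would pick):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Rewrite com.lmax.disruptor.* bytecode references to a private
           package so the app's own disruptor version is untouched. -->
      <relocation>
        <pattern>com.lmax.disruptor</pattern>
        <shadedPattern>org.apache.phoenix.shaded.com.lmax.disruptor</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```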





[jira] [Resolved] (PHOENIX-2083) Pig maps splits are very uneven

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2083.
-
Resolution: Cannot Reproduce

> Pig maps splits are very uneven
> ---
>
> Key: PHOENIX-2083
> URL: https://issues.apache.org/jira/browse/PHOENIX-2083
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
>Reporter: Brian Johnson
>Priority: Major
>  Labels: verify
>
> When running a pig job on MR with the Phoenix loader, we got about 75 map 
> tasks, but there was a huge amount of skew in how the records were allocated: 
> the vast majority went to about 20 mappers, and 5 got nothing at 
> all. 
> Task
> Value
> task_1433431098673_66646_m_42 0
> task_1433431098673_66646_m_57 0
> task_1433431098673_66646_m_61 0
> task_1433431098673_66646_m_67 0
> task_1433431098673_66646_r_00 0
> task_1433431098673_66646_m_31 127242
> task_1433431098673_66646_m_26 130669
> task_1433431098673_66646_m_17 179685
> task_1433431098673_66646_m_68 190741
> task_1433431098673_66646_m_40 191062
> task_1433431098673_66646_m_56 191509
> task_1433431098673_66646_m_53 191518
> task_1433431098673_66646_m_60 191560
> task_1433431098673_66646_m_48 191579
> task_1433431098673_66646_m_41 191623
> task_1433431098673_66646_m_47 191686
> task_1433431098673_66646_m_65 191720
> task_1433431098673_66646_m_64 191726
> task_1433431098673_66646_m_54 191763
> task_1433431098673_66646_m_66 191871
> task_1433431098673_66646_m_52 191875
> task_1433431098673_66646_m_45 191908
> task_1433431098673_66646_m_49 191914
> task_1433431098673_66646_m_63 192124
> task_1433431098673_66646_m_58 192352
> task_1433431098673_66646_m_69 192352
> task_1433431098673_66646_m_44 192519
> task_1433431098673_66646_m_07 529769
> task_1433431098673_66646_m_18 584940
> task_1433431098673_66646_m_05 585864
> task_1433431098673_66646_m_03 697683
> task_1433431098673_66646_m_16 709321
> task_1433431098673_66646_m_08 710190
> task_1433431098673_66646_m_04 710774
> task_1433431098673_66646_m_11 711818
> task_1433431098673_66646_m_38 713862
> task_1433431098673_66646_m_37 714577
> task_1433431098673_66646_m_22 716796
> task_1433431098673_66646_m_14 717478
> task_1433431098673_66646_m_25 722809
> task_1433431098673_66646_m_30 723182
> task_1433431098673_66646_m_24 723378
> task_1433431098673_66646_m_13 731836
> task_1433431098673_66646_m_10 732525
> task_1433431098673_66646_m_01 734611
> task_1433431098673_66646_m_36 739874
> task_1433431098673_66646_m_72 1810925
> task_1433431098673_66646_m_39 1923212
> task_1433431098673_66646_m_59 2014210
> task_1433431098673_66646_m_55 2287499
> task_1433431098673_66646_m_74 2887750
> task_1433431098673_66646_m_73 3049942
> task_1433431098673_66646_m_29 3156535
> task_1433431098673_66646_m_71 3841375
> task_1433431098673_66646_m_27 4001882
> task_1433431098673_66646_m_51 4343619
> task_1433431098673_66646_m_34 5363718
> task_1433431098673_66646_m_50 7734798
> task_1433431098673_66646_m_20 9543930
> task_1433431098673_66646_m_70 10058382
> task_1433431098673_66646_m_46 10143291
> task_1433431098673_66646_m_62 10263757
> task_1433431098673_66646_m_32 10908072
> task_1433431098673_66646_m_15 11182800
> task_1433431098673_66646_m_00 11300385
> task_1433431098673_66646_m_43 11359327
> task_1433431098673_66646_m_21 12632598
> task_1433431098673_66646_m_09 14598258
> task_1433431098673_66646_m_28 14698359
> task_1433431098673_66646_m_33 16407474
> task_1433431098673_66646_m_12 17944269
> task_1433431098673_66646_m_23 20568188
> task_1433431098673_66646_m_35 21656353
> task_1433431098673_66646_m_02 27413291
> task_1433431098673_66646_m_06 35573698
> task_1433431098673_66646_m_19 35717128
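The skew above can be made concrete as the ratio of the busiest task's record count to the mean; a minimal sketch (the sample counts below are illustrative, not the full table):

```java
// Compute how skewed a split allocation is: max record count / mean record
// count. A value near 1.0 means even splits; the report above would give a
// ratio well over 10.
public class SplitSkew {
    static double skewRatio(long[] counts) {
        long max = 0, sum = 0;
        for (long c : counts) {
            max = Math.max(max, c);
            sum += c;
        }
        return (double) max / ((double) sum / counts.length);
    }
}
```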





[jira] [Updated] (PHOENIX-2083) Pig maps splits are very uneven

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2083:

Fix Version/s: (was: 4.15.0)

> Pig maps splits are very uneven
> ---
>
> Key: PHOENIX-2083
> URL: https://issues.apache.org/jira/browse/PHOENIX-2083
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
>Reporter: Brian Johnson
>Priority: Major
>  Labels: verify
>





[jira] [Updated] (PHOENIX-3919) Add hbase-hadoop2-compat as compile time dependency

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3919:

Fix Version/s: (was: 4.15.0)

> Add hbase-hadoop2-compat as compile time dependency
> ---
>
> Key: PHOENIX-3919
> URL: https://issues.apache.org/jira/browse/PHOENIX-3919
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-3819.patch
>
>
> HBASE-17448 added hbase-hadoop2-compat as a required dependency for clients, 
> but it is currently a test only dependency in some Phoenix modules.
> Make it an explicit compile time dependency in those modules.
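A sketch of what the change amounts to in an affected module's pom.xml (which modules are affected, and how the version is managed, are assumptions here):

```xml
<!-- Before: visible to tests only -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-hadoop2-compat</artifactId>
  <scope>test</scope>
</dependency>

<!-- After: explicit compile-time dependency (default scope) -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-hadoop2-compat</artifactId>
</dependency>
```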





[jira] [Resolved] (PHOENIX-1716) queries with to_char() return incorrect results

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1716.
-
   Resolution: Not A Problem
Fix Version/s: (was: 4.15.0)

> queries with to_char() return incorrect results
> ---
>
> Key: PHOENIX-1716
> URL: https://issues.apache.org/jira/browse/PHOENIX-1716
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jonathan Leech
>Priority: Major
>  Labels: verify
>
> Using to_char() in a nested query causes it to get confused and return the 
> value of the first column twice. 
> Example:
> {code}
> select 
>   to_char(case when foo < 0 then 18446744073709551616.0 + foo else foo end, '0') foo,
>   to_char(case when bar < 0 then 18446744073709551616.0 + bar else bar end, '0') bar
> from (
>   select cast(12345 as bigint) foo, cast(233456 as bigint) bar from system."SEQUENCE" limit 1
> );
> {code}
> Workarounds: use different but equivalent format strings in each column, or 
> make subtle changes to the beginning of the columns; e.g. change to {{0 + case 
> when ...}} or {{0.0 + case when ...}}.





[jira] [Updated] (PHOENIX-1936) Create a gold file test that ensures the order enums in ExpressionType does not change to ensure b/w compat

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1936:

Fix Version/s: (was: 4.15.0)

> Create a gold file test that ensures the order enums in ExpressionType does 
> not change to ensure b/w compat
> ---
>
> Key: PHOENIX-1936
> URL: https://issues.apache.org/jira/browse/PHOENIX-1936
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Priority: Major
>
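The idea can be sketched as a unit test against a checked-in "gold" ordering: expressions are serialized by enum ordinal, so reordering the enum silently breaks old clients. The enum and names below are stand-ins, not Phoenix's real ExpressionType:

```java
import java.util.Arrays;
import java.util.List;

public class EnumOrderGoldTest {
    // Stand-in for the real ExpressionType enum.
    enum ExpressionType { AND, OR, NOT, ADD }

    // The "gold" ordering, committed when the wire format was frozen.
    static final List<String> GOLD = Arrays.asList("AND", "OR", "NOT", "ADD");

    /** Returns true iff every golden constant still sits at its golden ordinal.
     *  New constants may be appended at the end; none may be removed or moved. */
    static boolean orderUnchanged() {
        ExpressionType[] values = ExpressionType.values();
        if (values.length < GOLD.size()) {
            return false;  // a constant was removed
        }
        for (int i = 0; i < GOLD.size(); i++) {
            if (!values[i].name().equals(GOLD.get(i))) {
                return false;  // a constant was reordered or renamed
            }
        }
        return true;
    }
}
```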






[jira] [Updated] (PHOENIX-1936) Create a gold file test that ensures the order enums in ExpressionType does not change to ensure b/w compat

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1936:

Labels: newbie  (was: )

> Create a gold file test that ensures the order enums in ExpressionType does 
> not change to ensure b/w compat
> ---
>
> Key: PHOENIX-1936
> URL: https://issues.apache.org/jira/browse/PHOENIX-1936
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Priority: Major
>  Labels: newbie
>






[jira] [Updated] (PHOENIX-2736) Fix possible data loss with local indexes when there are splits during bulkload

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2736:

Fix Version/s: (was: 4.15.0)

> Fix possible data loss with local indexes when there are splits during 
> bulkload
> ---
>
> Key: PHOENIX-2736
> URL: https://issues.apache.org/jira/browse/PHOENIX-2736
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently, when there are splits during bulkload, LoadIncrementalHFiles 
> moves the full HFile to the first daughter region instead of properly 
> splitting the HFile across the two daughter regions; we may also not properly 
> replace the region start key if there are merges during bulkload. To fix this 
> we can make HalfStoreFileReader configurable in LoadIncrementalHFiles and use 
> IndexHalfStoreFileReader for local indexes.





[jira] [Updated] (PHOENIX-3536) Remove creating unnecessary phoenix connections in MR Tasks of Hive

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3536:

Fix Version/s: (was: 4.15.0)

> Remove creating unnecessary phoenix connections in MR Tasks of Hive
> ---
>
> Key: PHOENIX-3536
> URL: https://issues.apache.org/jira/browse/PHOENIX-3536
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>Priority: Major
>  Labels: HivePhoenix
> Attachments: PHOENIX-3536.1.patch
>
>
> PhoenixStorageHandler creates Phoenix connections to build a QueryPlan in 
> both the getSplits phase (MR preparation) and the getRecordReader phase (Map) 
> while running an MR job.
> In Phoenix, it takes a long time to create the first Phoenix 
> connection (QueryServices) for a specific URL (checking and loading Phoenix 
> schema information).
> I found it is possible to avoid creating the query plan again in the Map 
> phase (getRecordReader()) by serializing the QueryPlan created in the input 
> format and passing this plan to the record reader. 
> This approach improves scan performance by removing the unnecessary 
> connection attempt in the Map phase.
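The approach above can be sketched with plain Java serialization: serialize the plan once at split-creation time into a string that can ride along in the job configuration, then rehydrate it in the record reader instead of opening a new Phoenix connection. "Plan" here is any Serializable stand-in, not Phoenix's actual QueryPlan:

```java
import java.io.*;
import java.util.Base64;

public class PlanSerde {
    /** Serialize the plan into a Base64 string suitable for a Configuration value. */
    static String serialize(Serializable plan) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(plan);
            }
            return Base64.getEncoder().encodeToString(bos.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Rehydrate the plan in the record reader, with no connection required. */
    static Object deserialize(String encoded) {
        byte[] bytes = Base64.getDecoder().decode(encoded);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```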





[jira] [Updated] (PHOENIX-3876) Do not retry index updates until INDEX_DISABLE_TIMESTAMP is cleared

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3876:

Fix Version/s: (was: 4.15.0)

> Do not retry index updates until INDEX_DISABLE_TIMESTAMP is cleared
> ---
>
> Key: PHOENIX-3876
> URL: https://issues.apache.org/jira/browse/PHOENIX-3876
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-3876-wip.patch
>
>
> Given the retry logic of HBase, if we continue to make index updates after an 
> index write failure, we'll end up essentially blocking writes to the data 
> table. Instead, we can simply stop issuing index updates until the partial 
> rebuild has completed since we know then that the index table is back online.
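The guard described above can be sketched as a simple check against the disable timestamp: skip index updates while INDEX_DISABLE_TIMESTAMP is set, and resume once the partial rebuild clears it. The class and field names are illustrative, not Phoenix internals:

```java
// Sketch: suppress index writes while a rebuild is pending, so HBase retry
// logic on a failed index write does not block writes to the data table.
public class IndexWriteGuard {
    private volatile long indexDisableTimestamp;  // 0 means the index is healthy

    void markDisabled(long timestamp) { indexDisableTimestamp = timestamp; }

    void markRebuilt() { indexDisableTimestamp = 0; }  // rebuild caught up

    /** Only issue index updates when no rebuild is pending. */
    boolean shouldWriteIndexUpdates() { return indexDisableTimestamp == 0; }
}
```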





[jira] [Updated] (PHOENIX-3344) Indicate local index usage in explain plan

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3344:

Fix Version/s: (was: 4.15.0)

> Indicate local index usage in explain plan
> --
>
> Key: PHOENIX-3344
> URL: https://issues.apache.org/jira/browse/PHOENIX-3344
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>Priority: Minor
>
> The query plan does not make it clear that a local index is being used: 
> {code}
> WAY RANGE SCAN OVER my_table [1, 'foo']
> {code}
> Instead, we should show something like this:
> {code}
> WAY RANGE SCAN OVER LOCAL INDEX my_local_index ON my_table [1, 'foo']
> {code}





[jira] [Updated] (PHOENIX-3828) Local Index - WrongRegionException when selecting column from base table and filtering on indexed column

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3828:

Fix Version/s: (was: 4.15.0)

> Local Index - WrongRegionException when selecting column from base table and 
> filtering on indexed column
> 
>
> Key: PHOENIX-3828
> URL: https://issues.apache.org/jira/browse/PHOENIX-3828
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> {noformat}
> Caused by: org.apache.hadoop.hbase.regionserver.WrongRegionException: 
> Requested row out of range for Get on HRegion 
> T,00Dxx001gES005001xx03DGQX\x7F\xFF\xFE\xB6\xE7(\x91\xDF017526052jdM  
>  ,1493854066165.f1f58ac91adc762ad3e22e7f0ae1d85e., 
> startKey='00Dxx001gES005001xx03DGQX\x7F\xFF\xFE\xB6\xE7(\x91\xDF017526052jdM
>', getEndKey()='', 
> row='\x00\x02a05001xx03DGQX\x7F\xFF\xFE\xB6\xE30\xFD\x970171318362Rz   '
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5246)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6990)
>   at 
> org.apache.phoenix.util.IndexUtil.wrapResultUsingOffset(IndexUtil.java:529)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.nextRaw(BaseScannerRegionObserver.java:500)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:283)
> {noformat}
> This occurs when a non-indexed column is part of the SELECT statement while 
> filtering on an indexed column: {{SELECT STD_COL FROM T WHERE INDEXED_COL < 
> 1}}.
> Schema
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>  PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, STD_COL 
> VARCHAR, INDEXED_COL INTEGER,
>  CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI)) 
>  VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (INDEXED_COL);
> {noformat}





[jira] [Updated] (PHOENIX-3771) Phoenix Storage Handler with Hive on Spark

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3771:

Fix Version/s: (was: 4.15.0)

> Phoenix Storage Handler with Hive on Spark
> --
>
> Key: PHOENIX-3771
> URL: https://issues.apache.org/jira/browse/PHOENIX-3771
> Project: Phoenix
>  Issue Type: Improvement
> Environment: Hadoop 2.7.3 HBase 1.1.4 Phoenix 4.10.0 Spark 1.6.3/2.1.0
>Reporter: Sudhir Babu Pothineni
>Assignee: Sergey Soldatov
>Priority: Major
>
> We are working on joining a Hive table with Phoenix, with Spark as the 
> execution engine. Right now it hits a wall with the following exception:
> org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
> java.io.IOException: spark execution engine unsupported yet.
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:352)
> at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:220)
> at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:685)
> at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
> at com.sun.proxy.$Proxy22.fetchResults(Unknown Source)
> at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:454)
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:672)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: java.io.IOException: spark execution engine 
> unsupported yet.
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:507)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:414)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1670)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:347)
> ... 24 more
> Caused by: java.io.IOException: spark execution engine unsupported yet.
> at 
> org.apache.phoenix.hive.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:128)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:362)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:445)
> ... 28 more





[jira] [Updated] (PHOENIX-2757) Phoenix Can't Coerce String to Boolean

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2757:

Fix Version/s: (was: 4.15.0)

> Phoenix Can't Coerce String to Boolean
> --
>
> Key: PHOENIX-2757
> URL: https://issues.apache.org/jira/browse/PHOENIX-2757
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Aaron Stephens
>Priority: Major
>
> In the process of trying to UPSERT rows with Phoenix via NiFi, I've run into 
> the following:
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}
> It appears that Phoenix currently does not know how to coerce a String into a 
> Boolean (see 
> [here|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PBoolean.java#L124-L137]).
>   This is a feature that's present in other drivers such as PostgreSQL.
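For illustration, a lenient coercion along the lines of the input strings PostgreSQL's boolean type accepts might look like the sketch below; none of these names exist in Phoenix, and the accepted token set is an assumption:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class LenientBooleanCoercion {
    // Tokens modeled on PostgreSQL's boolean input representations.
    private static final Set<String> TRUE_VALUES =
            new HashSet<>(Arrays.asList("true", "t", "yes", "y", "on", "1"));
    private static final Set<String> FALSE_VALUES =
            new HashSet<>(Arrays.asList("false", "f", "no", "n", "off", "0"));

    /** Coerce a string to boolean, case- and whitespace-insensitively,
     *  or throw if it matches neither set. */
    public static boolean toBoolean(String value) {
        String v = value.trim().toLowerCase();
        if (TRUE_VALUES.contains(v)) {
            return true;
        }
        if (FALSE_VALUES.contains(v)) {
            return false;
        }
        throw new IllegalArgumentException(
                "VARCHAR cannot be coerced to BOOLEAN: " + value);
    }
}
```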





[jira] [Updated] (PHOENIX-3179) Trim or remove hadoop-common dependency fat from thin-client jar

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3179:

Fix Version/s: (was: 4.15.0)

> Trim or remove hadoop-common dependency fat from thin-client jar
> 
>
> Key: PHOENIX-3179
> URL: https://issues.apache.org/jira/browse/PHOENIX-3179
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
>
> 4.8.0 brought in hadoop-common, pretty much for Configuration and 
> UserGroupInformation, to the thin-client shaded jar.
> This ends up really bloating the size of the artifact, which is annoying. We 
> should be able to exclude some of the transitive dependencies, which will 
> reduce the size.





[jira] [Reopened] (PHOENIX-1648) Extra scan being issued while doing SELECT COUNT(*) queries

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-1648:
-

> Extra scan being issued while doing SELECT COUNT(*) queries
> ---
>
> Key: PHOENIX-1648
> URL: https://issues.apache.org/jira/browse/PHOENIX-1648
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
>  Labels: verify
>
> On turning tracing on and executing SELECT COUNT(*) queries, I am seeing an 
> extra scan being executed every time. 
> CREATE TABLE MY_TABLE (ID INTEGER NOT NULL PRIMARY KEY, VALUE INTEGER) 
> SALT_BUCKETS = 16
> SELECT COUNT(*) FROM MY_TABLE
> The trace table has:
> Creating basic query for [CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER 
> MY_TABLE, SERVER FILTER BY FIRST KEY ONLY, SERVER AGGREGATE INTO 
> SINGLE ROW]
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Creating basic query for [CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER 
> SYSTEM.CATALOG [null,null,'MY_TABLE',not null],SERVER FILTER BY 
> COLUMN_FAMILY IS NULL]
> Parallel scanner for table: SYSTEM.CATALOG
> While the 16 scanners being created for MY_TABLE is expected, the extra 
> scanner for SYSTEM.CATALOG isn't. This is happening consistently, so this 
> likely isn't happening because of cache expiration. 





[jira] [Updated] (PHOENIX-3366) Local index queries fail with many guideposts if split occurs

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3366:

Fix Version/s: (was: 4.15.0)

> Local index queries fail with many guideposts if split occurs
> -
>
> Key: PHOENIX-3366
> URL: https://issues.apache.org/jira/browse/PHOENIX-3366
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> In getting guideposts to be correctly built for local indexes (PHOENIX-3361), 
> I accidentally configured many guideposts in the 
> IndexExtendedIT.testLocalIndexScanAfterRegionSplit() test and the test 
> started failing. We should confirm there's not a general issue lurking here. 





[jira] [Updated] (PHOENIX-1648) Extra scan being issued while doing SELECT COUNT(*) queries

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1648:

Fix Version/s: (was: 4.15.0)

> Extra scan being issued while doing SELECT COUNT(*) queries
> ---
>
> Key: PHOENIX-1648
> URL: https://issues.apache.org/jira/browse/PHOENIX-1648
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
>  Labels: verify
>
> On turning tracing on and executing SELECT COUNT(*) queries, I am seeing an 
> extra scan being executed every time. 
> CREATE TABLE MY_TABLE (ID INTEGER NOT NULL PRIMARY KEY, VALUE INTEGER) 
> SALT_BUCKETS = 16
> SELECT COUNT(*) FROM MY_TABLE
> The trace table has:
> Creating basic query for [CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER 
> MY_TABLE, SERVER FILTER BY FIRST KEY ONLY, SERVER AGGREGATE INTO 
> SINGLE ROW]
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Creating basic query for [CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER 
> SYSTEM.CATALOG [null,null,'MY_TABLE',not null],SERVER FILTER BY 
> COLUMN_FAMILY IS NULL]
> Parallel scanner for table: SYSTEM.CATALOG
> While the 16 scanners being created for MY_TABLE is expected, the extra 
> scanner for SYSTEM.CATALOG isn't. This is happening consistently, so this 
> likely isn't happening because of cache expiration. 





[jira] [Assigned] (PHOENIX-2544) Update phoenix-spark PhoenixRecordWritable to use phoenix-core implementation

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-2544:
---

Assignee: Thomas D'Silva  (was: Josh Mahonin)

> Update phoenix-spark PhoenixRecordWritable to use phoenix-core implementation
> -
>
> Key: PHOENIX-2544
> URL: https://issues.apache.org/jira/browse/PHOENIX-2544
> Project: Phoenix
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0
>
>
> There are a number of implementations of PhoenixRecordWritable strewn about. 
> We should consolidate them and reuse code. See discussion on PHOENIX-2492.





[jira] [Resolved] (PHOENIX-1648) Extra scan being issued while doing SELECT COUNT(*) queries

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1648.
-
Resolution: Cannot Reproduce

> Extra scan being issued while doing SELECT COUNT(*) queries
> ---
>
> Key: PHOENIX-1648
> URL: https://issues.apache.org/jira/browse/PHOENIX-1648
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
>  Labels: verify
> Fix For: 4.15.0
>
>
> On turning tracing on and executing SELECT COUNT(*) queries, I am seeing an 
> extra scan being executed every time. 
> CREATE TABLE MY_TABLE (ID INTEGER NOT NULL PRIMARY KEY, VALUE INTEGER) 
> SALT_BUCKETS = 16
> SELECT COUNT(*) FROM MY_TABLE
> The trace table has:
> Creating basic query for [CLIENT 16-CHUNK PARALLEL 16-WAY FULL SCAN OVER 
> MY_TABLE, SERVER FILTER BY FIRST KEY ONLY, SERVER AGGREGATE INTO 
> SINGLE ROW]
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Parallel scanner for table: MY_TABLE
> Creating basic query for [CLIENT 1-CHUNK PARALLEL 1-WAY RANGE SCAN OVER 
> SYSTEM.CATALOG [null,null,'MY_TABLE',not null],SERVER FILTER BY 
> COLUMN_FAMILY IS NULL]
> Parallel scanner for table: SYSTEM.CATALOG
> While the 16 scanners being created for MY_TABLE is expected, the extra 
> scanner for SYSTEM.CATALOG isn't. This is happening consistently, so this 
> likely isn't happening because of cache expiration. 





[jira] [Updated] (PHOENIX-4568) Duplicate entries in the GroupBy structure when running AggregateIT.testTrimDistinct

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4568:

Fix Version/s: (was: 4.15.0)

> Duplicate entries in the GroupBy structure when running 
> AggregateIT.testTrimDistinct
> 
>
> Key: PHOENIX-4568
> URL: https://issues.apache.org/jira/browse/PHOENIX-4568
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
>
> AggregateIT.testTrimDistinct case is introduced in the fix of  PHOENIX-4139.
> Trace-debugging the test reveals that the GroupBy class may store duplicates 
> of accessor objects in its list fields, keyExpressions and expressions.
> "Since the second trim expression is the same as the first one, the group by 
> (distinct turns into a group by) of the second one should be ignored as it 
> serves no purpose. That is what occurs when you do a select without the 
> distinct. Perhaps this logic is missing from GroupByCompiler?" 
> https://issues.apache.org/jira/browse/PHOENIX-4139?focusedCommentId=16274531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274531
> I have not yet found any test case in which this internal behavior would 
> cause an error, but it still seems worth fixing.





[jira] [Updated] (PHOENIX-1938) Like operator should throw proper exception when it is used with data type other than Varchar and Char

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1938:

Fix Version/s: (was: 4.15.0)

> Like operator should throw proper exception when it is used with data type 
> other than Varchar and Char
> --
>
> Key: PHOENIX-1938
> URL: https://issues.apache.org/jira/browse/PHOENIX-1938
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Aakash Pradeep
>Priority: Minor
>  Labels: newbie, verify
>
> Currently, when the "Like" operator is used with an Integer, it throws a 
> ClassCastException instead of saying that it is not supported.
> select * from  where 1 like 1;
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.String
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:471)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:134)
> at 
> org.apache.phoenix.parse.LikeParseNode.accept(LikeParseNode.java:62)
> at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:130)
> at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:100)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:496)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:449)
> at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:161)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:344)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:327)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:237)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:232)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:231)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1097)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> select * from JSON_PK where '1' like 1;
> Error: ERROR 203 (22005): Type mismatch. VARCHAR and INTEGER for '1' LIKE 1 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR and INTEGER for '1' LIKE 1
> at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:462)
> at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:134)
> at 
> org.apache.phoenix.parse.LikeParseNode.accept(LikeParseNode.java:62)
> at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:130)
> at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:100)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:496)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:449)
> at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:161)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:344)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:327)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:237)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:232)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:231)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1097)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from JSON_PK 
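The fix the reporter asks for amounts to validating the LIKE operand types up front so the compiler raises a typed SQLException rather than a ClassCastException. A hedged sketch with illustrative names (not the actual ExpressionCompiler API); the error text mirrors Phoenix's ERROR 203:

```java
import java.sql.SQLException;

public class LikeTypeCheck {
    // Rejects LIKE over non-string operands with a proper SQLException.
    public static void checkLikeOperands(String lhsType, String rhsType)
            throws SQLException {
        if (!isStringType(lhsType) || !isStringType(rhsType)) {
            throw new SQLException(
                "ERROR 203 (22005): Type mismatch. " + lhsType + " and "
                + rhsType + " are not comparable with LIKE", "22005", 203);
        }
    }

    private static boolean isStringType(String t) {
        return "VARCHAR".equals(t) || "CHAR".equals(t);
    }

    public static void main(String[] args) throws SQLException {
        checkLikeOperands("VARCHAR", "CHAR"); // passes silently
        try {
            checkLikeOperands("INTEGER", "INTEGER");
        } catch (SQLException e) {
            System.out.println(e.getSQLState()); // 22005
        }
    }
}
```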

[jira] [Updated] (PHOENIX-4545) Revisit preGetTable(), preGetSchema() and preAlterTable() hooks for PhoenixAccessController

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4545:

Fix Version/s: (was: 4.15.0)

> Revisit preGetTable(), preGetSchema() and preAlterTable() hooks for 
> PhoenixAccessController
> ---
>
> Key: PHOENIX-4545
> URL: https://issues.apache.org/jira/browse/PHOENIX-4545
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>
> preGetTable(): we should check only read access on the table (not the access 
> required for getTableDescriptor, which expects Admin or Create).
> preGetSchema(): read access on the namespace should be checked.
> preAlterTable(): only write access to the table should be checked.





[jira] [Updated] (PHOENIX-3928) Consider retrying once after any SQLException

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3928:

Fix Version/s: (was: 4.15.0)

> Consider retrying once after any SQLException
> -
>
> Key: PHOENIX-3928
> URL: https://issues.apache.org/jira/browse/PHOENIX-3928
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
>Priority: Major
>
> There are more cases in which a retry would successfully execute than just 
> MetaDataEntityNotFoundException. For example, certain error cases that depend 
> on the state of the metadata would work on retry if the metadata had changed. 
> We may want to retry on any SQLException and simply loop through the tables 
> involved (plan.getSourceRefs().iterator()), and if any metadata was updated, 
> go ahead and retry once.
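The retry-once idea can be sketched generically; `metadataChanged` below stands in for iterating plan.getSourceRefs() and checking whether any cached metadata was refreshed (assumed names, not Phoenix APIs):

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;
import java.util.function.BooleanSupplier;

public class RetryOnce {
    // Runs the statement; on SQLException, retries exactly once, and only
    // when refreshing the involved tables' metadata actually changed it.
    public static <T> T execute(Callable<T> stmt, BooleanSupplier metadataChanged)
            throws Exception {
        try {
            return stmt.call();
        } catch (SQLException first) {
            if (metadataChanged.getAsBoolean()) {
                return stmt.call(); // single retry, no loop
            }
            throw first;
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String result = execute(() -> {
            if (++calls[0] == 1) throw new SQLException("stale metadata");
            return "ok";
        }, () -> true);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```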





[jira] [Updated] (PHOENIX-2923) Setting autocommit has no effect in sqlline

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2923:

Fix Version/s: (was: 4.15.0)

> Setting autocommit has no effect in sqlline
> ---
>
> Key: PHOENIX-2923
> URL: https://issues.apache.org/jira/browse/PHOENIX-2923
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> Using {{!set autocommit false}} has no effect in sqlline. This is likely a 
> sqlline bug, but it needs to be verified.





[jira] [Updated] (PHOENIX-2454) Upsert with Double.NaN returns NumberFormatException

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2454:

Fix Version/s: (was: 4.15.0)

> Upsert with Double.NaN returns NumberFormatException
> 
>
> Key: PHOENIX-2454
> URL: https://issues.apache.org/jira/browse/PHOENIX-2454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: alex kamil
>Priority: Minor
>  Labels: newbie
>
> When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
> a NumberFormatException is thrown (while the expected behavior is saving null).
> test case:
> {code}
> import java.sql.*;
> public class NaNUpsertTest {
>   public static void main(String[] args) {
>     try {
>       Connection phoenixConnection =
>           DriverManager.getConnection("jdbc:phoenix:localhost");
>       String sql = "CREATE TABLE test25 (id BIGINT not null primary key, " +
>           "col1 double, col2 double)";
>       Statement stmt = phoenixConnection.createStatement();
>       stmt.executeUpdate(sql);
>       phoenixConnection.commit();
> 
>       sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
>       PreparedStatement ps = phoenixConnection.prepareStatement(sql);
>       ps.setInt(1, 12);
>       ps.setDouble(2, 2.5);
>       ps.setDouble(3, Double.NaN);
>       ps.executeUpdate();
>       phoenixConnection.commit();
>       phoenixConnection.close();
>     } catch (Exception e) {
>       e.printStackTrace();
>     }
>   }
> }
> {code}
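Until the driver accepts NaN, a caller-side workaround is to bind SQL NULL whenever the value is NaN (assuming, per the report, that NULL is the desired stored value); the helper names below are illustrative:

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class NanSafeUpsert {
    // Binds NULL when the value is NaN, the plain double otherwise.
    public static void setDoubleOrNull(PreparedStatement ps, int idx, double v)
            throws SQLException {
        if (Double.isNaN(v)) {
            ps.setNull(idx, Types.DOUBLE);
        } else {
            ps.setDouble(idx, v);
        }
    }

    // Pure mapping used by the binding above; easy to sanity-check standalone.
    public static Double sanitize(double v) {
        return Double.isNaN(v) ? null : v;
    }

    public static void main(String[] args) {
        System.out.println(sanitize(Double.NaN)); // null
        System.out.println(sanitize(2.5));        // 2.5
    }
}
```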





[jira] [Updated] (PHOENIX-2028) Improve performance of write path

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2028:

Labels: YARN-TLS perf  (was: YARN-TLS)

> Improve performance of write path
> -
>
> Key: PHOENIX-2028
> URL: https://issues.apache.org/jira/browse/PHOENIX-2028
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: YARN-TLS, perf
> Fix For: 4.15.0
>
>
> The following improvements can be made to bring the cost of UPSERT VALUES 
> more in line with direct HBase API usage:
> - don't re-compile a prepared UPSERT VALUES statement that is re-executed 
> (see patch on PHOENIX-1711).
> - change MutationState to use a List instead of a Map at the top level. It's 
> ok to have duplicate rows here, as they'll get folded together when we 
> generate the List.
> - change each mutation in the list to be a simple List. We can keep a 
> pointer to the PTable and a List of positions into the PTable columns 
> instead of maintaining a Map for each row. Again, this will get folded 
> together when we generate the List.
> - we don't need to create Mutations for 
> PhoenixRuntime.getUncommittedDataIterator() and it appears we don't need to 
> sort (though we should verify that). Instead, we'll just generate a 
> List for each row in MutationState, allowing duplicate and 
> out-of-order row keys.
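The proposed list-then-fold structure can be sketched as follows; the names are illustrative, not the actual MutationState internals. Appends stay O(1) with duplicate row keys allowed, and folding into per-row maps happens once, at flush:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MutationBuffer {
    // One buffered column write; duplicates of the same row key are allowed.
    static final class Upsert {
        final String rowKey, column; final Object value;
        Upsert(String rowKey, String column, Object value) {
            this.rowKey = rowKey; this.column = column; this.value = value;
        }
    }

    private final List<Upsert> pending = new ArrayList<Upsert>();

    public void add(String rowKey, String column, Object value) {
        pending.add(new Upsert(rowKey, column, value)); // O(1), no per-row map
    }

    // Folding happens once, at flush time: the latest write to a row/column wins.
    public Map<String, Map<String, Object>> flush() {
        Map<String, Map<String, Object>> folded =
            new LinkedHashMap<String, Map<String, Object>>();
        for (Upsert u : pending) {
            Map<String, Object> row = folded.get(u.rowKey);
            if (row == null) {
                row = new LinkedHashMap<String, Object>();
                folded.put(u.rowKey, row);
            }
            row.put(u.column, u.value);
        }
        pending.clear();
        return folded;
    }

    public static void main(String[] args) {
        MutationBuffer buf = new MutationBuffer();
        buf.add("r1", "v1", 1);
        buf.add("r1", "v1", 2); // duplicate row key, folded only at flush
        buf.add("r2", "v1", 3);
        System.out.println(buf.flush()); // {r1={v1=2}, r2={v1=3}}
    }
}
```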





[jira] [Updated] (PHOENIX-1342) Evaluate array length at regionserver coprocessor

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1342:

Fix Version/s: (was: 4.15.0)

> Evaluate array length at regionserver coprocessor
> -
>
> Key: PHOENIX-1342
> URL: https://issues.apache.org/jira/browse/PHOENIX-1342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vaclav Loffelmann
>Priority: Minor
>
> Length of an array should be evaluated on the server side to prevent network 
> traffic on big arrays.





[jira] [Updated] (PHOENIX-1386) ANY function only works with absolute value and doesn't work with other parameters

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1386:

Fix Version/s: (was: 4.15.0)

> ANY function only works with absolute value and doesn't work with other 
> parameters  
> 
>
> Key: PHOENIX-1386
> URL: https://issues.apache.org/jira/browse/PHOENIX-1386
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Mohammadreza
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: features, verify
>
> This query does not work in Phoenix:
> SELECT * FROM ac JOIN mat ON mat.mid = ANY(ac.mt);
> ac.mt is an array.





[jira] [Updated] (PHOENIX-2771) Improve the performance of IndexTool by building the index mutations at reducer side

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2771:

Fix Version/s: (was: 4.15.0)

> Improve the performance of IndexTool by building the index mutations at 
> reducer side
> 
>
> Key: PHOENIX-2771
> URL: https://issues.apache.org/jira/browse/PHOENIX-2771
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-2771-1.patch, PHOENIX-2771-2.patch
>
>
> Instead of writing the full index mutations to the map output at the mapper, 
> we can just write the combined value of the indexed column values and prepare 
> proper key values at the reducer, same as in PHOENIX-1973.
> [~sergey.soldatov] can you take this up?





[jira] [Updated] (PHOENIX-2210) UPSERT SELECT without FROM clause fails with NPE

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2210:

Fix Version/s: (was: 4.15.0)

> UPSERT SELECT without FROM clause fails with NPE
> 
>
> Key: PHOENIX-2210
> URL: https://issues.apache.org/jira/browse/PHOENIX-2210
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Priority: Major
>  Labels: verify
>
> {code}
> @Test
> public void testUpsertSelectSameRow() throws Exception {
> try (Connection conn = DriverManager.getConnection(getUrl())) {
> conn.createStatement().execute("CREATE TABLE T (PK1 VARCHAR NOT 
> NULL, PK2 VARCHAR NOT NULL, KV1 INTEGER CONSTRAINT PK PRIMARY KEY (PK1, 
> PK2))");
> conn.createStatement().executeUpdate("UPSERT INTO T VALUES 
> ('PK10', 'PK20', 10)");
> conn.createStatement().executeUpdate("UPSERT INTO T VALUES 
> ('PK11', 'PK21', 20)");
> conn.createStatement().executeUpdate("UPSERT INTO T VALUES 
> ('PK12', 'PK22', 30)");
> conn.commit();
> conn.createStatement().executeUpdate("UPSERT INTO T (PK1, PK2, 
> KV1) SELECT PK1, PK2, (KV1 + 100) WHERE PK1 = 'PK10' AND PK2 = 'PK20' ");
> conn.commit();
> ResultSet rs = conn.createStatement().executeQuery("SELECT KV1 
> FROM T WHERE PK1 = 'PK10' AND PK2 = 'PK20'");
> assertTrue(rs.next());
> assertEquals(110, rs.getInt(1));
> }
> }
> {code}
> {code}
> Exception:
> java.lang.NullPointerException
>   at org.apache.phoenix.schema.TableRef.equals(TableRef.java:115)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:392)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:550)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:1)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:318)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:310)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1421)
> {code}





[jira] [Updated] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2370:

Fix Version/s: (was: 4.15.0)

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>Priority: Major
>  Labels: newbie, verify
> Attachments: PHOENIX-2370.patch, PHOENIX-2370_v2.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like column V in the above CREATE TABLE. I would think 
> that a VARCHAR column defined with no size parameter should default to the 
> maximum possible size, not to an arbitrary number like 40.
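The behavior the reporter expects reduces to a simple rule: display size equals the declared max length, and an undeclared length defaults to the maximum (2147483647). A sketch of that rule (an assumption about the desired behavior, not current Phoenix code):

```java
public class DisplaySize {
    // Expected rule per the report: declared max length, else the type maximum.
    public static int columnDisplaySize(Integer declaredMaxLength) {
        return declaredMaxLength != null ? declaredMaxLength : Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(columnDisplaySize(10));   // 10   (e.g. VB10)
        System.out.println(columnDisplaySize(null)); // 2147483647 (e.g. VB)
    }
}
```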





[jira] [Resolved] (PHOENIX-1443) after delete from table, data cannot be upserted through bulkload by MapReduce

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1443.
-
Resolution: Cannot Reproduce

> after delete from table, data cannot be upserted through bulkload by MapReduce 
> --
>
> Key: PHOENIX-1443
> URL: https://issues.apache.org/jira/browse/PHOENIX-1443
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.0
> Environment: phoenix4.1
> hbase0.98.6
> hadoop1.2.1
>Reporter: xufeng
>Assignee: maghamravikiran
>Priority: Major
>  Labels: verify
> Fix For: 4.15.0
>
>
> 1. Create a table.
> 2. Upsert data into the table through bulkload by MapReduce.
> 3. select * from table -> OK.
> 4. delete from table; all data is deleted.
> 5. select * from table returns nothing -> OK.
> 6. Upsert data into the table through bulkload by MapReduce again.
> 7. select * from table returns nothing -> NG.
> 8. Scanning the table from the hbase shell returns nothing -> NG.
> 9. The HFile still exists in the folder of the region belonging to the table.
> I checked the time across the MR cluster machines; it is correct.
> Is this a bug in bulkload?





[jira] [Updated] (PHOENIX-2028) Improve performance of write path

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2028:

Fix Version/s: (was: 4.15.0)

> Improve performance of write path
> -
>
> Key: PHOENIX-2028
> URL: https://issues.apache.org/jira/browse/PHOENIX-2028
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: YARN-TLS, perf
>
> The following improvements can be made to bring the cost of UPSERT VALUES 
> more in line with direct HBase API usage:
> - don't re-compile a prepared UPSERT VALUES statement that is re-executed 
> (see patch on PHOENIX-1711).
> - change MutationState to use a List instead of a Map at the top level. It's 
> ok to have duplicate rows here, as they'll get folded together when we 
> generate the List.
> - change each mutation in the list to be a simple List. We can keep a 
> pointer to the PTable and a List of positions into the PTable columns 
> instead of maintaining a Map for each row. Again, this will get folded 
> together when we generate the List.
> - we don't need to create Mutations for 
> PhoenixRuntime.getUncommittedDataIterator() and it appears we don't need to 
> sort (though we should verify that). Instead, we'll just generate a 
> List for each row in MutationState, allowing duplicate and 
> out-of-order row keys.





[jira] [Updated] (PHOENIX-2501) BatchUpdateExecution typo in name, should extend java.sql.BatchUpdateException

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2501:

Fix Version/s: (was: 4.15.0)

> BatchUpdateExecution typo in name, should extend java.sql.BatchUpdateException
> --
>
> Key: PHOENIX-2501
> URL: https://issues.apache.org/jira/browse/PHOENIX-2501
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Priority: Major
>
> Noticed this when my autocomplete went crazy. I think "BatchUpdateExecution" 
> was intended to be "BatchUpdateException". Further, Java provides 
> {{java.sql.BatchUpdateException}}; it seems like we should just use that. 
> Thoughts?





[jira] [Updated] (PHOENIX-3157) Refactor DistinctPrefixFilter as filter wrapper so that it can work with non-pk column filters.

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3157:

Fix Version/s: (was: 4.15.0)

> Refactor DistinctPrefixFilter as filter wrapper so that it can work with 
> non-pk column filters.
> ---
>
> Key: PHOENIX-3157
> URL: https://issues.apache.org/jira/browse/PHOENIX-3157
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 3157-DOES_NOT_WORK.txt
>
>
> See PHOENIX-3156. The issue is pretty tricky:
> # only filterKeyValue can make the skip decision
> # we're skipping rows (not Cells)
> # the next Cell we skip to is dynamic (not known ahead of time)
> # we can only skip if the row as a whole has not been filtered
> So in order to support non-pk column filters with this optimization (i.e. 
> SELECT DISTINCT(pk1-prefix) FROM table WHERE non-pk-column = xxx) we need to 
> refashion this as a FilterWrapper and only fire the optimization when the 
> inner filter did not filter the entire row, which is in many cases hard to 
> determine. It's certainly more complex than the TransactionVisibilityFilter.
> [~giacomotaylor]
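The wrapper idea can be sketched with simplified types. This is not the real HBase Filter API (which works Cell-by-Cell through filterKeyValue); the interface and method names below are hypothetical and only illustrate the gating condition: the prefix-skip may fire only when the inner filter kept the whole row.

```java
import java.util.List;

// Simplified sketch of the FilterWrapper idea (not the real HBase Filter API):
// the prefix-skip optimization may only fire when the wrapped inner filter
// has not excluded the current row.
public class DistinctPrefixWrapper {
    interface RowFilter {
        boolean includes(List<String> rowCells);
    }

    private final RowFilter inner;

    DistinctPrefixWrapper(RowFilter inner) {
        this.inner = inner;
    }

    // Returns true when it is safe to seek past the remaining rows that
    // share the current prefix; false means we must keep scanning, because
    // a later row with the same prefix might still match the inner filter.
    boolean canSkipToNextPrefix(List<String> rowCells) {
        return inner.includes(rowCells);
    }

    public static void main(String[] args) {
        DistinctPrefixWrapper w =
            new DistinctPrefixWrapper(cells -> cells.contains("xxx"));
        System.out.println(w.canSkipToNextPrefix(List.of("xxx", "a"))); // true
        System.out.println(w.canSkipToNextPrefix(List.of("b")));        // false
    }
}
```

The hard part the issue describes is exactly this gate: in the real per-Cell API the "whole row kept" decision is not available at the moment the skip decision must be made.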





[jira] [Updated] (PHOENIX-2737) Make sure local indexes work properly after fixing region overlaps by HBCK.

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2737:

Fix Version/s: (was: 4.15.0)

> Make sure local indexes work properly after fixing region overlaps by HBCK.
> ---
>
> Key: PHOENIX-2737
> URL: https://issues.apache.org/jira/browse/PHOENIX-2737
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> When there are region overlaps, hbck fixes them by moving the HFiles of the 
> overlapping regions into a new region covering their common key range. In 
> that case we might not properly replace the region start key in the HFiles, 
> and since there is no parent/child region relation in hbase:meta we cannot 
> identify the start key in the HFiles. To fix this we need to add a separator 
> after the region start key so that we can easily identify the start key in an 
> HFile without always touching hbase:meta. Then, when we create scanners for 
> the store files, we can compare the region start key recorded in the HFile 
> with the current region start key and, if it has changed, simply replace the 
> old start key with the current one. During compaction we can permanently 
> replace the start key with the actual key values.
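The key rewrite described above can be sketched in plain byte-array terms. This is illustrative only, not the actual Phoenix local-index encoding; the separator byte and helper names are hypothetical, and it assumes the start key itself never contains the separator byte:

```java
import java.util.Arrays;

// Sketch of the proposed fix (illustrative only): a local-index row key embeds
// the region start key followed by a separator byte, so a scanner can detect a
// stale start key after an hbck repair and substitute the current one without
// consulting hbase:meta.
public class LocalIndexKeyFixer {
    // Hypothetical separator; assumes it never occurs inside a start key.
    static final byte SEP = 0x00;

    static byte[] encode(byte[] regionStartKey, byte[] indexedPart) {
        byte[] out = new byte[regionStartKey.length + 1 + indexedPart.length];
        System.arraycopy(regionStartKey, 0, out, 0, regionStartKey.length);
        out[regionStartKey.length] = SEP;
        System.arraycopy(indexedPart, 0, out, regionStartKey.length + 1,
                indexedPart.length);
        return out;
    }

    // Replace a stale embedded start key with the current region start key.
    static byte[] rewrite(byte[] storedKey, byte[] currentStartKey) {
        int sep = 0;
        while (storedKey[sep] != SEP) sep++;            // locate the separator
        byte[] stored = Arrays.copyOfRange(storedKey, 0, sep);
        if (Arrays.equals(stored, currentStartKey)) {
            return storedKey;                           // start key unchanged
        }
        byte[] rest = Arrays.copyOfRange(storedKey, sep + 1, storedKey.length);
        return encode(currentStartKey, rest);
    }

    public static void main(String[] args) {
        byte[] stored = encode(new byte[] {1}, new byte[] {9});
        byte[] fixed = rewrite(stored, new byte[] {2});
        System.out.println(Arrays.toString(fixed)); // prints [2, 0, 9]
    }
}
```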





[jira] [Updated] (PHOENIX-4664) Time Python driver tests failing

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4664:

Fix Version/s: (was: 4.15.0)

> Time Python driver tests failing
> 
>
> Key: PHOENIX-4664
> URL: https://issues.apache.org/jira/browse/PHOENIX-4664
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
>
> {noformat}
> test_time (phoenixdb.tests.test_types.TypesTest) ... FAIL
> test_timestamp (phoenixdb.tests.test_types.TypesTest) ... FAIL
> {noformat}
> These two tests seem to be failing. Ankit thought it might be related to 
> timezones.





[jira] [Updated] (PHOENIX-4111) Add additional logging during Hash Cache preparation to monitor the status

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4111:

Fix Version/s: (was: 4.15.0)

> Add additional logging during Hash Cache preparation to monitor the status
> --
>
> Key: PHOENIX-4111
> URL: https://issues.apache.org/jira/browse/PHOENIX-4111
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>
> Currently it's very difficult to know how much server cache should be set 
> aside for Hash cache preparation, especially when compression is enabled and 
> the table size is much smaller than the uncompressed data. We can add logging 
> of how much of the cache has been filled and how many rows have been scanned, 
> so that the required server cache size can be estimated easily.
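The kind of progress logging suggested could look like the sketch below. The class, interval, and message format are hypothetical, not the actual Phoenix server-cache code; the idea is simply to report rows scanned and bytes buffered at a fixed row interval:

```java
// Sketch of the suggested progress logging (names and interval are
// illustrative, not actual Phoenix code): report rows scanned and bytes
// buffered every N rows while the hash cache is being built.
public class HashCacheProgressLogger {
    private static final int LOG_INTERVAL_ROWS = 100_000; // hypothetical

    private long rowsScanned;
    private long bytesFilled;

    void onRow(int rowSizeBytes) {
        rowsScanned++;
        bytesFilled += rowSizeBytes;
        if (rowsScanned % LOG_INTERVAL_ROWS == 0) {
            // In Phoenix this would go through the usual SLF4J logger.
            System.out.println("Hash cache: scanned " + rowsScanned
                    + " rows, filled " + bytesFilled + " bytes");
        }
    }

    long rowsScanned() { return rowsScanned; }
    long bytesFilled() { return bytesFilled; }

    public static void main(String[] args) {
        HashCacheProgressLogger log = new HashCacheProgressLogger();
        for (int i = 0; i < 250_000; i++) {
            log.onRow(100); // simulate 250k rows of ~100 bytes each
        }
    }
}
```

With the compressed-table case the issue describes, the "bytes filled" counter grows much faster than the on-disk table size would suggest, which is exactly the signal operators need.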





[jira] [Updated] (PHOENIX-2340) Index creation on multi tenant table causes exception if tenant ID column referenced

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2340:

Fix Version/s: (was: 4.15.0)

> Index creation on multi tenant table causes exception if tenant ID column 
> referenced
> 
>
> Key: PHOENIX-2340
> URL: https://issues.apache.org/jira/browse/PHOENIX-2340
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> If an index is created on a multi-tenant table, an error occurs when the 
> tenant ID column is referenced in the indexed columns, because that column is 
> already automatically included. However, it should not be an error for the 
> user to reference it (as long as it's the first indexed column).
> To repro:
> {code}
> CREATE TABLE IF NOT EXISTS T (
> ORGANIZATION_ID CHAR(15) NOT NULL,
> NETWORK_ID CHAR(15) NOT NULL,
> SUBJECT_ID CHAR(15) NOT NULL,
> RUN_ID CHAR(15) NOT NULL,
> SCORE DOUBLE,
> TOPIC_ID CHAR(15) NOT NULL
> CONSTRAINT PK PRIMARY KEY (
> ORGANIZATION_ID,
> NETWORK_ID,
> SUBJECT_ID,
> RUN_ID,
> TOPIC_ID
> )
> ) MULTI_TENANT=TRUE;
> CREATE INDEX IDX ON T (
> ORGANIZATION_ID,
> NETWORK_ID,
> TOPIC_ID,
> RUN_ID,
> SCORE
> ) INCLUDE (
> SUBJECT_ID
> );
> {code}





[jira] [Updated] (PHOENIX-1347) Unit tests fail if default locale is not en_US, at SortOrderExpressionTest.toChar

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-1347:

Fix Version/s: (was: 4.15.0)

> Unit tests fail if default locale is not en_US, at 
> SortOrderExpressionTest.toChar
> -
>
> Key: PHOENIX-1347
> URL: https://issues.apache.org/jira/browse/PHOENIX-1347
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sang-Jin, Park
>Priority: Major
> Attachments: PHOENIX-1347-v2.patch, PHOENIX-1347.patch
>
>
> Failed tests: 
>   
> SortOrderExpressionTest.toChar:148->evaluateAndAssertResult:308->evaluateAndAssertResult:318
>  expected:<12/11/01 12:00 [AM]> but was:<12/11/01 12:00 [오전]>
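The failure is a classic locale-sensitive formatting issue: the AM/PM marker comes from the default locale's symbols, so the expected "AM" only appears under an English locale (Korean uses 오전/오후). A standalone snippet illustrating the mechanism (the class name is just for this demo):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

// Demonstrates why "12/11/01 12:00 AM" is only produced under an English
// locale: the AM/PM marker is taken from the formatting locale's symbols.
public class LocaleAmPmDemo {
    static String format(Date d, Locale locale) {
        return new SimpleDateFormat("yy/MM/dd hh:mm a", locale).format(d);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0); // wall-clock result depends on default TZ
        System.out.println(format(epoch, Locale.US));     // marker: AM or PM
        System.out.println(format(epoch, Locale.KOREAN)); // marker: 오전 or 오후
    }
}
```

The usual fix for such tests is to pin the locale explicitly (e.g. pass Locale.US to the formatter, or set the JVM default locale in test setup) rather than depend on the environment.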





[jira] [Updated] (PHOENIX-2742) Add new batchmutate APIs in HRegion without mvcc and region level locks

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2742:

Fix Version/s: (was: 4.15.0)

> Add new batchmutate APIs in HRegion without mvcc and region level locks
> ---
>
> Key: PHOENIX-2742
> URL: https://issues.apache.org/jira/browse/PHOENIX-2742
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Enis Soztutar
>Priority: Major
>
> Currently we cannot write mutations to the same table in (pre/post)BatchMutate 
> hooks because of mvcc. It would be better to add a new API to Region which 
> allows writing to the table without locks and also without the memstore size 
> check. We need to see how sequence ids are affected when the API is used in 
> coprocessor hooks.
> Just raising this here to track it.





[jira] [Updated] (PHOENIX-2006) py scripts support for printing its command

2019-01-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2006:

Fix Version/s: (was: 4.15.0)

> py scripts support for printing its command
> ---
>
> Key: PHOENIX-2006
> URL: https://issues.apache.org/jira/browse/PHOENIX-2006
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: PHOENIX-2006.00.patch
>
>
> {{zkServer.sh}} accepts the command {{print-cmd}}, for printing out the java 
> command it would launch. This is pretty handy! Let's reproduce it in 
> {{queryserver.py}}.





[jira] [Updated] (PHOENIX-4009) Run UPDATE STATISTICS command by using MR integration on snapshots

2019-01-18 Thread Karan Mehta (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Mehta updated PHOENIX-4009:
-
Attachment: PHOENIX-4009.master.001.patch

> Run UPDATE STATISTICS command by using MR integration on snapshots
> --
>
> Key: PHOENIX-4009
> URL: https://issues.apache.org/jira/browse/PHOENIX-4009
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4009.4.x-HBase-1.4.001.patch, 
> PHOENIX-4009.4.x-HBase-1.4.002.patch, PHOENIX-4009.master.001.patch
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> Now that we have the capability to run queries against table snapshots 
> through our map reduce integration, we can utilize this capability for stats 
> collection too. This would make our stats collection more resilient, resource 
> aware and less resource intensive. The bulk of the plumbing is already in 
> place. We would need to make sure that the integration doesn't barf when the 
> query is an UPDATE STATISTICS command.





[jira] [Assigned] (PHOENIX-4936) Empty resultset returned when hbase.rpc.timeout hit

2019-01-18 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan reassigned PHOENIX-4936:
--

Assignee: Xinyi Yan

> Empty resultset returned when hbase.rpc.timeout hit
> ---
>
> Key: PHOENIX-4936
> URL: https://issues.apache.org/jira/browse/PHOENIX-4936
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Xinyi Yan
>Priority: Blocker
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0
>
>
> Seeing this on a large syscat table (~11gb).
> From sqlline, issue a SELECT statement which does a full table scan. 
> hbase.rpc.timeout gets hit, and instead of getting an exception, an empty 
> resultset is silently returned.





[jira] [Updated] (PHOENIX-5069) Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-01-18 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-5069:
-
Attachment: PHOENIX-5069.master.002.patch

> Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache
> ---
>
> Key: PHOENIX-5069
> URL: https://issues.apache.org/jira/browse/PHOENIX-5069
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Attachments: PHOENIX-5069.master.001.patch, 
> PHOENIX-5069.master.002.patch, PHOENIX-5069.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> The current Phoenix Stats Cache uses a TTL based eviction policy. A cached 
> entry expires after a given amount of time (900s by default) has passed since 
> the entry was created, which leads to a cache miss the next time the 
> Compiler/Optimizer fetches stats from the cache. As you can see from the 
> above graph, fetching stats from the cache is a blocking operation: when 
> there is a cache miss, it takes a round trip over the wire to scan the 
> SYSTEM.STATS table, get the latest stats, rebuild the cache, and finally 
> return the stats to the Compiler/Optimizer. Whenever there is a cache miss, 
> this blocking call causes a significant performance penalty and produces 
> periodic spikes.
> *This Jira suggests to use asynchronous refresh mechanism to provide a 
> non-blocking cache. For details, please see the linked design document below.*
> [~karanmehta93] [~twdsi...@gmail.com] [~dbwong] [~elserj] [~an...@apache.org] 
> [~sergey soldatov] 
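The asynchronous-refresh idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual patch (which per the description has its own design document); all names here are hypothetical. A stale entry is served immediately while a background task refreshes it, so callers never block on the wire after the first load:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executor;
import java.util.function.Function;

// Minimal sketch of an asynchronously refreshed cache (illustrative only):
// reads return immediately; an expired entry is served stale while a
// background task reloads it.
public class AsyncRefreshCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;     // e.g. a SYSTEM.STATS scan
    private final Executor refresher;        // background refresh thread(s)
    private final long ttlMillis;

    public AsyncRefreshCache(Function<K, V> loader, Executor refresher, long ttlMillis) {
        this.loader = loader;
        this.refresher = refresher;
        this.ttlMillis = ttlMillis;
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            // First access has no choice but to load synchronously.
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, System.currentTimeMillis()));
            return v;
        }
        if (System.currentTimeMillis() - e.loadedAt > ttlMillis) {
            // Serve the stale value now; refresh off the caller's thread.
            refresher.execute(() ->
                map.put(key, new Entry<>(loader.apply(key), System.currentTimeMillis())));
        }
        return e.value;
    }

    public static void main(String[] args) {
        int[] loads = {0};
        // ttl of -1 forces every hit to be "stale"; Runnable::run refreshes
        // inline so the demo is deterministic.
        AsyncRefreshCache<String, Integer> cache =
            new AsyncRefreshCache<>(k -> loads[0]++, Runnable::run, -1);
        System.out.println(cache.get("stats")); // prints 0 (initial load)
        System.out.println(cache.get("stats")); // prints 0 (stale served, refreshed)
        System.out.println(cache.get("stats")); // prints 1 (refreshed value)
    }
}
```

Guava's LoadingCache offers this behavior off the shelf via refreshAfterWrite, which is the kind of mechanism a real implementation would likely build on.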


