[jira] [Assigned] (PHOENIX-5003) Fix ViewIT.testCreateViewMappedToExistingHbaseTableWithNamespaceMappingEnabled()

2019-12-10 Thread Thomas D'Silva (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5003:
---

Assignee: (was: Thomas D'Silva)

> Fix 
> ViewIT.testCreateViewMappedToExistingHbaseTableWithNamespaceMappingEnabled()
> 
>
> Key: PHOENIX-5003
> URL: https://issues.apache.org/jira/browse/PHOENIX-5003
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Priority: Major
>
> FYI @Daniel Wong, this test is failing consistently on the 1.3 branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5108) Normalize column names while generating SELECT statement in the spark connector

2019-12-10 Thread Thomas D'Silva (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5108:
---

Assignee: (was: Thomas D'Silva)

> Normalize column names while generating SELECT statement in the spark 
> connector
> ---
>
> Key: PHOENIX-5108
> URL: https://issues.apache.org/jira/browse/PHOENIX-5108
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: connectors-1.0.0
>
> Attachments: PHOENIX-5108-4.x-HBase-1.3-v3.patch, 
> PHOENIX-5108-HBase-1.3.patch, PHOENIX-5108-v2-HBase-1.3.patch
>
>
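
For context: Phoenix uppercases unquoted identifiers and preserves case only for double-quoted ones, so a connector-generated SELECT must quote column names to round-trip case-sensitive columns. A minimal illustrative sketch of that kind of normalization (the helper class below is hypothetical, not the connector's actual code):

{code:java}
// Hypothetical sketch: quote column names so Phoenix preserves their case,
// mirroring Phoenix's rule that unquoted identifiers are uppercased.
public final class ColumnNameNormalizer {

    // Wrap a name in double quotes, doubling any embedded quotes.
    static String quote(String column) {
        return "\"" + column.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        String[] columns = {"ID", "mixedCaseCol"};
        StringBuilder select = new StringBuilder("SELECT ");
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) select.append(", ");
            select.append(quote(columns[i]));
        }
        select.append(" FROM MY_TABLE");
        System.out.println(select); // SELECT "ID", "mixedCaseCol" FROM MY_TABLE
    }
}
{code}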




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-2544) Update phoenix-spark PhoenixRecordWritable to use phoenix-core implementation

2019-12-10 Thread Thomas D'Silva (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-2544:
---

Assignee: (was: Thomas D'Silva)

> Update phoenix-spark PhoenixRecordWritable to use phoenix-core implementation
> -
>
> Key: PHOENIX-2544
> URL: https://issues.apache.org/jira/browse/PHOENIX-2544
> Project: Phoenix
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Priority: Major
>
> There are a number of implementations of PhoenixRecordWritable strewn about. We 
> should consolidate them and reuse code. See discussion on PHOENIX-2492.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4905) Remove SpoolingResultIteratorFactory and ChunkedResultIteratorFactory if the renew lease is always used.

2019-10-09 Thread Thomas D'Silva (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4905:

Description: Since we only actively support HBase 1.2 and higher, which 
supports renewing scanner leases, we don't need these factories anymore if we 
always use the renew lease feature. Figure out if we should turn this on by 
default.  (was: Since we only actively support HBase 1.2 and higher, which 
supports renewing scanner leases, we don't need these factories anymore.)
Summary: Remove SpoolingResultIteratorFactory and 
ChunkedResultIteratorFactory if the renew lease is always used.  (was: Remove 
SpoolingResultIteratorFactory and ChunkedResultIteratorFactory)

> Remove SpoolingResultIteratorFactory and ChunkedResultIteratorFactory if the 
> renew lease is always used.
> 
>
> Key: PHOENIX-4905
> URL: https://issues.apache.org/jira/browse/PHOENIX-4905
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Major
>  Labels: newbie, phoenix-hardening
>
> Since we only actively support HBase 1.2 and higher, which supports renewing 
> scanner leases, we don't need these factories anymore if we always use the 
> renew lease feature. Figure out if we should turn this on by default.
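
A minimal sketch of what "always use the renew lease feature" would mean from the client side, assuming a client property key of phoenix.scanner.lease.renew.enabled (verify the exact key against QueryServices in your release):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RenewLeaseExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed property key; check QueryServices for your Phoenix version.
        props.setProperty("phoenix.scanner.lease.renew.enabled", "true");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // With scanner-lease renewal on, long-running scans no longer need the
            // spooling/chunked result iterators to survive lease expiry.
        }
    }
}
{code}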



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4905) Remove SpoolingResultIteratorFactory and ChunkedResultIteratorFactory if the renew lease is always used.

2019-10-09 Thread Thomas D'Silva (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4905:

Labels:   (was: newbie phoenix-hardening)

> Remove SpoolingResultIteratorFactory and ChunkedResultIteratorFactory if the 
> renew lease is always used.
> 
>
> Key: PHOENIX-4905
> URL: https://issues.apache.org/jira/browse/PHOENIX-4905
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Major
>
> Since we only actively support HBase 1.2 and higher, which supports renewing 
> scanner leases, we don't need these factories anymore if we always use the 
> renew lease feature. Figure out if we should turn this on by default.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-31 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: PHOENIX-5403-v5.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5403-v2.patch, PHOENIX-5403-v3.patch, 
> PHOENIX-5403-v4.patch, PHOENIX-5403-v5.patch, PHOENIX-5403.patch, diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.
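
A minimal sketch of the scenario being optimized, assuming the standard TenantId connection property and an illustrative table declared with UPDATE_CACHE_FREQUENCY:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantLookupExample {
    public static void main(String[] args) throws Exception {
        // A global connection creates a multi-tenant table whose metadata
        // should only be re-fetched from the server every 15 minutes.
        try (Connection global = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            global.createStatement().execute(
                "CREATE TABLE GLOBAL_TABLE (TENANT_ID VARCHAR NOT NULL, ID VARCHAR NOT NULL, V1 VARCHAR "
                + "CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)) MULTI_TENANT=true, UPDATE_CACHE_FREQUENCY=900000");
        }
        // Tenant-specific connection: per this issue, resolving GLOBAL_TABLE here
        // makes a getTable RPC on every query instead of honoring the cache frequency.
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        try (Connection tenantConn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            tenantConn.createStatement().executeQuery("SELECT ID, V1 FROM GLOBAL_TABLE").close();
        }
    }
}
{code}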



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 breaks client backwards compatibility

2019-07-29 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5104:

Fix Version/s: 5.1.0

> PHOENIX-3547 breaks client backwards compatibility
> --
>
> Key: PHOENIX-5104
> URL: https://issues.apache.org/jira/browse/PHOENIX-5104
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Assignee: Mehdi Salarkia
>Priority: Blocker
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5104.4.x-HBase-1.3.v1.patch, PHOENIX-5104.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario:
> * New 4.15 client
> ** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 
> integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY 
> (pk1, pk2, pk3));}}
> ** {{create local index l1 on ns1.test(v1);}}
> * Old 4.14.x client
> ** {{explain select count(*) from test t1 where t1.v1 < 0.01;}}
> Result:
> {code}
> 0: jdbc:phoenix:localhost> explain select count(*) from ns1.test t1 where 
> t1.v1 < 0.01;
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
> at 
> org.apache.phoenix.schema.types.PLong$LongCodec.decodeLong(PLong.java:256)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:115)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:31)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:994)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1035)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1031)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendPKColumnValue(ExplainTable.java:207)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendScanRow(ExplainTable.java:282)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendKeyRanges(ExplainTable.java:297)
> at 
> org.apache.phoenix.iterate.ExplainTable.explain(ExplainTable.java:127)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.explain(BaseResultIterators.java:1544)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.explain(ConcatResultIterator.java:92)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.explain(BaseGroupedAggregatingResultIterator.java:103)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.getPlanSteps(BaseQueryPlan.java:524)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:372)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:516)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:603)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:575)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
> ...
> {code}
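
The failure mode is a type-width mismatch: the old client decodes a value with the 8-byte PLong codec that was written in a narrower encoding. A minimal sketch reproducing just the error mechanics with Phoenix's type codecs (illustrative only, not the fix):

{code:java}
import org.apache.phoenix.schema.types.PLong;
import org.apache.phoenix.schema.types.PSmallint;

public class ByteWidthMismatch {
    public static void main(String[] args) {
        // A 2-byte SMALLINT encoding...
        byte[] twoBytes = PSmallint.INSTANCE.toBytes((short) 1);
        // ...decoded with the 8-byte BIGINT codec fails the length check:
        // ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, but had 2
        Object value = PLong.INSTANCE.toObject(twoBytes);
        System.out.println(value);
    }
}
{code}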



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-27 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: PHOENIX-5403-v4.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5403-v2.patch, PHOENIX-5403-v3.patch, 
> PHOENIX-5403-v4.patch, PHOENIX-5403.patch, diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-26 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: PHOENIX-5403-v3.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5403-v2.patch, PHOENIX-5403-v3.patch, 
> PHOENIX-5403.patch, diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (PHOENIX-5416) Fix Array2IT testArrayRefToLiteral

2019-07-26 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5416:
---

 Summary: Fix Array2IT testArrayRefToLiteral
 Key: PHOENIX-5416
 URL: https://issues.apache.org/jira/browse/PHOENIX-5416
 Project: Phoenix
  Issue Type: Test
Reporter: Thomas D'Silva


{{Array2IT.testArrayRefToLiteral}} fails with an NPE

{code}
java.lang.NullPointerException
 at 
org.apache.phoenix.schema.types.PArrayDataTypeDecoder.positionAtArrayElement(PArrayDataTypeDecoder.java:124)
 at 
org.apache.phoenix.schema.types.PArrayDataTypeDecoder.positionAtArrayElement(PArrayDataTypeDecoder.java:45)
 at 
org.apache.phoenix.expression.function.ArrayIndexFunction.evaluate(ArrayIndexFunction.java:64)
 at 
org.apache.phoenix.util.ExpressionUtil.getConstantExpression(ExpressionUtil.java:72)
 at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:348)
 at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:717)
 at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:1)
 at org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:87)
 at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:425)
 at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:637)
 at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:574)
 at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:203)
 at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:157)
 at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:497)
 at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:1)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:291)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:284)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:193)
 at org.apache.phoenix.end2end.Array2IT.testArrayRefToLiteral(Array2IT.java:682)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:89)
 at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:41)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:541)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:763)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:463)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:209)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-24 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: PHOENIX-5403-v2.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5403-v2.patch, PHOENIX-5403.patch, diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-22 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: PHOENIX-5403.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5403.patch, diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2019-07-19 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5404:
---

 Summary: Move check to client side to see if there are any child 
views that need to be dropped while recreating a table/view
 Key: PHOENIX-5404
 URL: https://issues.apache.org/jira/browse/PHOENIX-5404
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Thomas D'Silva


Remove the {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, tableName);}} 
call in MetaDataEndpointImpl.createTable.

While creating a table or view we need to ensure that there are no child views 
that haven't been cleaned up by the DropChildView task yet. Move this check to 
the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
row exists).
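
A minimal sketch of that client-side existence check using the raw HBase 1.x API; the row-key prefix construction is hypothetical and would need to match the actual SYSTEM.CHILD_LINK key layout:

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ChildViewCheck {
    // Returns true if at least one parent->child linking row exists for the given key prefix.
    static boolean hasChildViews(Connection hbase, byte[] keyPrefix) throws Exception {
        try (Table childLink = hbase.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"))) {
            Scan scan = new Scan();
            scan.setStartRow(keyPrefix);
            scan.setStopRow(Bytes.add(keyPrefix, new byte[] { (byte) 0xFF }));
            scan.setFilter(new PageFilter(1)); // we only need to know whether one row exists
            try (ResultScanner scanner = childLink.getScanner(scan)) {
                Result first = scanner.next();
                return first != null && !first.isEmpty();
            }
        }
    }
}
{code}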



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-19 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5403:

Attachment: diff.patch

> Optimize metadata cache lookup of global tables using a tenant specific 
> connection
> --
>
> Key: PHOENIX-5403
> URL: https://issues.apache.org/jira/browse/PHOENIX-5403
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: diff.patch
>
>
> If we use a tenant-specific connection to look up a global table, we always 
> make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
> table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-19 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch, PHOENIX-4893-4.x-HBase-1.3-v3.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v4.patch, PHOENIX-4893-4.x-HBase-1.3-v5.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v6.patch, PHOENIX-4893.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (PHOENIX-5403) Optimize metadata cache lookup of global tables using a tenant specific connection

2019-07-19 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5403:
---

 Summary: Optimize metadata cache lookup of global tables using a 
tenant specific connection
 Key: PHOENIX-5403
 URL: https://issues.apache.org/jira/browse/PHOENIX-5403
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.2
Reporter: Thomas D'Silva
 Fix For: 4.15.0, 5.1.0


If we use a tenant-specific connection to look up a global table, we always 
make an RPC to the server even if UPDATE_CACHE_FREQUENCY is set on the 
table.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (PHOENIX-5366) Perform splittable SYSCAT actions on the client only

2019-07-18 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-5366.
-
Resolution: Duplicate

> Perform splittable SYSCAT actions on the client only
> 
>
> Key: PHOENIX-5366
> URL: https://issues.apache.org/jira/browse/PHOENIX-5366
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
>
> With Splittable Syscat we reintroduced new server-to-server communication. We 
> should do all of this from the client. [~tdsilva].
> Also, sometimes when I run tests they hang. In that case the logs are filled 
> with exceptions like these:
> {code:java}
> 2019-06-23 19:21:05,293 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45747] 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(344): Possibly 
> transient ZooKeeper, quorum=localhost:54696, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/meta-region-server
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1212)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:434)
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:672)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:487)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:168)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:607)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
> at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1268)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1229)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:357)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:231)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:272)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:433)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:307)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1343)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1232)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:1063)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:980)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$200(AsyncProcess.java:667)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:649)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:612)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:922)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:939)
> at org.apache.hadoop.hbase.client.HTableWrapper.batch(HTableWrapper.java:255)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.processRemoteRegionMutations(MetaDataEndpointImpl.java:2757)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2381)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8481)
> {code}
>  



--
This 

[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-15 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v6.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch, PHOENIX-4893-4.x-HBase-1.3-v3.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v4.patch, PHOENIX-4893-4.x-HBase-1.3-v5.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-15 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v5.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch, PHOENIX-4893-4.x-HBase-1.3-v3.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v4.patch, PHOENIX-4893-4.x-HBase-1.3-v5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-4861) While adding a view column make a single RPC to update the encoded column qualifier counter and remove the table from the cache of the physical table

2019-07-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4861:

Description: 
For tables that use column encoding, when we add a column to a view we need to 
update the encoded column qualifier counter on the base table. Currently we do 
this in two RPCs:

{code}
// there should only be remote mutations if we are creating a view that uses
// encoded column qualifiers (the remote mutations are to update the encoded
// column qualifier counter on the parent table)
if (parentTable != null && tableType == PTableType.VIEW && parentTable
        .getEncodingScheme() != QualifierEncodingScheme.NON_ENCODED_QUALIFIERS) {
    response =
            processRemoteRegionMutations(
                PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
                remoteMutations,
                MetaDataProtos.MutationCode.UNABLE_TO_UPDATE_PARENT_TABLE);
    clearParentTableFromCache(clientTimeStamp,
        parentTable.getSchemaName() != null
                ? parentTable.getSchemaName().getBytes()
                : ByteUtil.EMPTY_BYTE_ARRAY,
        parentTable.getName().getBytes());
    if (response != null) {
        done.run(response);
        return;
    }
}
{code}

Move this code to MetaDataClient.


  was:
For tables that use column encoding, when we add a column to a view we need to 
update the encoded column qualifier counter on the base table. Currently we do 
this in two RPCs:

{code}
// there should only be remote mutations if we are creating a view that uses
// encoded column qualifiers (the remote mutations are to update the encoded
// column qualifier counter on the parent table)
if (parentTable != null && tableType == PTableType.VIEW && parentTable
        .getEncodingScheme() != QualifierEncodingScheme.NON_ENCODED_QUALIFIERS) {
    response =
            processRemoteRegionMutations(
                PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
                remoteMutations,
                MetaDataProtos.MutationCode.UNABLE_TO_UPDATE_PARENT_TABLE);
    clearParentTableFromCache(clientTimeStamp,
        parentTable.getSchemaName() != null
                ? parentTable.getSchemaName().getBytes()
                : ByteUtil.EMPTY_BYTE_ARRAY,
        parentTable.getName().getBytes());
    if (response != null) {
        done.run(response);
        return;
    }
}
{code}




> While adding a view column make a single RPC to update the encoded column 
> qualifier counter and remove the table from the cache of the physical table 
> --
>
> Key: PHOENIX-4861
> URL: https://issues.apache.org/jira/browse/PHOENIX-4861
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Major
>
> For tables that use column encoding, when we add a column to a view we need to 
> update the encoded column qualifier counter on the base table. Currently we 
> do this in two RPCs:
> {code}
> // there should only be remote mutations if we are 
> creating a view that uses
> // encoded column qualifiers (the remote mutations are to 
> update the encoded
> // column qualifier counter on the parent table)
> if (parentTable != null && tableType == PTableType.VIEW 
> && parentTable
> .getEncodingScheme() != 
> QualifierEncodingScheme.NON_ENCODED_QUALIFIERS) {
> response =
> processRemoteRegionMutations(
> 
> PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
> remoteMutations, 
> MetaDataProtos.MutationCode.UNABLE_TO_UPDATE_PARENT_TABLE);
> clearParentTableFromCache(clientTimeStamp,
> parentTable.getSchemaName() != null
> ? parentTable.getSchemaName().getBytes()
> : 

[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v4.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch, PHOENIX-4893-4.x-HBase-1.3-v3.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-13 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v3.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch, PHOENIX-4893-4.x-HBase-1.3-v3.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3534:

Attachment: (was: PHOENIX-4893-4.x-HBase-1.3-v2.patch)

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-3534-v2.patch, PHOENIX-3534-v3.patch, 
> PHOENIX-3534.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is a single region, 
> based on the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for their change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.
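
A minimal sketch of the optimistic concurrency loop described above, with hypothetical helper names (read the sequence number, take the row lock, re-check, increment, retry on conflict):

{code:java}
public class OptimisticMetadataUpdate {

    interface MetadataRow {
        long sequenceNumber();
        // Under the row lock: re-check the sequence number, apply the staged
        // mutation, and increment the sequence atomically. Returns false on conflict.
        boolean applyAndIncrementIfUnchanged(long expectedSeq, Runnable change);
    }

    static long updateWithRetry(MetadataRow row, Runnable change, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            long seqBefore = row.sequenceNumber(); // check before updating
            if (row.applyAndIncrementIfUnchanged(seqBefore, change)) {
                return seqBefore + 1; // new sequence number returned to the client
            }
            // Another writer bumped the sequence first; re-read and retry.
        }
        throw new IllegalStateException("too many concurrent metadata updates");
    }
}
{code}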



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v2.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch, 
> PHOENIX-4893-4.x-HBase-1.3-v2.patch
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3534:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v2.patch

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-3534-v2.patch, PHOENIX-3534-v3.patch, 
> PHOENIX-3534.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table is a single region, 
> based on the server-side row locks being held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for their change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (PHOENIX-5295) Local Index data not replicating for older HBase versions (<= HBase 1.2)

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5295:
---

Assignee: Hieu Nguyen  (was: Hieu Nguyen)

> Local Index data not replicating for older HBase versions (<= HBase 1.2)
> 
>
> Key: PHOENIX-5295
> URL: https://issues.apache.org/jira/browse/PHOENIX-5295
> Project: Phoenix
>  Issue Type: Bug
> Environment: Branch 4.14-cdh5.11
>Reporter: Hieu Nguyen
>Assignee: Hieu Nguyen
>Priority: Major
> Attachments: PHOENIX-5295.4.14-cdh5.11.v1.patch, 
> PHOENIX-5295.4.14-cdh5.11.v1.patch, PHOENIX-5295.4.14-cdh5.11.v2.patch, 
> PHOENIX-5295.4.14-cdh5.11.v3.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Copied from email thread 
> https://lists.apache.org/thread.html/7ab1d9489eca2ab2b974948fbe60b143fda432ef7dfc603528d460f2@%3Cuser.phoenix.apache.org%3E.
> ---
> We are on Phoenix 4.14-cdh5.11.  We are experiencing an issue where local 
> index data is not being replicated through HBase replication.  As suggested 
> in a previous email thread 
> (https://lists.apache.org/thread.html/984fba3c8abd944846deefb3ea285195e0436b9181b9779feac39b59@%3Cuser.phoenix.apache.org%3E),
>  we have enabled replication for the local indexes (the "L#0" column family 
> on the same table).  We wrote an integration test to demonstrate this issue 
> on top of 4.14-cdh5.11 branch 
> (https://github.com/hnguyen08/phoenix/commit/3589cb45d941c6909fb3deb5f5abb0f8dfa78dd7).
> After some investigation and debugging, we discovered the following:
> 1. Commit a2f4d7eebec621b58204a9eb78d552f18dcbcf24 (PHOENIX-3827) fixed the 
> issue, but only in Phoenix for HBase1.3+.  It uses the 
> miniBatchOp.addOperationsFromCP() API introduced in HBase1.3.  Unfortunately, 
> for the time being, we are stuck on cdh5.11 (based on HBase1.2).
> 2. IndexUtil.writeLocalUpdates() is called in both implementations of 
> IndexCommitter, both taking skipWAL=true.  It seems like we'd actually want 
> to not skip WAL to ensure that local-index updates are replicated correctly 
> (since, as mentioned in the above email thread, "HBase-level replication of 
> the data table will not trigger index updates").  After changing the skipWAL 
> flag to false, the above integration test passes.
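
For context on why the skipWAL flag matters: HBase replication ships WAL entries, so a mutation written with Durability.SKIP_WAL can never be replicated. A minimal sketch of the distinction with the stock HBase client API:

{code:java}
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WalDurabilityExample {
    public static void main(String[] args) {
        Put replicated = new Put(Bytes.toBytes("row1"));
        replicated.addColumn(Bytes.toBytes("L#0"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        replicated.setDurability(Durability.USE_DEFAULT); // goes through the WAL, so it can replicate

        Put notReplicated = new Put(Bytes.toBytes("row2"));
        notReplicated.addColumn(Bytes.toBytes("L#0"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        notReplicated.setDurability(Durability.SKIP_WAL); // never enters the WAL; replication never sees it
    }
}
{code}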



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (PHOENIX-5295) Local Index data not replicating for older HBase versions (<= HBase 1.2)

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5295:
---

Assignee: Hieu Nguyen

> Local Index data not replicating for older HBase versions (<= HBase 1.2)
> 
>
> Key: PHOENIX-5295
> URL: https://issues.apache.org/jira/browse/PHOENIX-5295
> Project: Phoenix
>  Issue Type: Bug
> Environment: Branch 4.14-cdh5.11
>Reporter: Hieu Nguyen
>Assignee: Hieu Nguyen
>Priority: Major
> Attachments: PHOENIX-5295.4.14-cdh5.11.v1.patch, 
> PHOENIX-5295.4.14-cdh5.11.v1.patch, PHOENIX-5295.4.14-cdh5.11.v2.patch, 
> PHOENIX-5295.4.14-cdh5.11.v3.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Copied from email thread 
> https://lists.apache.org/thread.html/7ab1d9489eca2ab2b974948fbe60b143fda432ef7dfc603528d460f2@%3Cuser.phoenix.apache.org%3E.
> ---
> We are on Phoenix 4.14-cdh5.11.  We are experiencing an issue where local 
> index data is not being replicated through HBase replication.  As suggested 
> in a previous email thread 
> (https://lists.apache.org/thread.html/984fba3c8abd944846deefb3ea285195e0436b9181b9779feac39b59@%3Cuser.phoenix.apache.org%3E),
>  we have enabled replication for the local indexes (the "L#0" column family 
> on the same table).  We wrote an integration test to demonstrate this issue 
> on top of 4.14-cdh5.11 branch 
> (https://github.com/hnguyen08/phoenix/commit/3589cb45d941c6909fb3deb5f5abb0f8dfa78dd7).
> After some investigation and debugging, we discovered the following:
> 1. Commit a2f4d7eebec621b58204a9eb78d552f18dcbcf24 (PHOENIX-3827) fixed the 
> issue, but only in Phoenix for HBase1.3+.  It uses the 
> miniBatchOp.addOperationsFromCP() API introduced in HBase1.3.  Unfortunately, 
> for the time being, we are stuck on cdh5.11 (based on HBase1.2).
> 2. IndexUtil.writeLocalUpdates() is called in both implementations of 
> IndexCommitter, both taking skipWAL=true.  It seems like we'd actually want 
> to not skip WAL to ensure that local-index updates are replicated correctly 
> (since, as mentioned in the above email thread, "HBase-level replication of 
> the data table will not trigger index updates").  After changing the skipWAL 
> flag to false, the above integration test passes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (PHOENIX-5136) Rows with null values inserted by UPSERT .. ON DUPLICATE KEY UPDATE are included in query results when they shouldn't be

2019-07-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5136:
---

Assignee: Miles Spielberg

> Rows with null values inserted by UPSERT .. ON DUPLICATE KEY UPDATE are 
> included in query results when they shouldn't be
> 
>
> Key: PHOENIX-5136
> URL: https://issues.apache.org/jira/browse/PHOENIX-5136
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Hieu Nguyen
>Assignee: Miles Spielberg
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Rows with null values inserted using UPSERT .. ON DUPLICATE KEY UPDATE will 
> be selected in queries when they should not be.
> Here is a failing test that demonstrates the issue:
> {noformat}
> @Test
> public void testRowsCreatedViaUpsertOnDuplicateKeyShouldNotBeReturnedInQueryIfNotMatched() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     String tableName = generateUniqueName();
>     String ddl = " create table " + tableName + "(pk varchar primary key, counter1 bigint, counter2 smallint)";
>     conn.createStatement().execute(ddl);
>     createIndex(conn, tableName);
>     // The data has to be specifically starting with null for the first counter
>     // to fail the test. If you reverse the values, the test passes.
>     String dml1 = "UPSERT INTO " + tableName + " VALUES('a',NULL,2) ON DUPLICATE KEY UPDATE " +
>             "counter1 = CASE WHEN (counter1 IS NULL) THEN NULL ELSE counter1 END, " +
>             "counter2 = CASE WHEN (counter1 IS NULL) THEN 2 ELSE counter2 END";
>     conn.createStatement().execute(dml1);
>     conn.commit();
>     String dml2 = "UPSERT INTO " + tableName + " VALUES('b',1,2) ON DUPLICATE KEY UPDATE " +
>             "counter1 = CASE WHEN (counter1 IS NULL) THEN 1 ELSE counter1 END, " +
>             "counter2 = CASE WHEN (counter1 IS NULL) THEN 2 ELSE counter2 END";
>     conn.createStatement().execute(dml2);
>     conn.commit();
>     // Using this statement causes the test to pass
>     //ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName + " WHERE counter2 = 2 AND counter1 = 1");
>     // This statement should be equivalent to the one above, but it selects both rows.
>     ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName + " WHERE counter2 = 2 AND (counter1 = 1 OR counter1 = 1)");
>     assertTrue(rs.next());
>     assertEquals("b", rs.getString(1));
>     assertEquals(1, rs.getLong(2));
>     assertEquals(2, rs.getLong(3));
>     assertFalse(rs.next());
>     conn.close();
> }{noformat}
> The conditions are fairly specific:
>  * Must use ON DUPLICATE KEY UPDATE.  Inserting rows using UPSERT by itself 
> will have correct results
>  * The "counter2 = 2 AND (counter1 = 1 OR counter1 = 1)" condition caused the 
> test to fail, as opposed to the equivalent but simpler "counter2 = 2 AND 
> counter1 = 1".  I tested a similar "counter2 = 2 AND (counter1 = 1 OR 
> counter1 < 1)", which also caused the test to fail.
>  * If the NULL value for row 'a' is instead in the last position (counter2), 
> then row 'a' is not selected in the query as expected.  The below test 
> demonstrates this behavior (it passes as expected):
> {noformat}
> @Test
> public void testRowsCreatedViaUpsertOnDuplicateKeyShouldNotBeReturnedInQueryIfNotMatched() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     String tableName = generateUniqueName();
>     String ddl = " create table " + tableName + "(pk varchar primary key, counter1 bigint, counter2 smallint)";
>     conn.createStatement().execute(ddl);
>     createIndex(conn, tableName);
>     String dml1 = "UPSERT INTO " + tableName + " VALUES('a',1,NULL) ON DUPLICATE KEY UPDATE " +
>             "counter1 = CASE WHEN (counter1 IS NULL) THEN 1 ELSE counter1 END, " +
>             "counter2 = CASE WHEN (counter1 IS NULL) THEN NULL ELSE counter2 END";
>     conn.createStatement().execute(dml1);
>     conn.commit();
>     String dml2 = "UPSERT INTO " + tableName + " VALUES('b',1,2) ON DUPLICATE KEY UPDATE " +
>             "counter1 = CASE WHEN (counter1 IS NULL) THEN 1 ELSE counter1 END, " +
>             "counter2 = CASE WHEN (counter1 IS NULL) THEN 2 ELSE counter2 END";
>     conn.createStatement().execute(dml2);
>     conn.commit();
>     ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName + " WHERE counter1 = 

[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-07-11 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Attachment: PHOENIX-4893-4.x-HBase-1.3-v1.patch

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4893-4.x-HBase-1.3-v1.patch
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (PHOENIX-5383) Metrics for the IndexRegionObserver coprocessor

2019-07-05 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5383:
---

Assignee: Swaroopa Kadam

> Metrics for the IndexRegionObserver coprocessor
> --
>
> Key: PHOENIX-5383
> URL: https://issues.apache.org/jira/browse/PHOENIX-5383
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Priyank Porwal
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
>
> Need to track index write failures in phase 1 and phase 3 after the index 
> re-design done as part of PHOENIX-5156 and PHOENIX-5211.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-06-27 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4893:
---

Assignee: Thomas D'Silva

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4810) Send parent->child link mutations to SYSTEM.CHILD_LINK table in MetaDataClient.createTableInternal

2019-06-26 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-4810:
-

I think we will have to make an RPC to an endpoint coprocessor on the region 
hosting SYSTEM.CHILD_LINK instead of directly writing to SYSTEM.CHILD_LINK from 
the client.

> Send parent->child link mutations to SYSTEM.CHILD_LINK table in 
> MetaDataClient.createTableInternal 
> --
>
> Key: PHOENIX-4810
> URL: https://issues.apache.org/jira/browse/PHOENIX-4810
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Major
>
> Instead of sending the parent->child link mutations to 
> MetaDataEndpointImpl.createTable, write them directly to SYSTEM.CHILD_LINK



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5366) Perform splittable SYSCAT actions on the client only

2019-06-26 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5366:

Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-3534

> Perform splittable SYSCAT actions on the client only
> 
>
> Key: PHOENIX-5366
> URL: https://issues.apache.org/jira/browse/PHOENIX-5366
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
>
> With Splittable Syscat we reintroduced new server-to-server communication. We 
> should do all of this from the client. [~tdsilva].
> Also, sometimes when I run tests they hang. In that case the logs are filled 
> with exceptions like these:
> {code:java}
> 2019-06-23 19:21:05,293 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45747] 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(344): Possibly 
> transient ZooKeeper, quorum=localhost:54696, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/meta-region-server
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1212)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:434)
> at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:672)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:487)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:168)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:607)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
> at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
> at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1268)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1229)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:357)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:231)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:272)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:433)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:307)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1343)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1232)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:1063)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:980)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$200(AsyncProcess.java:667)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:649)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:612)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:922)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:939)
> at org.apache.hadoop.hbase.client.HTableWrapper.batch(HTableWrapper.java:255)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.processRemoteRegionMutations(MetaDataEndpointImpl.java:2757)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2381)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
> at 

[jira] [Updated] (PHOENIX-5303) Fix index failures with some versions of HBase.

2019-06-20 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5303:

Fix Version/s: 4.14.3

> Fix index failures with some versions of HBase.
> ---
>
> Key: PHOENIX-5303
> URL: https://issues.apache.org/jira/browse/PHOENIX-5303
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
> Attachments: 5303.txt
>
>
> The problem was introduced with HBASE-21158. The fix here works regardless of 
> the HBase version.
> This must have started very recently, but it's already past the history of 
> the test runs.
> Or perhaps it never worked in 4.x-HBase-1.5.
> [~apurtell], in case you have any ideas.
> {code:java}
> [INFO] Running 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.403 
> s <<< FAILURE! - in 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec
> [ERROR] 
> testGeneratedIndexUpdates(org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec)
>  Time elapsed: 0.16 s <<< FAILURE!
> java.lang.AssertionError: Had some index updates, though it should have been 
> covered by the delete
> at 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.ensureNoUpdatesWhenCoveredByDelete(TestCoveredColumnIndexCodec.java:242)
> at 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.testGeneratedIndexUpdates(TestCoveredColumnIndexCodec.java:220)
> {code}
>  
> MutableIndexIT fails as well (for non-transactional indexes)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5269) PhoenixAccessController should use AccessChecker instead of AccessControlClient for permission checks

2019-06-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5269:

Attachment: diff.patch

> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks
> -
>
> Key: PHOENIX-5269
> URL: https://issues.apache.org/jira/browse/PHOENIX-5269
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1, 4.14.2
>Reporter: Andrew Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Critical
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5269-4.14-HBase-1.4.patch, 
> PHOENIX-5269-4.14-HBase-1.4.v1.patch, PHOENIX-5269-4.14-HBase-1.4.v2.patch, 
> PHOENIX-5269.4.14-HBase-1.4.v3.patch, PHOENIX-5269.4.14-HBase-1.4.v4.patch, 
> PHOENIX-5269.4.x-HBase-1.4.v1.patch, PHOENIX-5269.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5269.master.v1.patch, diff.patch
>
>
> PhoenixAccessController should use AccessChecker instead of 
> AccessControlClient for permission checks. 
> In HBase, every RegionServer's AccessController maintains a local cache of 
> permissions. At startup time they are initialized from the ACL table. 
> Whenever the ACL table is changed (via grant or revoke) the AC on the ACL 
> table "broadcasts" the change via zookeeper, which updates the cache. This is 
> performed and managed by TableAuthManager but is exposed as API by 
> AccessChecker. AccessChecker is the result of a refactor that was committed 
> as far back as branch-1.4 I believe.
> Phoenix implements its own access controller and is using the client API 
> AccessControlClient instead. AccessControlClient does not cache nor use the 
> ZK-based cache update mechanism, because it is designed for client side use.
> The use of AccessControlClient instead of AccessChecker is not scalable. 
> Every permissions check will trigger a remote RPC to the ACL table, which is 
> generally going to be a single region hosted on a single RegionServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5335) Fix ViewIT

2019-06-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5335:

Fix Version/s: 5.1.0
   4.15.0

> Fix ViewIT
> --
>
> Key: PHOENIX-5335
> URL: https://issues.apache.org/jira/browse/PHOENIX-5335
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5335) Fix ViewIT

2019-06-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5335:
---

Assignee: Thomas D'Silva

> Fix ViewIT
> --
>
> Key: PHOENIX-5335
> URL: https://issues.apache.org/jira/browse/PHOENIX-5335
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5318) Slots passed to SkipScan filter is incorrect for desc primary keys that are prefixes of each other

2019-06-12 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5318:

Labels: DESC  (was: )

> Slots passed to SkipScan filter is incorrect for desc primary keys that are 
> prefixes of each other
> --
>
> Key: PHOENIX-5318
> URL: https://issues.apache.org/jira/browse/PHOENIX-5318
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0, 4.14.3
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>  Labels: DESC
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
> Attachments: PHOENIX-5318-4.x-HBase-1.3.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {code}
> CREATE VIEW IF NOT EXISTS CUSTOM_ENTITY."z01" (COL1 VARCHAR, COL2 VARCHAR, 
> COL3 VARCHAR, COL4 VARCHAR CONSTRAINT PK PRIMARY KEY (COL1 DESC, COL2 DESC, 
> COL3 DESC, COL4 DESC)) AS SELECT * FROM 
> CUSTOM_ENTITY.CUSTOM_ENTITY_DATA_NO_ID WHERE KEY_PREFIX = 'z01'; 
>  
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('8', 'blah', 
> 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('6', 'blah', 
> 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('23', 
> 'blah', 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('17', 
> 'blah', 'blah', 'blah');
>  
> SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
> (COL1='1' OR COL1='2' OR COL1='3' OR COL1='4' OR COL1='5' OR COL1='6' OR 
> COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'
>  
> +------+------+------+------+
> | COL1 | COL2 | COL3 | COL4 |
> +------+------+------+------+
> | 8    | blah | blah | blah |
> | 6    | blah | blah | blah |
> +------+------+------+------+
>  
> SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
> (COL1='6' OR COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'
>  
> +------+------+------+------+
> | COL1 | COL2 | COL3 | COL4 |
> +------+------+------+------+
> | 8    | blah | blah | blah |
> | 6    | blah | blah | blah |
> | 23   | blah | blah | blah |
> | 17   | blah | blah | blah |
> +------+------+------+------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-06-07 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5122:

Fix Version/s: (was: 5.0.1)

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, 
> PHOENIX-5122-4.x-HBase-1.3_addendum.patch, PHOENIX-5122-addendum-tests.zip, 
> PHOENIX-5122.patch, Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 
> 2019-03-04 at 6.21.10 PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> | OID   | CODE   |
> +-------+--------+
> +-------+--------+
> *No rows selected (0.033 seconds)*
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +-------+--------+
> | OID   | CODE   |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> | OID   | CODE   |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +-------+--------+
> | OID   | CODE   |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5318) Slots passed to SkipScan filter is incorrect for desc primary keys that are prefixes of each other

2019-06-06 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5318:

Attachment: PHOENIX-5318-4.x-HBase-1.3.patch

> Slots passed to SkipScan filter is incorrect for desc primary keys that are 
> prefixes of each other
> --
>
> Key: PHOENIX-5318
> URL: https://issues.apache.org/jira/browse/PHOENIX-5318
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0, 4.14.3
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Attachments: PHOENIX-5318-4.x-HBase-1.3.patch
>
>
> {code}
> CREATE VIEW IF NOT EXISTS CUSTOM_ENTITY."z01" (COL1 VARCHAR, COL2 VARCHAR, 
> COL3 VARCHAR, COL4 VARCHAR CONSTRAINT PK PRIMARY KEY (COL1 DESC, COL2 DESC, 
> COL3 DESC, COL4 DESC)) AS SELECT * FROM 
> CUSTOM_ENTITY.CUSTOM_ENTITY_DATA_NO_ID WHERE KEY_PREFIX = 'z01'; 
>  
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('8', 'blah', 
> 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('6', 'blah', 
> 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('23', 
> 'blah', 'blah', 'blah');
> UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('17', 
> 'blah', 'blah', 'blah');
>  
> SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
> (COL1='1' OR COL1='2' OR COL1='3' OR COL1='4' OR COL1='5' OR COL1='6' OR 
> COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'
>  
> +------+------+------+------+
> | COL1 | COL2 | COL3 | COL4 |
> +------+------+------+------+
> | 8    | blah | blah | blah |
> | 6    | blah | blah | blah |
> +------+------+------+------+
>  
> SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
> (COL1='6' OR COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'
>  
> +------+------+------+------+
> | COL1 | COL2 | COL3 | COL4 |
> +------+------+------+------+
> | 8    | blah | blah | blah |
> | 6    | blah | blah | blah |
> | 23   | blah | blah | blah |
> | 17   | blah | blah | blah |
> +------+------+------+------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5318) Slots passed to SkipScan filter is incorrect for desc primary keys that are prefixes of each other

2019-06-06 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5318:
---

 Summary: Slots passed to SkipScan filter is incorrect for desc 
primary keys that are prefixes of each other
 Key: PHOENIX-5318
 URL: https://issues.apache.org/jira/browse/PHOENIX-5318
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0, 5.1.0, 4.14.3
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva


{code}
CREATE VIEW IF NOT EXISTS CUSTOM_ENTITY."z01" (COL1 VARCHAR, COL2 VARCHAR, COL3 
VARCHAR, COL4 VARCHAR CONSTRAINT PK PRIMARY KEY (COL1 DESC, COL2 DESC, COL3 
DESC, COL4 DESC)) AS SELECT * FROM CUSTOM_ENTITY.CUSTOM_ENTITY_DATA_NO_ID WHERE 
KEY_PREFIX = 'z01'; 

UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('8', 'blah', 
'blah', 'blah');
UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('6', 'blah', 
'blah', 'blah');
UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('23', 'blah', 
'blah', 'blah');
UPSERT INTO CUSTOM_ENTITY."z01" (COL1, COL2, COL3, COL4) VALUES ('17', 'blah', 
'blah', 'blah');

SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
(COL1='1' OR COL1='2' OR COL1='3' OR COL1='4' OR COL1='5' OR COL1='6' OR 
COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'

+------+------+------+------+
| COL1 | COL2 | COL3 | COL4 |
+------+------+------+------+
| 8    | blah | blah | blah |
| 6    | blah | blah | blah |
+------+------+------+------+

SELECT COL1, COL2, COL3, COL4 FROM CUSTOM_ENTITY."z01" WHERE COL4='blah' AND 
(COL1='6' OR COL1='8' OR COL1='17' OR COL1='12' OR COL1='23') AND COL3='blah'

+------+------+------+------+
| COL1 | COL2 | COL3 | COL4 |
+------+------+------+------+
| 8    | blah | blah | blah |
| 6    | blah | blah | blah |
| 23   | blah | blah | blah |
| 17   | blah | blah | blah |
+------+------+------+------+
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5311) Integration tests leak tables when running on distributed cluster

2019-06-06 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-5311:
---

Assignee: István Tóth

> Integration tests leak tables when running on distributed cluster
> -
>
> Key: PHOENIX-5311
> URL: https://issues.apache.org/jira/browse/PHOENIX-5311
> Project: Phoenix
>  Issue Type: Bug
>Reporter: István Tóth
>Assignee: István Tóth
>Priority: Major
> Attachments: PHOENIX-5311.master.v1.patch
>
>
> When the integration test suite is run via End2EndTestDriver on a distributed 
> cluster, most tests do not clean up their tables, leaving thousands of tables 
> on the cluster and exhausting RegionServer memory.
> There are actually three problems:
>  * The BaseTest.freeResourcesIfBeyondThreshold() method is called after most 
> tests, and it restarts the MiniCluster, thus freeing resources, but it has no 
> effect when running on a distributed cluster.
>  * The TestDriver sets phoenix.schema.dropMetaData to false by default, so 
> even if the Phoenix tables are dropped, the HBase tables are not, so the 
> table leak remains. 
>  * The phoenix.schema.dropMetaData setting cannot be easily overridden 
> because of PHOENIX-5310
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5316) Use callable instead of runnable so that Pherf exceptions cause tests to fail

2019-06-04 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5316:

Attachment: PHOENIX-5316-4.x-HBase-1.3.patch

> Use callable instead of runnable so that Pherf exceptions cause tests to fail
> -
>
> Key: PHOENIX-5316
> URL: https://issues.apache.org/jira/browse/PHOENIX-5316
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>  Labels: pherf
> Attachments: PHOENIX-5316-4.x-HBase-1.3.patch
>
>
> Also add support for BIGINT and TINYINT. 
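> 
> For context, a minimal, self-contained sketch of the difference (not the 
> Pherf code itself): an exception thrown inside a Runnable is swallowed by the 
> executor, while a Callable's failure is rethrown from Future.get(), so a test 
> that waits on the future fails loudly.
> {code:java}
> import java.util.concurrent.Callable;
> import java.util.concurrent.ExecutionException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> 
> public class CallableVsRunnable {
>     public static void main(String[] args) throws Exception {
>         ExecutorService pool = Executors.newSingleThreadExecutor();
> 
>         // With a Runnable, the exception never reaches the submitting
>         // thread; a test keeps running as if nothing failed.
>         Runnable silent = () -> { throw new RuntimeException("lost failure"); };
>         pool.execute(silent);
> 
>         // With a Callable, Future.get() rethrows the failure wrapped in an
>         // ExecutionException, which makes the waiting test fail.
>         Callable<Void> loud = () -> { throw new RuntimeException("visible failure"); };
>         Future<Void> future = pool.submit(loud);
>         try {
>             future.get();
>         } catch (ExecutionException e) {
>             System.out.println("Propagated: " + e.getCause().getMessage());
>         } finally {
>             pool.shutdown();
>         }
>     }
> }
> {code}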



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5316) Use callable instead of runnable so that Pherf exceptions cause tests to fail

2019-06-04 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5316:
---

 Summary: Use callable instead of runnable so that Pherf exceptions 
cause tests to fail
 Key: PHOENIX-5316
 URL: https://issues.apache.org/jira/browse/PHOENIX-5316
 Project: Phoenix
  Issue Type: Improvement
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva


Also add support for BIGINT and TINYINT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4993.
---

Bulk closing jiras for the 4.14.2 release.

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> The issue is related to the region server being killed when one region is 
> closing and another region is trying to write index updates.
> When the data table region closes, it will close the region-server-level 
> cached/shared connections, which could interrupt another region's 
> index/index-state updates.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to a 
> RejectedExecutionException or the connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get 
> the SYSCAT table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4864) Fix NullPointerException while Logging some DDL Statements

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4864.
---

Bulk closing jiras for the 4.14.2 release.

> Fix NullPointerException while Logging some DDL Statements
> --
>
> Key: PHOENIX-4864
> URL: https://issues.apache.org/jira/browse/PHOENIX-4864
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ashutosh Parekh
>Assignee: Ashutosh Parekh
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4864.patch
>
>
> We encounter a NullPointerException when the ResultSet is null, which happens 
> when certain types of DDL statements are executed. The following error occurs:
> java.lang.NullPointerException: null
>  at 
> org.apache.phoenix.jdbc.LoggingPhoenixResultSet.close(LoggingPhoenixResultSet.java:40)
>  at 
> org.apache.calcite.avatica.jdbc.JdbcMeta$StatementExpiryHandler.onRemoval(JdbcMeta.java:1105)
>  at 
> com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1963)
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5267) With namespaces enabled Phoenix client times out with high loads

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5267.
---

Bulk closing jiras for the 4.14.2 release.

> With namespaces enabled Phoenix client times out with high loads
> 
>
> Key: PHOENIX-5267
> URL: https://issues.apache.org/jira/browse/PHOENIX-5267
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.14.2
>
>
> Steps to reproduce:
>  * Enable namespaces for Phoenix 4.14.1 and HBase 1.3
>  * Run a high load using the Pherf client with 48 threads
> After some time the client hangs and gives a timeout exception:
> {code:java}
> [pool-1-thread-1] WARN org.apache.phoenix.pherf.workload.WriteWorkload -
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
> callDuration=1238263: Call to  failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, 
> waitTime=120001, operationTimeout=12 expired. row '^@TEST^@TABLE' on 
> table 'SYSTEM:CATALOG' at 
> region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
> hostname=, seqNum=2
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:239)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:189)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:165)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.phoenix.exception.PhoenixIOException: 
> callTimeout=120, callDuration=1238263: Call to  failed on local 
> exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=857, 
> waitTime=120001, operationTimeout=12 expired. row '^@TEST^@TABLE' on 
> table 'SYSTEM:CATALOG' at 
> region=SYSTEM:CATALOG,1556024429507.0f80d6de0a002d1421b8fd384e956254., 
> hostname=, seqNum=2
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1379)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1343)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1560)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:644)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:538)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:530)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:526)
> at 
> org.apache.phoenix.execute.MutationState.validateAndGetServerTimestamp(MutationState.java:755)
> at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:743)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:875)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1360)
> at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1183)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-3991) ROW_TIMESTAMP on TIMESTAMP column type throws ArrayOutOfBound when upserting without providing a value.

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-3991.
---

Bulk closing jiras for the 4.14.2 release.

> ROW_TIMESTAMP on TIMESTAMP column type throws ArrayOutOfBound when upserting 
> without providing a value.
> ---
>
> Key: PHOENIX-3991
> URL: https://issues.apache.org/jira/browse/PHOENIX-3991
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eric Belanger
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-3991-1.patch
>
>
> {code:sql}
> CREATE TABLE TEST (
>   CREATED TIMESTAMP NOT NULL,
>   ID CHAR(36) NOT NULL,
>   DEFINITION VARCHAR,
>   CONSTRAINT TEST_PK PRIMARY KEY (CREATED ROW_TIMESTAMP, ID)
> )
> -- WORKS
> UPSERT INTO TEST (CREATED, ID, DEFINITION) VALUES (NOW(), 'A', 'DEFINITION 
> A');
> -- ArrayOutOfBoundException
> UPSERT INTO TEST (ID, DEFINITION) VALUES ('A', 'DEFINITION A');
> {code}
> Stack Trace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>   at 
> org.apache.phoenix.execute.MutationState.getNewRowKeyWithRowTimestamp(MutationState.java:554)
>   at 
> org.apache.phoenix.execute.MutationState.generateMutations(MutationState.java:640)
>   at 
> org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:572)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1003)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4900) Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit on for deletes

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4900.
---

Bulk closing jiras for the 4.14.2 release.

> Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED 
> exception message to recommend turning autocommit on for deletes
> ---
>
> Key: PHOENIX-4900
> URL: https://issues.apache.org/jira/browse/PHOENIX-4900
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-4900-4.x-HBase-1.4.patch, PHOENIX-4900.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5084) Changes from Transactional Tables are not visible to query in different client

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5084.
---

Bulk closing jiras for the 4.14.2 release.

> Changes from Transactional Tables are not visible to query in different client
> --
>
> Key: PHOENIX-5084
> URL: https://issues.apache.org/jira/browse/PHOENIX-5084
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5084-v2.txt, PHOENIX-5084-v3.txt, 
> PHOENIX-5084-v4.txt, PHOENIX-5084.txt
>
>
> Scenario:
> # Upsert and commit some data into a transactional table (autocommit, or 
> followed by an explicit commit)
> # Query same table from another client
> The first query on the other client will not see the newly upserted/committed 
> data (regardless of how long one waits).
> A second identical query will see the new data.
> This happens with both Omid and Tephra.
> I guess we can't write a test for this, since it requires multiple JVMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5247) DROP TABLE and DROP VIEW commands fail to drop second or higher level child views

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5247.
---

Bulk closing jiras for the 4.14.2 release.

> DROP TABLE and DROP VIEW commands fail to drop second or higher level child 
> views
> -
>
> Key: PHOENIX-5247
> URL: https://issues.apache.org/jira/browse/PHOENIX-5247
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.14.2
>
> Attachments: PHOENIX-5247.4.14-HBase-1.2.001.patch, 
> PHOENIX-5247.4.14.1-HBase-1.2.001.patch
>
>
> We have seen a large number of orphan views in our production environments. 
> The method (doDropTable) that is used to drop tables and views drops only the 
> first-level child views of tables. This seems to be the main root cause of 
> orphan views. doDropTable() is recursive only when the table type is TABLE or 
> SYSTEM. The table type for views is VIEW. The findChildViews method returns 
> the first-level child views. So doDropTable ignores dropping views of views 
> (i.e., second or higher level views), as the sketch below illustrates.
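> 
> An illustrative sketch of the fix's shape (names are hypothetical, not the 
> actual MetaDataEndpointImpl code): recurse into child views regardless of the 
> table type, so second and higher level views are reached too.
> {code:java}
> import java.util.Collections;
> import java.util.List;
> 
> class DropSketch {
>     enum TableType { TABLE, SYSTEM, VIEW }
> 
>     void doDropTable(String name, TableType type) {
>         // findChildViews only returns first-level children, so the drop
>         // must recurse for VIEW types as well; previously the recursion
>         // stopped after one level and orphaned the grandchild views.
>         for (String child : findChildViews(name)) {
>             doDropTable(child, TableType.VIEW);
>         }
>         dropSingleTableOrView(name, type);
>     }
> 
>     // Stubs standing in for the real metadata lookups and deletes.
>     List<String> findChildViews(String name) { return Collections.emptyList(); }
>     void dropSingleTableOrView(String name, TableType type) { }
> }
> {code}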



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5188) IndexedKeyValue should populate KeyValue fields

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5188.
---

Bulk closing jiras for the 4.14.2 release.

> IndexedKeyValue should populate KeyValue fields
> ---
>
> Key: PHOENIX-5188
> URL: https://issues.apache.org/jira/browse/PHOENIX-5188
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5188-4.x-HBase-1.4..addendum.patch, 
> PHOENIX-5188-4.x-HBase-1.4.patch, PHOENIX-5188.patch
>
>
> IndexedKeyValue subclasses the HBase KeyValue class, which has three primary 
> fields: bytes, offset, and length. These fields aren't populated by 
> IndexedKeyValue because it's concerned with index mutations, and has its own 
> fields that its own methods use. 
> However, KeyValue and its Cell interface have quite a few methods that assume 
> these fields are populated, and the HBase-level factory methods generally 
> ensure they're populated. Phoenix code should do the same, to maintain the 
> polymorphic contract. This is important in cases like custom 
> ReplicationEndpoints where HBase-level code may be iterating over WALEdits 
> that contain both KeyValues and IndexKeyValues and may need to interrogate 
> their contents. 
> Since the index mutation has a row key, this is straightforward. 
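> 
> A hedged sketch of the idea (illustrative values, not the actual patch; 
> assumes HBase 1.x classes on the classpath): build a real key from the index 
> mutation's row so the inherited bytes/offset/length fields are populated and 
> the Cell accessors work.
> {code:java}
> import org.apache.hadoop.hbase.KeyValue;
> import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class IndexedKeyValueSketch {
>     public static void main(String[] args) {
>         byte[] indexRow = Bytes.toBytes("index-row-key"); // from the index mutation
>         // Constructing through a regular KeyValue constructor fills in the
>         // backing byte array, offset, and length, so generic code iterating
>         // a WALEdit can safely call getRowArray()/getRowOffset()/getRowLength().
>         KeyValue kv = new KeyValue(indexRow, WALEdit.METAFAMILY,
>                 Bytes.toBytes("IDX"), 0L, KeyValue.Type.Put);
>         System.out.println(Bytes.toString(
>                 kv.getRowArray(), kv.getRowOffset(), kv.getRowLength()));
>     }
> }
> {code}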



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4853) Add sql statement to PhoenixMetricsLog interface for query level metrics logging

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4853.
---

Bulk closing jiras for the 4.14.2 release.

> Add sql statement to PhoenixMetricsLog interface for query level metrics 
> logging
> 
>
> Key: PHOENIX-4853
> URL: https://issues.apache.org/jira/browse/PHOENIX-4853
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> We get query level metrics when we try to close the 
> {{LoggingPhoenixResultSet}} object. It is better to add the SQL statement to 
> the PhoenixMetricsLog interface so that we can attach the metrics to the 
> exact SQL statement. This helps in debugging whenever we determine that a 
> particular query is taking a long time to run.
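> 
> A hedged sketch of the interface change (method and type names are 
> illustrative, not the exact Phoenix API): the SQL text travels alongside the 
> metrics so a log line can be tied back to the query that produced it.
> {code:java}
> import java.util.Map;
> 
> public interface MetricsLogSketch {
>     // Before: metrics alone; nothing links them to the statement that
>     // produced them.
>     // void logOverAllReadRequestMetrics(Map<String, Long> metrics);
> 
>     // After: the statement text is passed in, so a slow query can be
>     // identified directly from the metrics log.
>     void logOverAllReadRequestMetrics(Map<String, Long> metrics, String sql);
> }
> {code}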



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5291) Ensure that Phoenix coprocessor close all scanners.

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5291.
---

Bulk closing jiras for the 4.14.2 release.

> Ensure that Phoenix coprocessor close all scanners.
> ---
>
> Key: PHOENIX-5291
> URL: https://issues.apache.org/jira/browse/PHOENIX-5291
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: 5291-v2.txt, 5291-v3-master.txt, 5291-v3.txt, 5291.txt
>
>
> With HBase 1.5 and later this is a disaster, as it causes the wrong reference 
> counting of HFiles in HBase, and those subsequently will *never* be removed 
> until the region closes and reopens for any reason.
> We found at least two cases... See comments below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5169) Query logger is still initialized for each query when the log level is off

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5169.
---

Bulk closing jiras for the 4.14.2 release.

> Query logger is still initialized for each query when the log level is off
> --
>
> Key: PHOENIX-5169
> URL: https://issues.apache.org/jira/browse/PHOENIX-5169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5169-master-v2.patch, 
> PHOENIX-5169-master-v3.patch, PHOENIX-5169-master-v4.patch, 
> PHOENIX-5169-master.patch, image-2019-02-28-10-05-00-518.png
>
>
> We still invoke createQueryLogger in PhoenixStatement for each query even 
> when the query logger level is OFF, which significantly impacts throughput 
> under multiple threads.
> Below is a jstack capture taken while running concurrent queries:
> !https://gw.alicdn.com/tfscom/TB1HC3bI4TpK1RjSZFMXXbG_VXa.png|width=500,height=400!
>  
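> 
> A minimal sketch of the fix's shape (illustrative names, not the actual 
> PhoenixStatement code): hand out a shared no-op logger when logging is off 
> instead of constructing a fresh logger for every query.
> {code:java}
> public class QueryLoggerSketch {
>     enum LogLevel { OFF, INFO, DEBUG, TRACE }
> 
>     static final QueryLoggerSketch NO_OP = new QueryLoggerSketch();
> 
>     // Hot path: no allocation or logger setup at all when logging is OFF.
>     static QueryLoggerSketch create(LogLevel connectionLogLevel) {
>         return connectionLogLevel == LogLevel.OFF ? NO_OP : new QueryLoggerSketch();
>     }
> 
>     public static void main(String[] args) {
>         // Every statement at level OFF shares the same instance.
>         System.out.println(create(LogLevel.OFF) == create(LogLevel.OFF)); // true
>     }
> }
> {code}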



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5073) Invalid PIndexState during rolling upgrade from 4.13 to 4.14

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5073.
---

Bulk closing jiras for the 4.14.2 release.

> Invalid PIndexState during rolling upgrade from 4.13 to 4.14
> 
>
> Key: PHOENIX-5073
> URL: https://issues.apache.org/jira/browse/PHOENIX-5073
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5073-4.14-HBase-1.3.01.patch, 
> PHOENIX-5073-4.x-HBase-1.3.001.patch, PHOENIX-5073-4.x-HBase-1.3.002.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.1.patch, 
> PHOENIX-5073-4.x-HBase-1.3.003.2.patch, PHOENIX-5073-4.x-HBase-1.3.003.patch, 
> PHOENIX-5073-master-01.patch
>
>
> While doing a rolling upgrade from 4.13 to 4.14 we are seeing this exception. 
> {code:java}
> 2018-08-20 09:00:34,980 WARN  [pool-1-thread-1] workload.WriteWorkload - 
> java.util.concurrent.ExecutionException: java.sql.SQLException: 
> java.lang.IllegalArgumentException: Unable to PIndexState enum for serialized 
> value of 'w'
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.waitForBatches(WriteWorkload.java:233)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.exec(WriteWorkload.java:183)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload.access$100(WriteWorkload.java:56)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$1.run(WriteWorkload.java:159)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: java.lang.IllegalArgumentException: Unable 
> to PIndexState enum for serialized value of 'w'
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1322)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1284)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1501)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:581)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:504)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:496)
>     at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:492)
>     at 
> org.apache.phoenix.execute.MutationState.validate(MutationState.java:780)
>     at 
> org.apache.phoenix.execute.MutationState.validateAll(MutationState.java:768)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:980)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>     at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:291)
>     at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:250)
>     ... 4 more
> Caused by: java.lang.IllegalArgumentException: Unable to PIndexState enum for 
> serialized value of 'w'
>     at 
> org.apache.phoenix.schema.PIndexState.fromSerializedValue(PIndexState.java:81)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1222)
>     at 
> org.apache.phoenix.schema.PTableImpl.createFromProto(PTableImpl.java:1246)
>     at 
> org.apache.phoenix.coprocessor.MetaDataProtocol$MetaDataMutationResult.constructFromProto(MetaDataProtocol.java:330)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1314){code}
>  
> Steps to reproduce:
>  # Start the server on 4.14
>  # Start load with both 4.13 and 4.14 clients
>  # The 4.13 client will show the above error (but only when the index state 
> transitions to PENDING_DISABLE, a state that is not defined in 4.13) 
>  



--
This message was sent by Atlassian JIRA

[jira] [Closed] (PHOENIX-4854) Make LoggingPhoenixResultSet idempotent when logging metrics

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4854.
---

Bulk closing jiras for the 4.14.2 release.

> Make LoggingPhoenixResultSet idempotent when logging metrics
> 
>
> Key: PHOENIX-4854
> URL: https://issues.apache.org/jira/browse/PHOENIX-4854
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> The ResultSet close method can be called multiple times, and the 
> LoggingResultSet object tries to call the PhoenixMetricsLog methods every 
> single time. These per-query metrics don't get cleared up; rather, they all 
> read as "0" once they have been consumed and reset. This Jira is an 
> enhancement to make the class idempotent.
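> 
> A minimal sketch of the idempotency guard (assuming a delegating wrapper; 
> names are illustrative): emit the metrics on the first close() only, while 
> still forwarding every close() to the delegate.
> {code:java}
> import java.sql.ResultSet;
> import java.sql.SQLException;
> 
> class LoggingResultSetSketch {
>     private final ResultSet delegate;
>     private boolean metricsLogged = false;
> 
>     LoggingResultSetSketch(ResultSet delegate) { this.delegate = delegate; }
> 
>     public synchronized void close() throws SQLException {
>         // JDBC allows close() to be called repeatedly; only the first call
>         // should emit the per-query metrics, because they read back as "0"
>         // once consumed and reset.
>         if (!metricsLogged) {
>             logMetricsOnClose();
>             metricsLogged = true;
>         }
>         delegate.close();
>     }
> 
>     private void logMetricsOnClose() { /* hand metrics to the metrics log here */ }
> }
> {code}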



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5246.
---

Bulk closing jiras for the 4.14.2 release.

> PhoenixAccessControllers.getAccessControllers() method is not correctly 
> implementing the double-checked locking
> ---
>
> Key: PHOENIX-5246
> URL: https://issues.apache.org/jira/browse/PHOENIX-5246
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5246.4.x-HBase-1.3.v1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> By [~elserj] on PHOENIX-5070: 
> It looks to me like the getAccessControllers() method is not correctly 
> implementing the double-checked locking "approach" as per 
> https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
> accessControllers variable must be volatile).
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?
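> 
> A minimal, self-contained sketch of safe double-checked locking (illustrative 
> names, not the Phoenix patch): the field must be volatile, or the pattern can 
> publish a partially constructed object under the Java memory model.
> {code:java}
> public class LazyHolder {
>     static class AccessControllers { /* expensive to build */ }
> 
>     // Without volatile, another thread may observe a non-null reference to
>     // an object whose fields are not yet fully written.
>     private volatile AccessControllers instance;
> 
>     AccessControllers get() {
>         AccessControllers local = instance;   // single volatile read
>         if (local == null) {
>             synchronized (this) {
>                 local = instance;
>                 if (local == null) {
>                     instance = local = new AccessControllers();
>                 }
>             }
>         }
>         return local;
>     }
> }
> {code}
> An AtomicReference avoids the explicit lock, but a compareAndSet-based version 
> may construct the object more than once; the synchronized block above builds 
> it exactly once.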



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5026) Add client setting to disable server side mutations

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5026.
---

Bulk closing jiras for the 4.14.2 release.

> Add client setting to disable server side mutations
> ---
>
> Key: PHOENIX-5026
> URL: https://issues.apache.org/jira/browse/PHOENIX-5026
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 4.14.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: 5026-withtests-v2.txt, 5026-withtests.txt, 5026.txt
>
>
> Like PHOENIX-3818 server side deletes.
> We've seen issues with larger deletes (see PHOENIX-5007).
> In many case it is probably better to handle deletes from the client. That 
> way requests are properly chunked, handler threads are not tied up, and 
> there's no "funniness" with issues mutation from a scan RPC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4822) The configuration "phoenix.query.dateFormatTimeZone" doesn't work on the client

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4822.
---

Bulk closing jiras for the 4.14.2 release.

> The configuration "phoenix.query.dateFormatTimeZone" doesn't work on the client
> --
>
> Key: PHOENIX-4822
> URL: https://issues.apache.org/jira/browse/PHOENIX-4822
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0, 4.13.0, 4.14.0, 5.0.0
>Reporter: jaanai
>Assignee: jaanai
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4822.patch, PHOENIX-4822_4.14.0-HBase-1.4.patch, 
> PHOENIX-4822_5.x-HBase-2.0.patch, PHOENIX-4822_master.patch, 
> PHOENIX-4822_v2.patch, PHOENIX-4822_v3.patch
>
>
> When the configuration "phoenix.query.dateFormatTimeZone" is added to 
> hbase-site.xml or to the Properties of the Connection, it does not take 
> effect; the time zone still defaults to GMT.
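> 
> For reference, a sketch of how the setting is supplied on the client 
> (standard JDBC calls; the bug is that the value does not take effect):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.util.Properties;
> 
> public class TimeZoneConfigSketch {
>     public static void main(String[] args) throws Exception {
>         // Connection properties route; hbase-site.xml sets the same key
>         // cluster-wide.
>         Properties props = new Properties();
>         props.setProperty("phoenix.query.dateFormatTimeZone", "America/Los_Angeles");
>         try (Connection conn =
>                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
>             // Date/time functions should honor the configured zone here,
>             // but per this bug they still use GMT.
>         }
>     }
> }
> {code}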



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5070) NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5070.
---

Bulk closing jiras for the 4.14.2 release.

> NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in 
> secure setup
> -
>
> Key: PHOENIX-5070
> URL: https://issues.apache.org/jira/browse/PHOENIX-5070
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5070-4.x-HBase-1.3.01.patch, 
> PHOENIX-5070-4.x-HBase-1.3.02.patch, PHOENIX-5070-4.x-HBase-1.3.03.patch, 
> PHOENIX-5070.patch
>
>
> PhoenixAccessController populates accessControllers during calls like 
> loadTable, before it checks whether the current user has all the required 
> permissions for the given HBase table and schema. 
> With [PHOENIX-4661|https://issues.apache.org/jira/browse/PHOENIX-4661], we 
> somehow removed this for only the preGetTable call. Because of this, when we 
> upgrade Phoenix from 4.13.0 to 4.14.1, we get an NPE for accessControllers in 
> PhoenixAccessController#getUserPermissions. 
> Here is the exception stack trace:
>  
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>  org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NullPointerException
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:598)
> at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16357)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2208)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2190)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:409)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:403)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:453)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:434)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:403)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:482)
> at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:104)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:161)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:81)
> at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
> at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:563)
> ... 9 more
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1291)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:35542)
> at 
> 

[jira] [Closed] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5094.
---

Bulk closing jiras for the 4.14.2 release.

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running 
> continuously. While the index is INACTIVE, the client will keep maintaining it.
> Before the rebuilder can run and bring the index back in sync with the data 
> table, if some index mutation fails on the client side, the client will 
> transition the index state (from INACTIVE to PENDING_DISABLE).
> If the client then succeeds in writing the mutation in subsequent retries, it 
> will transition the index state again (from PENDING_DISABLE to ACTIVE).
> This scenario leaves part of the index out of sync with the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5173) LIKE and ILIKE statements return empty result list for search without wildcard

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5173.
---

Bulk closing jiras for the 4.14.2 release.

> LIKE and ILIKE statements return empty result list for search without wildcard
> --
>
> Key: PHOENIX-5173
> URL: https://issues.apache.org/jira/browse/PHOENIX-5173
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Emiliia Nesterovych
>Assignee: Swaroopa Kadam
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5173.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5173.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I expect these two statements to return same result, as MySql does:
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME = 'Some Name';
> {code}
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME LIKE 'Some Name';
> {code}
> But while there is data matching these queries, the statement with the LIKE 
> operator returns an empty result set. The same affects the ILIKE operator. 
>  Create table SQL is:
> {code:java}
> CREATE SCHEMA IF NOT EXISTS my_schema;
> CREATE TABLE my_schema.user (USER_NAME VARCHAR(255), ID BIGINT NOT NULL 
> PRIMARY KEY);{code}
> Fill up query:
> {code:java}
> UPSERT INTO my_schema.user VALUES('Some Name', 1);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5123) Avoid using MappedByteBuffers for server side GROUP BY

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5123.
---

Bulk closing jiras for the 4.14.2 release.

> Avoid using MappedByteBuffers for server side GROUP BY
> --
>
> Key: PHOENIX-5123
> URL: https://issues.apache.org/jira/browse/PHOENIX-5123
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: 5123-4.x-W.I.P.txt, 5123-4.x-v1.txt
>
>
> Like PHOENIX-5120 but for GROUP BY.
> The solution is a bit trickier since, unlike for sorting, the access here is
> truly random.
> [~apurtell] suggests perhaps just using a RandomAccessFile for this.
> (I'm not sure what that uses under the hood, though.)
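> A rough sketch of the RandomAccessFile idea (assuming fixed-size serialized
> entries; names hypothetical) — plain seek/read calls rather than
> MappedByteBuffers:
> {code:java}
> import java.io.IOException;
> import java.io.RandomAccessFile;
> 
> // Sketch: spill fixed-size aggregate entries to a plain file and read them
> // back with seek(), avoiding MappedByteBuffers entirely.
> final class SpillFile implements AutoCloseable {
>     private final RandomAccessFile file;
>     private final int entrySize;
> 
>     SpillFile(java.io.File f, int entrySize) throws IOException {
>         this.file = new RandomAccessFile(f, "rw");
>         this.entrySize = entrySize;
>     }
> 
>     void write(long index, byte[] entry) throws IOException {
>         file.seek(index * entrySize);   // random access by entry index
>         file.write(entry, 0, entrySize);
>     }
> 
>     byte[] read(long index) throws IOException {
>         byte[] entry = new byte[entrySize];
>         file.seek(index * entrySize);
>         file.readFully(entry);
>         return entry;
>     }
> 
>     @Override public void close() throws IOException { file.close(); }
> }
> {code}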



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5048) Index Rebuilder does not handle INDEX_STATE timestamp check for all index

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5048.
---

Bulk closing jiras for the 4.14.2 release.

> Index Rebuilder does not handle INDEX_STATE timestamp check for all index
> -
>
> Key: PHOENIX-5048
> URL: https://issues.apache.org/jira/browse/PHOENIX-5048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 5.0.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5048.patch, PHOENIX-5048.v2.patch, 
> PHOENIX-5048.v3.patch, PHOENIX-5048.v4.patch, PHOENIX-5048.v5.patch
>
>
> After the rebuilder finishes a partial index rebuild, it checks whether the
> index state was updated after the upper bound of the scan used for the
> partial rebuild. If so, it fails the index rebuild, because an index write
> failure occurred while the index was being rebuilt.
> {code:java}
> MetaDataEndpointImpl.java#updateIndexState()
> public void updateIndexState(RpcController controller, 
> UpdateIndexStateRequest request,
> RpcCallback done) {
> ...
> // If the index status has been updated after the upper bound of the scan we 
> use
> // to partially rebuild the index, then we need to fail the rebuild because an
> // index write failed before the rebuild was complete.
> if (actualTimestamp > expectedTimestamp) {
> builder.setReturnCode(MetaDataProtos.MutationCode.UNALLOWED_TABLE_MUTATION);
> builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
> done.run(builder.build());
> return;
> }
> ...
> }{code}
> After the introduction of TrackingParallelWriterIndexCommitter
> [PHOENIX-3815|https://issues.apache.org/jira/browse/PHOENIX-3815], we only
> disable the index that had the failure. Before that, in
> ParallelWriterIndexCommitter, we disabled all indexes even if the failure
> happened for only one index.
> Suppose the data table has 3 indexes and the above condition becomes true for
> the first index; then we never even check the remaining two indexes.
> {code:java}
> MetaDataRegionObserver.java#BuildIndexScheduleTask.java#run()
> for (PTable indexPTable : indexesToPartiallyRebuild) {
> String indexTableFullName = SchemaUtil.getTableName(
> indexPTable.getSchemaName().getString(),
> indexPTable.getTableName().getString());
> if (scanEndTime == latestUpperBoundTimestamp) {
> IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L, 
> latestUpperBoundTimestamp);
> batchExecutedPerTableMap.remove(dataPTable.getName());
> LOG.info("Making Index:" + indexPTable.getTableName() + " active after 
> rebuilding");
> } else {
> // Increment timestamp so that client sees updated disable timestamp
> IndexUtil.updateIndexState(conn, indexTableFullName, 
> indexPTable.getIndexState(), scanEndTime * signOfDisableTimeStamp, 
> latestUpperBoundTimestamp);
> Long noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
> if (noOfBatches == null) {
> noOfBatches = 0l;
> }
> batchExecutedPerTableMap.put(dataPTable.getName(), ++noOfBatches);
> LOG.info("During Round-robin build: Successfully updated index disabled 
> timestamp for "
> + indexTableFullName + " to " + scanEndTime);
> }
> }
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4296) Dead loop in HBase reverse scan when amount of scan data is greater than SCAN_RESULT_CHUNK_SIZE

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4296.
---

Bulk closing jiras for the 4.14.2 release.

> Dead loop in HBase reverse scan when amount of scan data is greater than 
> SCAN_RESULT_CHUNK_SIZE
> ---
>
> Key: PHOENIX-4296
> URL: https://issues.apache.org/jira/browse/PHOENIX-4296
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: rukawakang
>Assignee: Chen Feng
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4296-4.x-HBase-1.2-v2.patch, 
> PHOENIX-4296-4.x-HBase-1.2-v3.patch, PHOENIX-4296-4.x-HBase-1.2-v4.patch, 
> PHOENIX-4296-4.x-HBase-1.2.patch, PHOENIX-4296.patch
>
>
> This problem seems to occur only with reverse scans, not forward scans. When
> the amount of scanned data is greater than SCAN_RESULT_CHUNK_SIZE (default
> 2999), ChunkedResultIteratorFactory calls getResultIterator multiple times.
> But getResultIterator always readjusts startRow; in a reverse scan we should
> instead readjust stopRow. For example:
> {code:java}
> if (ScanUtil.isReversed(scan)) {
> scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> } else {
> scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5008) CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5008.
---

Bulk closing jiras for the 4.14.2 release.

> CQSI.init should not bubble up RetriableUpgradeException to client in case of 
> an UpgradeRequiredException
> -
>
> Key: PHOENIX-5008
> URL: https://issues.apache.org/jira/browse/PHOENIX-5008
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5008-4.x-HBase-1.3_addendum.patch, 
> PHOENIX-5008.patch
>
>
> Inside _ConnectionQueryServicesImpl_._init_, if we catch a
> _RetriableUpgradeException_, we re-throw this exception. In its caller
> methods, for example _PhoenixDriver.getConnectionQueryServices_, this is
> caught as a _SQLException_, which fails the initialization of the
> ConnectionQueryServices and removes the new CQS object from the cache.
> In the case that the _RetriableUpgradeException_ is an instance of an 
> _UpgradeNotRequiredException_ or an _UpgradeInProgressException_, this can 
> only occur when we attempt to upgrade system tables, either wrongly or 
> concurrently when there is an ongoing attempt for the same. In this case, it 
> is fine to bubble the exception up to the end client and the client will 
> subsequently have to re-attempt to create a connection (calling CQS.init 
> again).
> However, if the _RetriableUpgradeException_ is an instance of an 
> _UpgradeRequiredException_,  the end-client will never be able to get a 
> connection and thus will never be able to manually run "EXECUTE UPGRADE". In 
> this case, instead of re-throwing the exception, we should log that the 
> client must manually run "EXECUTE UPGRADE" before being able to run any other 
> commands and let the CQS.init succeed. Thus, the client will get a connection 
> which has "upgradeRequired" set and this connection will fail for any query 
> except "EXECUTE UPGRADE".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4781.
---

Bulk closing jiras for the 4.14.2 release.

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.4.x-HBase-1.4.v4.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v5.patch, PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used to deploy built artifacts to the repository
> provided by the `distributionManagement` tag. The names of the files to be
> uploaded are either derived from the project's pom file, or the plugin
> generates a temporary one on its own.
> For the `phoenix-client` project, we essentially create a shaded uber jar
> that contains all dependencies and provide the project pom file for the
> plugin to work. `maven-jar-plugin` is disabled for the project, so the shade
> plugin effectively packages the jar. The final name of the shaded jar is
> defined as `phoenix-${project.version}\-client`, which differs from the
> standard Maven convention based on the pom file (artifact and group id),
> `phoenix-client-${project.version}`.
> This causes `maven-deploy-plugin` to fail, since it is unable to find any
> artifacts to be published.
> `maven-install-plugin` works correctly, and hence it installs the correct jar
> in the local repo.
> The same applies to the `phoenix-pig` project as well. However, we do need
> the regular jar for that project in the repo. I am not even sure why we
> create a shaded jar for that project.
> I will put up a three-line patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5207) Create index if not exists fails incorrectly if table has 'maxIndexesPerTable' indexes already

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5207.
---

Bulk closing jiras for the 4.14.2 release.

> Create index if not exists fails incorrectly if table has 
> 'maxIndexesPerTable' indexes already 
> ---
>
> Key: PHOENIX-5207
> URL: https://issues.apache.org/jira/browse/PHOENIX-5207
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5207-4.14-HBase-1.4.patch, 
> PHOENIX-5207-master.patch
>
>
> If a table already has 'maxIndexesPerTable' indexes and we try to create
> another one that already exists, we should not throw 'ERROR 1047 (43A04):
> Too many indexes have already been created', since we've already put 'IF NOT
> EXISTS' in the statement.
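> A sketch of the intended check (hypothetical names and helpers, not the
> actual patch): resolve IF NOT EXISTS before the max-index check.
> {code:java}
> import java.sql.SQLException;
> 
> // Hypothetical sketch: an existing index plus IF NOT EXISTS is a no-op and
> // must never count against the max-indexes-per-table limit.
> abstract class CreateIndexSketch {
>     static final int MAX_INDEXES_PER_TABLE = 10; // assumed config value
> 
>     void createIndex(String indexName, boolean ifNotExists) throws SQLException {
>         if (indexExists(indexName)) {
>             if (ifNotExists) return; // no-op: no "too many indexes" error
>             throw new SQLException("Index already exists: " + indexName);
>         }
>         if (indexCount() >= MAX_INDEXES_PER_TABLE) {
>             throw new SQLException("ERROR 1047 (43A04): Too many indexes");
>         }
>         doCreate(indexName);
>     }
> 
>     abstract boolean indexExists(String name);
>     abstract int indexCount();
>     abstract void doCreate(String name);
> }
> {code}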



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5069) Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5069.
---

Bulk closing jiras for the 4.14.2 release.

> Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache
> ---
>
> Key: PHOENIX-5069
> URL: https://issues.apache.org/jira/browse/PHOENIX-5069
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.001.patch, 
> PHOENIX-5069-4.14.1-hbase-1.3-phoenix-stats.002.patch, 
> PHOENIX-5069.4.x-HBase-1.3.001.patch, PHOENIX-5069.4.x-HBase-1.4.001.patch, 
> PHOENIX-5069.master.001.patch, PHOENIX-5069.master.002.patch, 
> PHOENIX-5069.master.003.patch, PHOENIX-5069.master.004.patch, 
> PHOENIX-5069.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> The current Phoenix Stats Cache uses a TTL-based eviction policy. A cached
> entry expires after a given amount of time (900s by default) has passed since
> the entry was created. This leads to a cache miss the next time the
> Compiler/Optimizer fetches stats from the cache. Fetching stats from the
> cache is a blocking operation — on a cache miss there is a round trip over
> the wire to scan the SYSTEM.STATS table, get the latest stats, rebuild the
> cache, and finally return the stats to the Compiler/Optimizer. Whenever there
> is a cache miss, this blocking call causes a significant performance penalty
> and periodic spikes.
> *This Jira suggests using an asynchronous refresh mechanism to provide a
> non-blocking cache. For details, please see the linked design document.*
> [~karanmehta93] [~twdsi...@gmail.com] [~dbwong] [~elserj] [~an...@apache.org] 
> [~sergey soldatov] 
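> The gist of a non-blocking refresh with Guava (a sketch only; the actual
> design is in the linked document): {{refreshAfterWrite}} keeps serving the
> stale entry and reloads it in the background.
> {code:java}
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
> import com.google.common.cache.CacheBuilder;
> import com.google.common.cache.CacheLoader;
> import com.google.common.cache.LoadingCache;
> import com.google.common.util.concurrent.ListenableFuture;
> import com.google.common.util.concurrent.ListeningExecutorService;
> import com.google.common.util.concurrent.MoreExecutors;
> 
> // Sketch only: readers never block on a SYSTEM.STATS round trip after the
> // first load; refreshes happen asynchronously on a small pool.
> class StatsCacheSketch {
>     private final ListeningExecutorService pool =
>             MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(4));
> 
>     final LoadingCache<String, Long> cache = CacheBuilder.newBuilder()
>             .refreshAfterWrite(900, TimeUnit.SECONDS)
>             .build(new CacheLoader<String, Long>() {
>                 @Override public Long load(String table) {
>                     return scanSystemStats(table); // blocking, first load only
>                 }
>                 @Override public ListenableFuture<Long> reload(final String table, Long old) {
>                     return pool.submit(new Callable<Long>() {
>                         @Override public Long call() { return scanSystemStats(table); }
>                     });
>                 }
>             });
> 
>     Long scanSystemStats(String table) { return 0L; /* stand-in for the scan */ }
> }
> {code}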



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4870) LoggingPhoenixConnection should log metrics when AutoCommit is set to True.

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4870.
---

Bulk closing jiras for the 4.14.2 release.

> LoggingPhoenixConnection should log metrics when AutoCommit is set to True.
> ---
>
> Key: PHOENIX-4870
> URL: https://issues.apache.org/jira/browse/PHOENIX-4870
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4870-4.x-HBase-1.4.patch, PHOENIX-4870.patch
>
>
> When LoggingPhoenixConnection calls commit or close, metrics logs are written
> properly. However, when LoggingPhoenixConnection is explicitly set with
> AutoCommit as true, metrics don't get logged at all. This bug can be
> reproduced by adding the following test scenario to the
> PhoenixLoggingMetricsIT.java class.
> {code:java}
> @Test
> public void testPhoenixMetricsLoggedOnAutoCommit() throws Exception {
> // Autocommit is turned on explicitly
> loggedConn.setAutoCommit(true);
> //with executeUpdate() method
> // run SELECT to verify read metrics are logged
> String query = "SELECT * FROM " + tableName1;
> verifyQueryLevelMetricsLogging(query);
> // run UPSERT SELECT to verify mutation metrics are logged
> String upsertSelect = "UPSERT INTO " + tableName2 + " SELECT * FROM " + 
> tableName1;
> loggedConn.createStatement().executeUpdate(upsertSelect);
> // Autocommit is turned on explicitly
> // Hence mutation metrics are expected during implicit commit
> assertTrue("Mutation write metrics are not logged for " + tableName2,
> mutationWriteMetricsMap.size()  > 0);
> assertTrue("Mutation read metrics for not found for " + tableName1,
> mutationReadMetricsMap.get(tableName1).size() > 0);
> //with execute() method
> loggedConn.createStatement().execute(upsertSelect);
> // Autocommit is turned on explicitly
> // Hence mutation metrics are expected during implicit commit
> assertTrue("Mutation write metrics are not logged for " + tableName2,
> mutationWriteMetricsMap.size()  > 0);
> assertTrue("Mutation read metrics for not found for " + tableName1,
> mutationReadMetricsMap.get(tableName1).size() > 0);
> clearAllTestMetricMaps();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5137.
---

Bulk closing jiras for the 4.14.2 release.

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5137-4.14-HBase-1.3.02.patch, 
> PHOENIX-5137-4.14-Hbase-1.3.01.patch, PHOENIX-5137-4.14-Hbase-1.3.01.patch, 
> PHOENIX-5137-4.x-HBase-1.3.01.patch, PHOENIX-5137-master.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> In order to differentiate between the index rebuilder retries
> (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that
> happen in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen(),
> as part of PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices:
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, as the check for region closing only happens
> when blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Plan is to have the check for region closing at least once before committing 
> the batch
> {code:java}
> checkForRegionClosing();
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> Steps to reproduce 
> 1. Create a table with one index (startime) 
> 2. Add 1-2 million rows 
> 3. Wait till the index is active 
> 4. Disable the index with start time (noted in step 1) 
> 5. Once the rebuilder starts split data table region 
> Repeat the steps again after applying the patch to check the difference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4835) LoggingPhoenixConnection should log metrics upon connection close

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4835.
---

Bulk closing jiras for the 4.14.2 release.

> LoggingPhoenixConnection should log metrics upon connection close
> -
>
> Key: PHOENIX-4835
> URL: https://issues.apache.org/jira/browse/PHOENIX-4835
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4835.4.x-HBase-1.4.001.patch, 
> PHOENIX-4835.4.x-HBase-1.4.002.patch
>
>
> {{LoggingPhoenixConnection}} currently logs metrics upon {{commit()}}, which
> can miss logging metrics entirely if commit is never called. We should move
> the logging to the {{close()}} method instead.
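> A minimal sketch of the change (hypothetical wrapper, not the actual class):
> {code:java}
> import java.sql.Connection;
> import java.sql.SQLException;
> 
> // Sketch: logging in close() guarantees metrics are emitted even when
> // commit() is never called on the connection.
> class LoggingConnectionSketch implements AutoCloseable {
>     private final Connection delegate;
> 
>     LoggingConnectionSketch(Connection delegate) { this.delegate = delegate; }
> 
>     @Override public void close() throws SQLException {
>         try {
>             logMetrics(); // previously only triggered from commit()
>         } finally {
>             delegate.close();
>         }
>     }
> 
>     void logMetrics() { /* read and emit the accumulated metrics */ }
> }
> {code}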



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5217) Incorrect result for COUNT DISTINCT limit

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5217.
---

Bulk closing jiras for the 4.14.2 release.

> Incorrect result for COUNT DISTINCT limit 
> --
>
> Key: PHOENIX-5217
> URL: https://issues.apache.org/jira/browse/PHOENIX-5217
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: 4.14.1: incorrect
> 4.6: correct.
>  
>Reporter: Chen Feng
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5217-4.14-HBase-1.4.patch, 
> PHOENIX-5217_v1-4.x-HBase-1.4.patch, PHOENIX-5217_v2-master.patch
>
>
> For table t1(pk1, col1, CONSTRAINT(pk1)):
> upsert into "t1" values (1, 1);
> upsert into "t1" values (2, 2);
> sql A: select count("pk1") from "t1" limit 1, returns 2 [correct]
> sql B: select count(distinct("pk1")) from "t1" limit 1, returns 1 [incorrect]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5005.
---

Bulk closing jiras for the 4.14.2 release.

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch, PHOENIX-5005.4.x-HBase-1.4.v3.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4750) Resolve server customizers and provide them to Avatica

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4750.
---

Bulk closing jiras for the 4.14.2 release.

> Resolve server customizers and provide them to Avatica
> --
>
> Key: PHOENIX-4750
> URL: https://issues.apache.org/jira/browse/PHOENIX-4750
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
>  Labels: queryserver
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4750.patch, PHOENIX-4750.v2.patch, 
> PHOENIX-4750.v3.patch, PHOENIX-4750.v4.patch, PHOENIX-4750.v5.patch
>
>
> CALCITE-2284 allows finer grained customization of the underlying Avatica 
> HttpServer.
> Resolve server customizers on the PQS classpath and provide them to the 
> HttpServer builder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-3413) Ineffective null check in LiteralExpression#newConstant()

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-3413.
---

Bulk closing jiras for the 4.14.2 release.

> Ineffective null check in LiteralExpression#newConstant()
> -
>
> Key: PHOENIX-3413
> URL: https://issues.apache.org/jira/browse/PHOENIX-3413
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.1
>Reporter: Ted Yu
>Assignee: Kevin Liew
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-3413.2.patch, PHOENIX-3413.3.patch, 
> PHOENIX-3413.patch
>
>
> {code}
> if (maxLength == null) {
> maxLength = type == null || !type.isFixedWidth() ? null : 
> type.getMaxLength(value);
> }
> {code}
> The null check for type is ineffective - type is de-referenced in various 
> places prior to the above check.
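> A sketch of the fix direction (simplified, not the actual patch): resolve the
> null-ness of {{type}} once, before any dereference.
> {code:java}
> import org.apache.phoenix.schema.types.PDataType;
> 
> // Sketch: the null check only helps if it runs before `type` is used.
> final class MaxLengthSketch {
>     static Integer maxLengthFor(PDataType type, Object value, Integer maxLength) {
>         if (type == null) {
>             return maxLength; // nothing to derive from a null type
>         }
>         if (maxLength == null && type.isFixedWidth()) {
>             maxLength = type.getMaxLength(value);
>         }
>         return maxLength;
>     }
> }
> {code}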



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5172.
---

Bulk closing jiras for the 4.14.2 release.

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: phoenix-5172-4.x-1.3.patch, 
> phoenix-5172.4.x-HBase-1.3.v1.patch, phoenix-5172.4.x-HBase-1.3.v2.patch, 
> phoenix-5172.4.x-HBase-1.3.v3.patch, phoenix-5172.4.x-HBase-1.3.v4.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> # Add retry logic for getting the connection URL (see the sketch below)
> # Remove assigning schema_name to null
> # Add more logging
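> A sketch of the retry wrapper for item 1 (hypothetical names; the stub stands
> in for the existing lookup):
> {code:java}
> // Sketch: bounded retries with simple backoff and a log line per attempt.
> final class CanaryRetrySketch {
>     static String getConnectionUrlWithRetries(int maxRetries) throws Exception {
>         Exception last = null;
>         for (int attempt = 1; attempt <= maxRetries; attempt++) {
>             try {
>                 return getConnectionUrl(); // the existing, occasionally flaky call
>             } catch (Exception e) {
>                 last = e;
>                 System.err.println("getConnectionUrl attempt " + attempt + " failed: " + e);
>                 Thread.sleep(1000L * attempt); // linear backoff between attempts
>             }
>         }
>         throw last;
>     }
> 
>     static String getConnectionUrl() { return "jdbc:phoenix:..."; } // stand-in
> }
> {code}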



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5101) ScanningResultIterator getScanMetrics throws NPE

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5101.
---

Bulk closing jiras for the 4.14.2 release.

> ScanningResultIterator getScanMetrics throws NPE
> 
>
> Key: PHOENIX-5101
> URL: https://issues.apache.org/jira/browse/PHOENIX-5101
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Reid Chan
>Assignee: Karan Mehta
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5101.414-HBase-1.4.001.patch, PHOENIX-5101.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:92)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:79)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
>   at 
> org.apache.phoenix.iterate.MergeSortResultIterator.close(MergeSortResultIterator.java:44)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:176)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:807)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:759)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:206)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:927)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:879)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
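> A sketch of the guard the stack trace points to (simplified, not the actual
> patch): null-check the scan metrics before reading counters.
> {code:java}
> import java.util.Map;
> import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
> 
> // Sketch: scan metrics may never have been enabled/populated for the scan,
> // so treat a null ScanMetrics as "nothing to report" instead of NPE-ing.
> final class ScanMetricsSketch {
>     static void updateMetrics(ScanMetrics scanMetrics) {
>         if (scanMetrics == null) {
>             return; // no metrics for this scan
>         }
>         Map<String, Long> metrics = scanMetrics.getMetricsMap();
>         // ... aggregate counters from `metrics` ...
>     }
> }
> {code}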



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5194) Thread Cache is not updated for Index retries in MutationState#send()#doMutation()

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5194.
---

Bulk closing jiras for the 4.14.2 release.

> Thread Cache is not updated for Index retries in
> MutationState#send()#doMutation()
> -
>
> Key: PHOENIX-5194
> URL: https://issues.apache.org/jira/browse/PHOENIX-5194
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0, 5.0.0, 4.15.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Major
>  Labels: client
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5194-4.x-HBase-1.3.01.patch, 
> PHOENIX-5194-4.x-HBase-1.3.02.patch, PHOENIX-5194-4.x-HBase-1.3.03.patch, 
> PHOENIX-5194-4.x-HBase-1.3.04.patch, PHOENIX-5194-4.x-HBase-1.3.05.patch, 
> PHOENIX-5194-4.x-HBase-1.3.06.patch, PHOENIX-5194.patch
>
>
> When the client is writing and an index failure happens, MutationState#send()
> uses PhoenixIndexFailurePolicy#doBatchWithRetries to apply index mutations. If
> the index region and data table region move during these retries, the
> index/data table region location cache does not get updated. Because of this,
> the client keeps trying to write to the same location and keeps failing. After
> all retries are finished, it simply disables the index and aborts the client
> thread.
> {noformat}
> 2019-03-08 09:41:32,678 WARN [pool-8-thread-25] execute.MutationState - 
> THREAD_ABORT MutationState#send(Iterator) :-
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 36 actions: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
> key=1873403620592046670 
> region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
>  Index update failed
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:112)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
> at 
> org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
> at 
> org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
> at 
> org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
> at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:79)
> at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:385)
> at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:345)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1025)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1727)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1021)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3309)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3076)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:914)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:842)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2397)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35080)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata. key=1873403620592046670 
> region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)

[jira] [Closed] (PHOENIX-4755) Provide an option to plugin custom avatica server config in PQS

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4755.
---

Bulk closing jiras for the 4.14.2 release.

> Provide an option to plugin custom avatica server config in PQS
> ---
>
> Key: PHOENIX-4755
> URL: https://issues.apache.org/jira/browse/PHOENIX-4755
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
>  Labels: queryserver
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4755.001.diff, PHOENIX-4755.002.diff, 
> PHOENIX-4755.003.diff, PHOENIX-4755.4.x-HBase-1.4.patch
>
>
> CALCITE-2294 allows customization of {{AvaticaServerConfiguration}} for
> plugging in new authentication mechanisms.
> Add a new Phoenix-level property and resolve the class using
> {{InstanceResolver}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5131) Make spilling to disk for order/group by configurable

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5131.
---

Bulk closing jiras for the 4.14.2 release.

> Make spilling to disk for order/group by configurable
> -
>
> Key: PHOENIX-5131
> URL: https://issues.apache.org/jira/browse/PHOENIX-5131
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5131-4.x-HBase-1.2.patch, 
> PHOENIX-5131-4.x-HBase-1.3.patch, PHOENIX-5131-4.x-HBase-1.4.patch, 
> PHOENIX-5131-master-v2.patch, PHOENIX-5131-master-v2.patch, 
> PHOENIX-5131-master-v3.patch, PHOENIX-5131-master-v4.patch, 
> PHOENIX-5131-master.patch, PHOENIX-5131-master.patch
>
>
> We've observed that large queries doing ORDER BY/GROUP BY lead to issues on
> the regionserver (crashes, long GC pauses, file handle exhaustion, etc.). We
> should make spilling to disk configurable and, in case it's disabled, fail
> the query once it hits the spilling limit on any of the region servers. Also,
> make the spooling threshold a server-side-only property to prevent clients
> from controlling memory allocation on the regionserver side.
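> A sketch of the intended behavior (property names hypothetical, not the
> actual configuration keys):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> // Sketch (hypothetical property names): when spilling is disabled, fail the
> // query at the spooling threshold instead of writing to regionserver disk.
> class SpillPolicySketch {
>     private final long thresholdBytes;
>     private final boolean spoolingEnabled;
> 
>     SpillPolicySketch(Configuration conf) {
>         thresholdBytes = conf.getLong("phoenix.query.spoolThresholdBytes", 20L << 20);
>         spoolingEnabled = conf.getBoolean("phoenix.query.orderBy.spooling.enabled", true);
>     }
> 
>     void onBuffered(long bytesBuffered) {
>         if (bytesBuffered > thresholdBytes && !spoolingEnabled) {
>             throw new IllegalStateException(
>                 "In-memory threshold exceeded and spooling to disk is disabled");
>         }
>     }
> }
> {code}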



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5195) PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5195.
---

Bulk closing jiras for the 4.14.2 release.

> PHERF:- Handle batch failure in connection.commit() in  
> WriteWorkload#upsertData
> 
>
> Key: PHOENIX-5195
> URL: https://issues.apache.org/jira/browse/PHOENIX-5195
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0, 5.0.0, 4.15.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Minor
>  Labels: pherf
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5195-4.x-HBase-1.3.01.patch
>
>
> In the Pherf tool, if WriteWorkload#upsertBatch hits any exception in
> connection.commit() during batch writes, Pherf does not handle the exception
> and aborts the thread.
> Ref: [PHOENIX-5092|https://issues.apache.org/jira/browse/PHOENIX-5092]
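> A sketch of the handling (hypothetical shape, not the actual patch):
> {code:java}
> import java.sql.Connection;
> import java.sql.SQLException;
> 
> // Sketch: retry the commit a bounded number of times and surface the
> // failure instead of letting the writer thread die silently.
> final class CommitRetrySketch {
>     static void commitWithRetries(Connection conn, int maxRetries) throws SQLException {
>         SQLException last = null;
>         for (int i = 0; i < maxRetries; i++) {
>             try {
>                 conn.commit();
>                 return;
>             } catch (SQLException e) {
>                 last = e; // log and retry rather than aborting the thread
>             }
>         }
>         throw last;
>     }
> }
> {code}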



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5266) Client can only write on Index Table and skip data table if failure happens because of region split/move etc

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5266.
---

Bulk closing jiras for the 4.14.2 release.

> Client can only write on Index Table and skip data table if failure happens 
> because of region split/move etc
> 
>
> Key: PHOENIX-5266
> URL: https://issues.apache.org/jira/browse/PHOENIX-5266
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0, 4.14.2
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5266-4.x-HBase-1.3.01.patch, 
> PHOENIX-5266-4.x-HBase-1.3.02.patch, PHOENIX-5266.01.patch, 
> PHOENIX-5266.patch, PHOENIX-5266.patch
>
>
> With the Phoenix 4.14.1 client, there is a scenario where the client skips
> the data table write but does a successful index table write. In this case,
> we should treat it as a data loss scenario.
>  
> Relevant code path :-
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L994-L1043]
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1089-L1109]
>  
> Here is what happens :-
>  * Consider the below assumptions for the scenario :-
>  ** max no. of rows in a single batch = 100
>  ** max size of a batch = 2 MB
>  * When the client faces SQLException code 1121, it sets the variable
> shouldRetryIndexedMutation=true.
>  * In scenarios where the client sends a batch of only 100 rows as per
> configuration, but the batch size is >2 MB, MutationState.java#991 will split
> this 100-row batch into multiple smaller batches which are <2 MB.
>  ** MutationState.java#991 :-
> [https://github.com/apache/phoenix/blob/4.x-HBase-1.3/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L991]
>  * Suppose there are 5 batches of 20 rows and the client faces the 1121
> SQLExceptionCode on the 2nd batch; then it sets
> shouldRetryIndexedMutation=true and retries all 5 batches again with only
> index updates. This results in rows missing from the data table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5055) Split mutations batches probably affects correctness of index data

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5055.
---

Bulk closing jiras for the 4.14.2 release.

> Split mutations batches probably affects correctness of index data
> --
>
> Key: PHOENIX-5055
> URL: https://issues.apache.org/jira/browse/PHOENIX-5055
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 5.1.0, 4.14.2, 5.0.1
>
> Attachments: ConcurrentTest.java, 
> PHOENIX-5055-4.x-HBase-1.4-v2.patch, PHOENIX-5055-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5055-4.x-HBase-1.4-v4.patch, PHOENIX-5055-v4.x-HBase-1.4.patch
>
>
> In order to get more performance, we split the list of mutations into
> multiple batches in MutationState. A single upsert SQL with some null values
> produces two types of KeyValues (Put and DeleteColumn); these KeyValues
> should have the same timestamp so that the operation stays atomic for the
> corresponding row key.
> [^ConcurrentTest.java] produces some random upsert/delete SQL and executes it
> concurrently; some SQL snippets follow:
> {code:java}
> 1149:UPSERT INTO ConcurrentReadWritTest(A,C,E,F,G) VALUES 
> ('3826','2563','3052','3170','3767');
> 1864:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('2563','4926','3526','678',null,null,'1617');
> 2332:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,E,F,G) VALUES 
> ('1052','2563','1120','2314','1456',null,null);
> 2846:UPSERT INTO ConcurrentReadWritTest(A,B,C,D,G) VALUES 
> ('1922','146',null,'469','2563');
> 2847:DELETE FROM ConcurrentReadWritTest WHERE A = '2563’;
> {code}
> Found incorrect indexed data for the index tables via sqlline.
> !https://gw.alicdn.com/tfscom/TB1nSDqpxTpK1RjSZFGXXcHqFXa.png|width=665,height=400!
> Debugging the batched mutations on the server side showed that the
> DeleteColumns and Puts for a single upsert were split into different batches,
> and the DeleteFamily was executed by another thread, so the DeleteColumn's
> timestamp ended up larger than the DeleteFamily's under multiple threads.
> !https://gw.alicdn.com/tfscom/TB1frHmpCrqK1RjSZK9XXXyypXa.png|width=901,height=120!
>  
> Running the following:
> {code:java}
> conn.createStatement().executeUpdate( "CREATE TABLE " + tableName + " (" + "A 
> VARCHAR NOT NULL PRIMARY KEY," + "B VARCHAR," + "C VARCHAR," + "D VARCHAR) 
> COLUMN_ENCODED_BYTES = 0"); 
> conn.createStatement().executeUpdate("CREATE INDEX " + indexName + " on " + 
> tableName + " (C) INCLUDE(D)"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A2','B2','C2','D2')"); 
> conn.createStatement().executeUpdate("UPSERT INTO " + tableName + "(A,B,C,D) 
> VALUES ('A3','B3', 'C3', null)");
> {code}
> dump IndexMemStore:
> {code:java}
> hbase.index.covered.data.IndexMemStore(117): 
> Inserting:\x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(133): Current kv state: 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:B/1542190446167/Put/vlen=2/seqid=5/value=B3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:C/1542190446167/Put/vlen=2/seqid=5/value=C3 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:D/1542190446218/DeleteColumn/vlen=0/seqid=0/value= 
> phoenix.hbase.index.covered.data.IndexMemStore(135): KV: 
> \x01A3/0:_0/1542190446167/Put/vlen=1/seqid=5/value=x 
> phoenix.hbase.index.covered.data.IndexMemStore(137): == END MemStore 
> Dump ==
> {code}
>  
> The DeleteColumn's timestamp is larger than that of the other mutations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5126) RegionScanner leak leading to store files not getting cleared

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5126.
---

Bulk closing jiras for the 4.14.2 release.

> RegionScanner leak leading to store files not getting cleared
> -
>
> Key: PHOENIX-5126
> URL: https://issues.apache.org/jira/browse/PHOENIX-5126
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5126-master.patch
>
>
> Having a RegionScanner open indefinitely (due to any error condition before
> the close) leads to the store files not getting cleared after compaction,
> since the already-open scanner still references the store files. Any
> subsequently flushed files for the region also get opened by the scanner and
> won't be cleared.
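> A sketch of the defensive pattern involved (illustration only, not the
> actual patch): always close the RegionScanner so an error path cannot leave
> it pinning compacted store files.
> {code:java}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.hbase.Cell;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.regionserver.Region;
> import org.apache.hadoop.hbase.regionserver.RegionScanner;
> 
> final class ScannerCloseSketch {
>     static void scanAll(Region region, Scan scan) throws IOException {
>         RegionScanner scanner = region.getScanner(scan);
>         try {
>             List<Cell> cells = new ArrayList<Cell>();
>             boolean more;
>             do {
>                 more = scanner.next(cells);
>                 // ... process cells ...
>                 cells.clear();
>             } while (more);
>         } finally {
>             scanner.close(); // releases store file references even when next() throws
>         }
>     }
> }
> {code}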



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5080) Index becomes Active during Partial Index Rebuilder if Index Failure happens

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5080.
---

Bulk closing jiras for the 4.14.2 release.

> Index becomes Active during Partial Index Rebuilder if Index Failure happens
> 
>
> Key: PHOENIX-5080
> URL: https://issues.apache.org/jira/browse/PHOENIX-5080
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5080-4.x-HBase-1.3.01.patch, 
> PHOENIX-5080-4.x-HBase-1.3.02.patch, PHOENIX-5080-4.x-HBase-1.3.02.patch, 
> PHOENIX-5080-4.x-HBase-1.3.03.patch, PHOENIX-5080-4.x-HBase-1.3.04.patch, 
> PHOENIX-5080-4.x-HBase-1.3.05.patch, PHOENIX-5080-4.x-HBase-1.3.06.patch, 
> PHOENIX-5080-4.x-HBase-1.3.06.patch, PHOENIX-5080.01.patch, 
> PHOENIX-5080.01.patch
>
>
> After PHOENIX-4130 and PHOENIX-4600, if there is an index failure during a
> partial index rebuild, the rebuilder will retry writing the index updates. If
> it succeeds, it will transition the index from INACTIVE to ACTIVE, even
> before the rebuilder finishes.
> Here is where it goes wrong, I think :-
> {code:java}
> PhoenixIndexFailurePolicy.java :- 
> public static void doBatchWithRetries(MutateCommand mutateCommand,
>             IndexWriteException iwe, PhoenixConnection connection, 
> ReadOnlyProps config) throws IOException {
> 
> while (canRetryMore(numRetry++, maxTries, canRetryUntil)) {
> ...
> handleIndexWriteSuccessFromClient(iwe, connection);
> ...
> }
> }
> 
> private static void handleIndexWriteSuccessFromClient(IndexWriteException 
> indexWriteException, PhoenixConnection conn) {
>         handleExceptionFromClient(indexWriteException, conn, 
> PIndexState.ACTIVE);
> }
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5199) Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5199.
---

Bulk closing jiras for the 4.14.2 release.

> Pherf overrides user provided properties like dataloader threadpool, monitor 
> frequency etc with pherf.properties
> 
>
> Key: PHOENIX-5199
> URL: https://issues.apache.org/jira/browse/PHOENIX-5199
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0, 4.14.1
>Reporter: Mihir Monani
>Assignee: Mihir Monani
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5199-4.x-HBase-1.3.01.patch
>
>
> The Pherf tool offers options to provide runtime arguments like the
> dataloader thread pool size and monitor frequency.
> Currently, the Pherf tool overrides these user-provided values with the
> values in pherf.properties.
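> A sketch of the intended precedence (hypothetical helper, not the actual
> patch): treat pherf.properties as defaults so explicit runtime arguments win.
> {code:java}
> import java.util.Properties;
> 
> // Sketch: file values first, user-provided values take precedence.
> final class PropertyMergeSketch {
>     static Properties merge(Properties pherfFileProps, Properties userArgs) {
>         Properties merged = new Properties();
>         merged.putAll(pherfFileProps); // defaults from pherf.properties
>         merged.putAll(userArgs);       // runtime arguments override them
>         return merged;
>     }
> }
> {code}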



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5018) Index mutations created by UPSERT SELECT will have wrong timestamps

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5018.
---

Bulk closing jiras for the 4.14.2 release.

> Index mutations created by UPSERT SELECT will have wrong timestamps
> ---
>
> Key: PHOENIX-5018
> URL: https://issues.apache.org/jira/browse/PHOENIX-5018
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5018.4.x-HBase-1.3.001.patch, 
> PHOENIX-5018.4.x-HBase-1.3.002.patch, PHOENIX-5018.4.x-HBase-1.4.001.patch, 
> PHOENIX-5018.4.x-HBase-1.4.002.patch, PHOENIX-5018.master.001.patch, 
> PHOENIX-5018.master.002.patch, PHOENIX-5018.master.003.patch, 
> PHOENIX-5018.master.004.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When doing a full rebuild (or initial async build) of a local or global index 
> using IndexTool and PhoenixIndexImportDirectMapper, or doing a synchronous 
> initial build of a global index using the index create DDL, we generate the 
> index mutations by using an UPSERT SELECT query from the base table to the 
> index.
> The timestamps of the mutations use the default HBase behavior, which is to 
> take the current wall clock. However, the timestamp of an index KeyValue 
> should use the timestamp of the initial KeyValue in the base table.
> Having base table and index timestamps out of sync can cause all sorts of
> weird side effects, such as the base table having data with an expired TTL
> that isn't expired in the index yet. Also, inserting old mutations with new
> timestamps may overwrite data that was newly written by the regular write
> path during the index build, which would lead to data loss and inconsistency
> issues.
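> A sketch of the principle (illustration only, not the actual patch): carry
> the base-table cell's timestamp onto the index mutation instead of letting
> HBase stamp it with the current wall clock.
> {code:java}
> import org.apache.hadoop.hbase.Cell;
> import org.apache.hadoop.hbase.client.Put;
> 
> final class IndexTimestampSketch {
>     static Put indexPut(byte[] indexRowKey, byte[] family, byte[] qualifier,
>                         byte[] value, Cell sourceCell) {
>         Put put = new Put(indexRowKey);
>         // same timestamp as the originating base-table KeyValue
>         put.addColumn(family, qualifier, sourceCell.getTimestamp(), value);
>         return put;
>     }
> }
> {code}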



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5111) IndexTool gives NPE when trying to do a direct build without an output-path set

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5111.
---

Bulk closing jiras for the 4.14.2 release.

> IndexTool gives NPE when trying to do a direct build without an output-path 
> set
> ---
>
> Key: PHOENIX-5111
> URL: https://issues.apache.org/jira/browse/PHOENIX-5111
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Gokcen Iskender
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5111.patch, PHOENIX-5111.patch
>
>
> The IndexTool has several modes. If the -direct or -partial-rebuild flags are 
> not set, the tool assumes the user wants to rebuild the index by creating 
> HFiles and then bulk-loading them back into HBase, and requires an extra 
> -output-path flag to determine where the temporary HFiles should live. 
> In practice, we've found that -direct mode (which loads using HBase Puts) is 
> quicker. However, even though there's logic to not require the -output-path 
> flag when -direct mode is chosen, the IndexTool will throw an NPE if it's not 
> present. 
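> A sketch of the argument validation implied by the description (flag names
> from above; structure hypothetical): require an output path only for the
> bulk-load mode, and never touch it in -direct mode.
> {code:java}
> final class IndexToolArgsSketch {
>     static void validate(boolean direct, boolean partialRebuild, String outputPath) {
>         boolean bulkLoad = !direct && !partialRebuild;
>         if (bulkLoad && outputPath == null) {
>             throw new IllegalArgumentException("-output-path is required for bulk-load mode");
>         }
>         // -direct mode must never dereference outputPath
>     }
> }
> {code}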



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5243) PhoenixResultSet#next() closes the result set if scanner returns null

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5243.
---

Bulk closing jiras for the 4.14.2 release.

> PhoenixResultSet#next() closes the result set if scanner returns null
> -
>
> Key: PHOENIX-5243
> URL: https://issues.apache.org/jira/browse/PHOENIX-5243
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5243.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5243.4.x-HBase-1.3.v2.patch, PHOENIX-5243.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5243.4.x-HBase-1.3.v4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5184) HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5184.
---

Bulk closing jiras for the 4.14.2 release.

> HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and 
> PhoenixConfigurationUtil
> -
>
> Key: PHOENIX-5184
> URL: https://issues.apache.org/jira/browse/PHOENIX-5184
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5184-4.x-HBase-1.3-v1.patch, 
> PHOENIX-5184-4.x-HBase-1.3.patch, PHOENIX-5184-v1.patch, PHOENIX-5184.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I was debugging a connection leak issue and ran into a few areas where there 
> are connection leaks. I decided to take a broader look overall and see if 
> there were other places where we leak connections and found some candidates. 
> This is by no means an exhaustive search for connection leaks.
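> A sketch of the general remedy (plain JDBC, not the specific call sites):
> try-with-resources guarantees the Phoenix connection, statement, and result
> set are closed on every code path.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> 
> final class QuerySketch {
>     static void runQuery(String url, String sql) throws SQLException {
>         try (Connection conn = DriverManager.getConnection(url);
>              Statement stmt = conn.createStatement();
>              ResultSet rs = stmt.executeQuery(sql)) {
>             while (rs.next()) {
>                 // ... consume the row ...
>             }
>         } // closed here even if an exception was thrown
>     }
> }
> {code}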



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-5025) Tool to clean up orphan views

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-5025.
---

Bulk closing jiras for the 4.14.2 release.

> Tool to clean up orphan views
> -
>
> Key: PHOENIX-5025
> URL: https://issues.apache.org/jira/browse/PHOENIX-5025
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 5.0.0, 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5025.master.0001.patch, 
> PHOENIX-5025.master.0002.patch, PHOENIX-5025.master.patch
>
>
> A view without its base table is an orphan view. Since views are virtual
> tables and their data is stored in their base tables, they are useless when
> they become orphans. A base table can have child views, grandchild views, and
> so on. Due to various reasons/bugs, when a base table was dropped, its views
> were not properly cleaned up in the past. For example, the drop table code
> did not support cleaning up grandchild views. This has recently been fixed by
> PHOENIX-4764. Although PHOENIX-4764 prevents new orphan views due to table
> drop operations, it does not clean up existing orphan views. It is also
> believed that when the system catalog table was split due to a bug in the
> past, it contributed to creating orphan views, as Phoenix did not support a
> splittable system catalog. Therefore, Phoenix needs a tool to clean up orphan
> views.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4989) Include disruptor jar in shaded dependency

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4989.
---

Bulk closing jiras for the 4.14.2 release.

> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include the disruptor jar in the shaded dependency, as HBase ships a
> different version of it.
> As a result, we are not able to run any MR job like IndexScrutiny or
> IndexTool using Phoenix on HBase 1.3 onwards clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (PHOENIX-4834) PhoenixMetricsLog interface methods should not depend on specific logger

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva closed PHOENIX-4834.
---

Bulk closing jiras for the 4.14.2 release.

> PhoenixMetricsLog interface methods should not depend on specific logger
> 
>
> Key: PHOENIX-4834
> URL: https://issues.apache.org/jira/browse/PHOENIX-4834
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4834.4.x-HBase-1.4.001.patch, 
> PHOENIX-4834.4.x-HBase-1.4.002.patch
>
>
> {{PhoenixMetricsLog}} is an interface that provides a wrapper around various 
> JDBC objects with logging functionality upon close/commit. The methods take 
> in a {{Logger}} as input, specifically an {{org.slf4j.Logger}}. A better 
> approach is for the interface to just pass the metrics and allow the user 
> to configure and use whatever logging library they want.
> This Jira will deprecate the older methods by providing a default 
> implementation for them and add the new methods.
> Ideally we would have provided default interface implementations, but since 
> we are on Java 7, we are unable to do that.
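
A sketch of the shape of the change (the method and type names here are 
illustrative, not the exact PhoenixMetricsLog signatures):

{code}
// Illustrative only: the old style couples the interface to slf4j by
// making callers pass a Logger; the new style hands back the raw
// metrics and lets the implementation log with whatever library it
// prefers.
import java.util.Map;
import org.slf4j.Logger;

interface OldStyleMetricsLog {
    // Deprecated shape: forces org.slf4j.Logger on every implementor.
    void logReadMetrics(Logger logger, Map<String, Long> metrics);
}

interface NewStyleMetricsLog {
    // New shape: no logger parameter; the caller decides how (and
    // whether) to record the metrics.
    void logReadMetrics(Map<String, Long> metrics);
}
{code}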



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5122:

Fix Version/s: (was: 4.14.2)
   4.14.3

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 5.0.1, 4.14.3
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> +-------+--------+
> {color:#FF}+*No rows selected (0.033 seconds)*+{color}
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5300) NoopStatisticsCollector shouldn't scan any rows.

2019-05-30 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5300:

Fix Version/s: (was: 4.14.2)
   4.14.3

> NoopStatisticsCollector shouldn't scan any rows.
> 
>
> Key: PHOENIX-5300
> URL: https://issues.apache.org/jira/browse/PHOENIX-5300
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Fix For: 4.14.3
>
> Attachments: PHOENIX-5300-4.14-HBase-1.4.patch, 
> PHOENIX-5300-4.14-HBase-1.4.patch
>
>
> Today, if we disable stats calculation via the 
> {{phoenix.stats.collection.enabled}} property, Phoenix creates a 
> {{NoopStatisticsCollector}}. If someone then calls "UPDATE STATISTICS 
> ", it creates a NoopStatisticsCollector, scans the whole table and 
> does nothing. This is fixed in 4.15 via PHOENIX-4009. If we don't want to 
> backport PHOENIX-4009, we can fix this bug just in 4.14.
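
To make the scenario concrete, a minimal sketch (the table name is 
hypothetical; the property name comes from the description above, and it is 
normally a server-side setting):

{code}
// Sketch of the buggy scenario: stats collection is disabled, yet
// UPDATE STATISTICS still drives a full-table scan through the no-op
// collector. MY_TABLE is a hypothetical table.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NoopStatsRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Shown client-side purely for illustration.
        props.setProperty("phoenix.stats.collection.enabled", "false");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:localhost", props)) {
            // Before the fix this scans every row of MY_TABLE and then
            // throws the work away, since the collector is a no-op.
            conn.createStatement().execute("UPDATE STATISTICS MY_TABLE");
        }
    }
}
{code}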



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-05-29 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Fix Version/s: 5.1.0
   4.15.0

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]
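
A hedged sketch of what a client-side lookup could look like, assuming the 
4.x layout where child links are the LINK_TYPE = 4 rows of SYSTEM.CATALOG 
(this is an illustration, not the actual ViewFinder code):

{code}
// Read the child-link rows for a given parent table directly from the
// client instead of scanning inside a coprocessor. The LINK_TYPE = 4
// convention and the use of COLUMN_FAMILY for the child name are
// assumptions about the 4.x catalog layout.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FindChildViews {
    public static void main(String[] args) throws Exception {
        String parent = args.length > 0 ? args[0] : "MY_TABLE"; // hypothetical
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT COLUMN_FAMILY FROM SYSTEM.CATALOG "
                 + "WHERE TABLE_NAME = ? AND LINK_TYPE = 4")) {
            ps.setString(1, parent);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("child view: " + rs.getString(1));
                }
            }
        }
    }
}
{code}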



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-05-26 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-5122:
-

[~jisaac] I think the way the bitset is stored is not backward compatible with 
a 4.13 client. 

To Repro:
Server on 4.14
With global connection:
{code}
CREATE TABLE IF NOT EXISTS CED (
OID CHAR(15) NOT NULL, 
KP CHAR(3) NOT NULL, 
CONSTRAINT PK PRIMARY KEY (
OID, 
KP
)
) 
{code}

With tenant specific connection and client on 4.13:
{code}
CREATE VIEW IF NOT EXISTS "z00" (COL1 VARCHAR NOT NULL, COL2 DECIMAL NOT NULL, 
COL3 VARCHAR CONSTRAINT PK PRIMARY KEY (COL1, COL2)) AS SELECT * FROM CED WHERE 
KP = 'z00'; 

UPSERT INTO "z00" (COL1, COL2, COL3) VALUES ('TEST', 25, 'Test Value');
-- following returns no rows
SELECT * FROM "z00" WHERE (COL2, COL1) IN ((25, 'TEST'),(30,'TEST'));
{code}

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.13.0, 4.13.1, 4.15.0, 4.14.1, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> +-------+--------+
> {color:#FF}+*No rows selected (0.033 seconds)*+{color}
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}

[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-05-26 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5122:

Fix Version/s: (was: 4.14.1)
   (was: 4.13.1)
   (was: 4.13.0)

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2, 5.0.1
>
> Attachments: PHOENIX-5122-4.x-HBase-1.3.patch, PHOENIX-5122.patch, 
> Screen Shot 2019-03-04 at 6.17.42 PM.png, Screen Shot 2019-03-04 at 6.21.10 
> PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> {noformat}
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> +-------+--------+
> {color:#FF}+*No rows selected (0.033 seconds)*+{color}
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  {noformat}
> *4.14.1 client -> 4.14.1 server* 
> {noformat}
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +-------+--------+
> |  OID  |  CODE  |
> +-------+--------+
> | 0002  | v0002  |
> | 0001  | v0001  |
> +-------+--------+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

